The Consensus Operator for Combining Beliefs

Abstract: The consensus operator provides a method for combining possibly conflicting beliefs within the Dempster-Shafer belief theory, and represents an alternative to the traditional Dempster's rule. This paper describes how the consensus operator can be applied to dogmatic conflicting opinions, i.e., when the degree of conflict is very high. It overcomes shortcomings of Dempster's rule and other operators that have been proposed for combining possibly conflicting beliefs.

Keywords: Dempster's rule; belief; conflict; subjective logic; consensus operator

1 Introduction
Ever since the publication of Shafer's book A Mathematical Theory of Evidence [1]
there has been continuous controversy around the so-called Dempster's rule. The
purpose of Dempster's rule is to combine two conflicting beliefs into a single belief
that reflects the two conflicting beliefs in a fair and equal way.
Dempster's rule has been criticised mainly because highly conflicting beliefs tend
to produce counterintuitive results. This has been formulated in the form of examples
by Zadeh [2] and Cohen [3] among others. The problem with Dempster's
rule is due to its normalisation which redistributes conflicting belief masses to non-conflicting
beliefs, and thereby tends to eliminate any conflicting characteristics
in the resulting belief mass distribution. An alternative called the non-normalised
Dempster's rule proposed by Smets [4] avoids this particular problem by allocating
all conflicting belief masses to the empty set. Smets explains this by arguing that
the presence of highly conflicting beliefs indicates that some possible event must
have been overlooked (the open world assumption) and therefore is missing in the
frame of discernment. The idea is that conflicting belief masses should be allocated
to this missing (empty) event. Smets has also proposed to interpret the amount of
mass allocated to the empty set as a measure of conflict between separate
beliefs [5].
In this paper we describe an alternative rule for combining conflicting belief functions
called the consensus operator. The consensus operator forms part of subjective
logic which is described in [6]. Our consensus operator is different from
Dempster's rule but has the same purpose; namely of combining possibly conflicting
beliefs. The definition of the consensus operator in [6] and earlier publications
does not cover combination of conflicting dogmatic beliefs, i.e. highly conflicting
beliefs. This paper extends the definition of the consensus operator to also cover
such cases. A comparison between the consensus operator and the two variants of
Dempster's rule is provided in the form of examples.
Subjective logic is a framework for artificial reasoning with uncertain beliefs which
for example can be applied to legal reasoning [7] and authentication in computer
networks [8]. In subjective logic, beliefs must be expressed on binary frames of dis-
cernment, and coarsening is necessary if the original frame of discernment is larger
than binary. Section 2 describes some basic elements from the Dempster-Shafer
theory as well as some new concepts related to coarsening. Section 3 describes the
opinion metric which is the binary belief representation used in subjective logic.
Section 4 describes the consensus operator which operates on opinions and section
5 provides a comparison between the consensus operator and the two variants of
Dempster's rule. A discussion of our results is provided in section 6.
2 Representing Uncertain Beliefs
The first step in applying the Dempster-Shafer belief model [1] is to define a set of possible states of a given system, called the frame of discernment, denoted by Θ. The powerset of Θ, denoted by 2^Θ, contains all possible unions of the sets in Θ, including Θ itself. Elementary sets in a frame of discernment will be called atomic sets because they do not contain subsets. It is assumed that only one atomic set can be true at any one time. If a set is assumed to be true, then all supersets are considered true as well.
An observer who believes that one or several sets in the powerset of Θ might be true can assign belief masses to these sets. Belief mass on an atomic set x ∈ 2^Θ is interpreted as the belief that the set in question is true. Belief mass on a non-atomic set x ∈ 2^Θ is interpreted as the belief that one of the atomic sets it contains is true, but that the observer is uncertain about which of them is true. The following definition is central in the Dempster-Shafer theory.
Definition 1 (Belief Mass Assignment) Let Θ be a frame of discernment. If with each subset x ∈ 2^Θ a number m_Θ(x) is associated such that:
1: m_Θ(x) ≥ 0
2: m_Θ(∅) = 0
3: Σ_{x ∈ 2^Θ} m_Θ(x) = 1
then m_Θ is called a belief mass assignment (called basic probability assignment in [1]) on Θ, or BMA for short. For each subset x ∈ 2^Θ, the number m_Θ(x) is called the belief mass (called basic probability number in [1]) of x.
A belief mass m (x) expresses the belief assigned to the set x and does not express
any belief in subsets of x in particular. A BMA is called dogmatic if m_Θ(Θ) = 0 (see [5] p.277), because the total amount of belief mass has been committed.
In contrast to belief mass, the belief in a set must be interpreted as an observer's
total belief that a particular set is true. The next definition from the Dempster-Shafer
theory will make it clear that belief in x not only depends on belief mass assigned
to x but also on belief mass assigned to subsets of x.
Definition 2 (Belief Function) Let Θ be a frame of discernment, and let m_Θ be a BMA on Θ. Then the belief function corresponding with m_Θ is the function b : 2^Θ → [0,1] defined by:
b(x) = Σ_{y ⊆ x} m_Θ(y)
Similarly to belief, an observer's disbelief must be interpreted as the total belief
that a set is not true. The following definition is ours.
Definition 3 (Disbelief Function) Let Θ be a frame of discernment, and let m_Θ be a BMA on Θ. Then the disbelief function corresponding with m_Θ is the function d : 2^Θ → [0,1] defined by:
d(x) = Σ_{y ∩ x = ∅} m_Θ(y)
The disbelief in x is equal to the belief in x̄ (the complement of x), and corresponds to the doubt of x in
Shafer's book. However, we choose to use the term `disbelief' because we feel that
for example the case when it is certain that a set is false can better be described by
'total disbelief' than by `total doubt'. Our next definition expresses uncertainty regarding
a given set as the sum of belief masses on supersets or on partly overlapping
sets of x.
Definition 4 (Uncertainty Function) Let Θ be a frame of discernment, and let m_Θ be a BMA on Θ. Then the uncertainty function corresponding with m_Θ is the function u : 2^Θ → [0,1] defined by:
u(x) = Σ_{y ∩ x ≠ ∅, y ⊈ x} m_Θ(y)
The sum of the belief, disbelief and uncertainty functions is equal to the sum of the belief masses in a BMA, which according to Definition 1 is equal to 1. The following equality is therefore trivial to prove:
b(x) + d(x) + u(x) = 1.   (1)
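To make Definitions 1-4 concrete, the sketch below represents a BMA over a small frame of discernment as a Python dictionary keyed by frozensets and computes the belief, disbelief and uncertainty functions. The frame {a, b, c} and the mass values are illustrative assumptions, not taken from the paper.

```python
# Illustrative frame of discernment and BMA (Definition 1):
# masses are non-negative, m(emptyset) = 0, and they sum to 1.
THETA = frozenset({"a", "b", "c"})
m = {
    frozenset({"a"}): 0.5,
    frozenset({"b", "c"}): 0.3,
    THETA: 0.2,
}

assert abs(sum(m.values()) - 1.0) < 1e-9          # condition 3 of Definition 1
assert all(v >= 0 for v in m.values())            # condition 1
assert m.get(frozenset(), 0.0) == 0.0             # condition 2

def belief(x, m):
    """b(x): total mass on subsets of x (Definition 2)."""
    return sum(v for y, v in m.items() if y <= x)

def disbelief(x, m):
    """d(x): total mass on sets disjoint from x (Definition 3)."""
    return sum(v for y, v in m.items() if not (y & x))

def uncertainty(x, m):
    """u(x): mass on sets overlapping x without being contained in it (Definition 4)."""
    return sum(v for y, v in m.items() if (y & x) and not (y <= x))

x = frozenset({"a"})
b, d, u = belief(x, m), disbelief(x, m), uncertainty(x, m)
print(b, d, u)                      # 0.5 0.3 0.2
assert abs(b + d + u - 1.0) < 1e-9  # Eq.(1)
```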
For the purpose of deriving probability expectation values of sets in 2^Θ, we will show that knowing the relative number of atomic sets is also needed in addition to belief masses. For any particular set x the atomicity of x is the number of atomic sets it contains, denoted by |x|. If Θ is a frame of discernment, the atomicity of Θ is equal to the total number of atomic sets. Similarly, if x, y ∈ 2^Θ then the overlap between x and y relative to y can be expressed in terms of atomic sets. Our next definition captures this idea of relative atomicity:
Definition 5 (Relative Atomicity) Let Θ be a frame of discernment and let x, y ∈ 2^Θ. Then the relative atomicity of x to y is the function a : 2^Θ × 2^Θ → [0,1] defined by:
a(x/y) = |x ∩ y| / |y|
It can be observed that (x ∩ y = ∅) ⇒ (a(x/y) = 0) and that (y ⊆ x) ⇒ (a(x/y) = 1). In all other cases the relative atomicity will be a value between 0 and 1. The relative atomicity of an atomic set to its frame of discernment, denoted by a(x/Θ), can simply be written as a(x). If nothing else is specified, the relative atomicity of a set then refers to the frame of discernment.
A frame of discernment with a corresponding BMA can be used to determine a
probability expectation value for any given set. The greater the relative atomicity of
a particular set the more the uncertainty function will contribute to the probability
expectation value of that set.
Definition 6 (Probability Expectation) Let Θ be a frame of discernment with BMA m_Θ, then the probability expectation function corresponding with m_Θ is the function E : 2^Θ → [0,1] defined by:
E(x) = Σ_y m_Θ(y) a(x/y)
Definition 6 is equivalent to the pignistic probability justified by e.g. Smets &
Kennes in [9], and corresponds to the principle of insufficient reason: a belief mass
assigned to the union of n atomic sets is split equally among these n sets.
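A small sketch of Definitions 5 and 6: relative atomicity and the (pignistic) probability expectation. The frame and masses are the same illustrative values assumed in the previous listing.

```python
THETA = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.5, frozenset({"b", "c"}): 0.3, THETA: 0.2}

def relative_atomicity(x, y):
    """a(x/y) = |x n y| / |y| (Definition 5)."""
    return len(x & y) / len(y)

def expectation(x, m):
    """E(x) = sum_y m(y) * a(x/y) (Definition 6): uncertain mass on y is
    split equally among the atomic sets of y (principle of insufficient reason)."""
    return sum(v * relative_atomicity(x, y) for y, v in m.items())

x = frozenset({"a"})
print(relative_atomicity(x, THETA))   # 1/3
print(expectation(x, m))              # 0.5 + 0.3*0 + 0.2*(1/3) = 0.5667 approx.
```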
In order to simplify the representation of uncertain beliefs for particular sets we
will define a focused frame of discernment which will always be binary, i.e. it will
only contain (focus on) one particular set and its complement. The focused frame of
discernment and the corresponding BMA will for the set in focus produce the same
belief, disbelief and uncertainty functions as the original frame of discernment and
BMA. The definitions of the focused frame of discernment and the focused BMA
are given below.
Definition 7 (Focused Frame of Discernment) Let Θ be a frame of discernment and let x ∈ 2^Θ. The binary frame of discernment denoted by Θ̃^x containing only x and x̄, where x̄ is the complement of x in Θ, is then called a focused frame of discernment with focus on x.
Definition 8 (Focused Belief Mass Assignment) Let Θ be a frame of discernment with BMA m_Θ where b(x), d(x) and u(x) are the belief, disbelief and uncertainty functions of x in 2^Θ, and let a(x) be the real relative atomicity of x in Θ. Let Θ̃^x be the focused frame of discernment with focus on x. The corresponding focused BMA m_{Θ̃^x} and relative atomicity a_{Θ̃^x}(x) on Θ̃^x are defined according to:
m_{Θ̃^x}(x) = b(x),  m_{Θ̃^x}(x̄) = d(x),  m_{Θ̃^x}(Θ̃^x) = u(x),
a_{Θ̃^x}(x) = (E(x) − b(x)) / u(x)  for u(x) ≠ 0,
a_{Θ̃^x}(x) = a(x)  for u(x) = 0.   (2)
When the original frame of discernment contains more than 2 atomic sets, the relative atomicity of x in the focused frame of discernment Θ̃^x is in general different from 1/2, although Θ̃^x by definition contains exactly two sets. The focused relative atomicity of x in Θ̃^x is defined so that the probability expectation value of x is equal in Θ and Θ̃^x, and the expression for a_{Θ̃^x}(x) can be determined by using Definition 6. A focused relative atomicity represents the weighted average of the relative atomicities of x to all other sets as a function of their uncertainty belief mass. Working with focused BMAs makes it possible to represent the belief function of any set in 2^Θ using a binary frame of discernment, making the notation very compact.
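The focusing step of Definitions 7 and 8 can be sketched as follows: given a BMA on any frame, the opinion on a chosen set x is obtained from b(x), d(x), u(x) and the focused relative atomicity (E(x) − b(x))/u(x). The helper functions repeat the ones sketched above, and the frame and masses remain illustrative assumptions.

```python
THETA = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.5, frozenset({"b", "c"}): 0.3, THETA: 0.2}

def belief(x, m):      return sum(v for y, v in m.items() if y <= x)
def disbelief(x, m):   return sum(v for y, v in m.items() if not (y & x))
def uncertainty(x, m): return sum(v for y, v in m.items() if (y & x) and not (y <= x))
def expectation(x, m): return sum(v * len(x & y) / len(y) for y, v in m.items())

def focus(x, m, theta):
    """Return the opinion (b, d, u, a) on x in the focused frame {x, not-x} (Definition 8)."""
    b, d, u = belief(x, m), disbelief(x, m), uncertainty(x, m)
    if u != 0:
        a = (expectation(x, m) - b) / u   # Eq.(2): preserves E(x) under focusing
    else:
        a = len(x) / len(theta)           # dogmatic case: use the real relative atomicity
    return (b, d, u, a)

print(focus(frozenset({"a"}), m, THETA))  # (0.5, 0.3, 0.2, 0.333...)
```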
3 The Opinion Space
For purpose of having a simple and intuitive representation of uncertain beliefs we
will define a 3-dimensional metric called opinion but which will contain a 4th redundant
parameter in order to allow a simple and compact definition of the consensus
operator. It is assumed that all beliefs are held by individuals and the notation
will therefore include belief ownership. Let for example agent A express his or
her beliefs about the truth of set x in some frame of discernment. We will denote
A's belief, disbelief, uncertainty and relative atomicity functions as b^A_x, d^A_x, u^A_x and a^A_x respectively, where the superscript indicates belief ownership and the subscript indicates the belief target.
Definition 9 (Opinion Metric) Let Θ be a binary frame of discernment containing sets x and x̄, and let m_Θ be the BMA on Θ held by A where b^A_x, d^A_x and u^A_x represent A's belief, disbelief and uncertainty functions on x in 2^Θ respectively, and let a^A_x represent the relative atomicity of x in Θ. Then A's opinion about x, denoted by ω^A_x, is the tuple:
ω^A_x = (b^A_x, d^A_x, u^A_x, a^A_x)
The three coordinates (b, d, u) are dependent through Eq.(1) so that one is redundant. As such they represent nothing more than the traditional Bel (Belief) and Pl (Plausibility) pair of Shaferian belief theory, where Bel = b and Pl = b + u. However, using (Bel, Pl) instead of (b, d, u) would have produced unnecessary complexity in the definition of the consensus operator below. Eq.(1) defines a triangle that can be used to graphically illustrate opinions as shown in Fig.1.
Fig. 1. The opinion triangle with ω_x as example: the corners represent belief, disbelief and uncertainty; the probability axis joins the belief and disbelief corners and carries the points a_x and E(ω_x); the director and projector lines determine E(ω_x).
As an example, the position of the opinion ω_x is shown as a point in the triangle. The horizontal base line between the belief and disbelief corners is called the probability axis. As shown in the figure, the probability expectation value E(ω_x) = 0.7 and the relative atomicity a_x can be graphically represented as points on the probability axis. The line joining the top corner of the triangle and the relative atomicity point is called the director. The projector is parallel to the director and passes through the opinion point ω_x. Its intersection with the probability axis defines the probability expectation value, which otherwise can be computed by the formula of Definition 6. Opinions situated on the probability axis are called dogmatic opinions, representing traditional probabilities without uncertainty. The distance between an opinion point and the probability axis can be interpreted as the degree of uncertainty. Opinions situated in the left or right corner, i.e. with either b_x = 1 or d_x = 1, are called absolute opinions, corresponding to TRUE or FALSE states in binary logic.
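The geometry of Fig. 1 can be sketched in a few lines of Python: an opinion (b, d, u) is a point in the triangle with the disbelief corner at (0, 0), the belief corner at (1, 0) and the uncertainty corner at (1/2, √3/2), and its probability expectation is the projection E = b + a·u onto the probability axis. The example opinion values below are assumptions for illustration, not the ones drawn in the original figure.

```python
import math

def triangle_point(b, d, u):
    """Barycentric placement of an opinion in the triangle of Fig. 1."""
    assert abs(b + d + u - 1.0) < 1e-9            # Eq.(1)
    disbelief_corner = (0.0, 0.0)
    belief_corner    = (1.0, 0.0)
    uncertainty_top  = (0.5, math.sqrt(3) / 2)
    x = d * disbelief_corner[0] + b * belief_corner[0] + u * uncertainty_top[0]
    y = d * disbelief_corner[1] + b * belief_corner[1] + u * uncertainty_top[1]
    return (x, y)

def expectation(b, d, u, a):
    """Projection onto the probability axis: E = b + a*u."""
    return b + a * u

b, d, u, a = 0.4, 0.1, 0.5, 0.6                   # assumed example opinion
print(triangle_point(b, d, u))                    # a point inside the triangle
print(expectation(b, d, u, a))                    # 0.7
```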
4 The Consensus Operator
The consensus opinion of two possibly conflicting argument opinions is an opinion
that reflects both argument opinions in a fair and equal way, i.e. when two observers
have beliefs about the truth of x resulting from distinct pieces of evidence about x,
the consensus operator produces a consensus belief that combines the two separate
beliefs into one. If for example a process can produce two outcomes x and x, and
A and B have observed the process over two different time intervals so that they
have formed two independent opinions about the likelihood of x to occur, then the
consensus opinion is the belief about x to occur which a single agent would have
had after having observed the process during both periods.
Definition 10 (Consensus Operator) Let ω^A_x = (b^A_x, d^A_x, u^A_x, a^A_x) and ω^B_x = (b^B_x, d^B_x, u^B_x, a^B_x) be opinions respectively held by agents A and B about the same state x, and let κ = u^A_x + u^B_x − u^A_x u^B_x. When u^A_x, u^B_x → 0, the relative dogmatism between ω^A_x and ω^B_x is defined by γ so that γ = u^B_x / u^A_x. Let ω^{A,B}_x = (b^{A,B}_x, d^{A,B}_x, u^{A,B}_x, a^{A,B}_x) be the opinion such that:
for κ ≠ 0:
1: b^{A,B}_x = (b^A_x u^B_x + b^B_x u^A_x) / κ
2: d^{A,B}_x = (d^A_x u^B_x + d^B_x u^A_x) / κ
3: u^{A,B}_x = (u^A_x u^B_x) / κ
4: a^{A,B}_x = (a^A_x u^B_x + a^B_x u^A_x − (a^A_x + a^B_x) u^A_x u^B_x) / (u^A_x + u^B_x − 2 u^A_x u^B_x)
for κ = 0:
1: b^{A,B}_x = (γ b^A_x + b^B_x) / (γ + 1)
2: d^{A,B}_x = (γ d^A_x + d^B_x) / (γ + 1)
3: u^{A,B}_x = 0
4: a^{A,B}_x = (γ a^A_x + a^B_x) / (γ + 1)
Then ω^{A,B}_x is called the consensus opinion between ω^A_x and ω^B_x, representing an imaginary agent [A, B]'s opinion about x, as if that agent represented both A and B. By using the symbol '⊕' to designate this operator, we define ω^{A,B}_x ≡ ω^A_x ⊕ ω^B_x.
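A sketch of the consensus operator with the formulas as reconstructed in Definition 10 above, representing opinions as (b, d, u, a) tuples. For the dogmatic case (κ = 0) the relative dogmatism γ = u^B/u^A has to be supplied by the caller, since it is defined as a limit.

```python
def consensus(op_a, op_b, gamma=1.0):
    """Combine two opinions about the same state x (Definition 10).

    op_a, op_b : tuples (b, d, u, a)
    gamma      : relative dogmatism u_B/u_A, used only when both opinions are dogmatic.
    """
    bA, dA, uA, aA = op_a
    bB, dB, uB, aB = op_b
    kappa = uA + uB - uA * uB
    if kappa != 0:
        b = (bA * uB + bB * uA) / kappa
        d = (dA * uB + dB * uA) / kappa
        u = (uA * uB) / kappa
        if uA == uB == 1:          # two vacuous opinions: atomicities are required to agree
            a = (aA + aB) / 2
        else:
            a = (aA * uB + aB * uA - (aA + aB) * uA * uB) / (uA + uB - 2 * uA * uB)
    else:                          # both opinions dogmatic: limit case via gamma
        b = (gamma * bA + bB) / (gamma + 1)
        d = (gamma * dA + dB) / (gamma + 1)
        u = 0.0
        a = (gamma * aA + aB) / (gamma + 1)
    return (b, d, u, a)

# Two mildly conflicting, non-dogmatic opinions:
print(consensus((0.8, 0.1, 0.1, 0.5), (0.1, 0.8, 0.1, 0.5)))
```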
It is easy to prove that the consensus operator is both commutative and associative, which means that the order in which opinions are combined has no importance. It can also be shown that the consensus opinion satisfies Eq.(1), i.e. that b^{A,B}_x + d^{A,B}_x + u^{A,B}_x = 1. When applying the operator, independence must be assumed, which for example translates into not allowing an agent's opinion to be counted more than once, and also means that the argument opinions must be based on distinct pieces of evidence.
Briefly said, the consensus operator is obtained by mapping beta probability density functions to the opinion space. It can be shown that posterior probabilities of binary events can be represented by the beta pdf (see e.g. [10] p.298). The beta family of density functions is a continuous family of functions indexed by the two parameters α and β. The parameters of beta distributions, which for example can represent the number of observations of events, can be combined by simple addition, and thus a way of combining evidence emerges. We refer the reader to [6] for a detailed description of how the consensus operator can be derived from the combination of beta distributions.
The consensus of two totally uncertain opinions results in a new totally uncertain opinion, although the relative atomicity is not well defined in that case. Two observers would normally agree on the relative atomicity, and in case of two totally uncertain opinions we require that they do so, so that the consensus relative atomicity for example can be defined as a^{A,B}_x = a^A_x = a^B_x.
In [6] it is incorrectly stated that the consensus operator can not be applied to two dogmatic opinions, i.e. when u^A_x = u^B_x = 0. The definition above rectifies this so that dogmatic opinions can be combined. This result is obtained by computing the limits of (b^{A,B}_x, d^{A,B}_x, u^{A,B}_x, a^{A,B}_x) as u^A_x, u^B_x → 0, using the relative dogmatism between A and B defined by γ = u^B_x / u^A_x. This result makes the consensus operator more general than Dempster's rule because the latter excludes the combination of totally conflicting beliefs.
In order to understand the meaning of the relative dogmatism γ, it is useful to consider a process with possible outcomes {x, x̄} that produces γ times as many x as x̄. For example when throwing a fair dice and some mechanism makes sure that A only observes the outcome 'six' and B only observes the outcomes 'one', 'two', 'three', 'four' and 'five', then A will think the dice only produces 'six' and B will think that the dice never produces 'six'. After infinitely many observations A and B will have the conflicting dogmatic opinions ω^A_6 = (1, 0, 0, 1/6) and ω^B_6 = (0, 1, 0, 1/6) respectively. On the average B observes 5 times more events than A, so that B remains 5 times more dogmatic than A as u^A_6, u^B_6 → 0, meaning that the relative dogmatism between A and B is γ = 1/5. By combining their opinions according to the case where κ = 0 and inserting the value of γ, the combined opinion about obtaining a 'six' with the dice can be computed as ω^{A,B}_6 = (1/6, 5/6, 0, 1/6), which is exactly what one would expect.
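A quick numerical check of the dice example, applying the dogmatic-case formulas of Definition 10 directly with γ = 1/5 (notation as assumed above).

```python
# Dogmatic opinions about obtaining a 'six': A has only ever seen sixes, B never has.
bA, dA, aA = 1.0, 0.0, 1.0 / 6
bB, dB, aB = 0.0, 1.0, 1.0 / 6
gamma = 1.0 / 5                      # B is five times more dogmatic than A

b = (gamma * bA + bB) / (gamma + 1)  # 1/6
d = (gamma * dA + dB) / (gamma + 1)  # 5/6
a = (gamma * aA + aB) / (gamma + 1)  # 1/6
print(b, d, a)                       # 0.1666..., 0.8333..., 0.1666...
```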
In the last example, the relative dogmatism was finite and non-zero, but it is also possible to imagine extreme relative dogmatisms, e.g. γ = ε or γ = 1/ε where ε is an infinitesimal (i.e. close to zero). This is related to the concept of epsilon belief functions which has been applied to default reasoning by Benferhat et al. in [11]. Epsilon belief functions are opinions with b, d, u ∈ {0, ε, 1 − ε}, i.e. opinions situated close to a corner of the triangle in Fig.1. Without going into details it can be shown that some properties of extreme relative dogmatisms seem suitable for default reasoning. For example, when the relative dogmatism between A and B is infinite (γ = 1/ε) the consensus opinion is equal to A's argument opinion (ω^{A,B}_x = ω^A_x), and when the relative dogmatism is infinitesimal (γ = ε) the consensus opinion is equal to B's argument opinion (ω^{A,B}_x = ω^B_x). However, with three agents A, B, and C whose relative dogmatisms involve two infinitesimals ε_1 and ε_2, the consensus opinion ω^{A,B,C}_x is non-conclusive as long as the relationship between ε_1 and ε_2 is unknown.
5 Comparing the Consensus Operator with Dempster's Rule
This section describes three examples that compare Dempster's rule, the non-normalised Dempster's rule and the consensus operator. The definition of Dempster's rule and the non-normalised rule is given below. In order to distinguish between the consensus operator and Dempster's rule, the latter will be denoted by ⊕′.
Definition 11 Let Θ be a frame of discernment, and let m^A_Θ and m^B_Θ be BMAs on Θ. Then m^{A⊕′B}_Θ is a function m^{A⊕′B}_Θ : 2^Θ → [0,1] such that:
1: m^{A⊕′B}_Θ(∅) = 0; and
2: m^{A⊕′B}_Θ(z) = Σ_{x ∩ y = z} m^A_Θ(x) m^B_Θ(y) / (1 − K) for z ≠ ∅
where K = Σ_{x ∩ y = ∅} m^A_Θ(x) m^B_Θ(y) and K ≠ 1 in Dempster's rule, and where m^{A⊕′B}_Θ(z) = Σ_{x ∩ y = z} m^A_Θ(x) m^B_Θ(y) for every z ∈ 2^Θ (so that m^{A⊕′B}_Θ(∅) = K) in the non-normalised version.
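A sketch of Definition 11 over BMAs represented as dictionaries of frozensets, covering both the normalised and the non-normalised variant, and applied to an assumed encoding of Zadeh's example (the 0.99/0.01 split reconstructed in Table 1 below).

```python
def combine(m1, m2, normalise=True):
    """Dempster's rule (normalised) or the non-normalised rule (Definition 11)."""
    out = {}
    for x, v1 in m1.items():
        for y, v2 in m2.items():
            z = x & y
            out[z] = out.get(z, 0.0) + v1 * v2
    conflict = out.pop(frozenset(), 0.0)          # K: mass landing on the empty set
    if normalise:
        if conflict == 1.0:
            raise ValueError("total conflict: Dempster's rule undefined")
        out = {z: v / (1.0 - conflict) for z, v in out.items()}
    else:
        out[frozenset()] = conflict               # open-world reading: keep m(emptyset) = K
    return out

# Zadeh's example with the witness BMAs as reconstructed in Table 1:
w1 = {frozenset({"Peter"}): 0.99, frozenset({"Paul"}): 0.01}
w2 = {frozenset({"Mary"}): 0.99, frozenset({"Paul"}): 0.01}
print(combine(w1, w2))                  # {Paul}: 1.0
print(combine(w1, w2, normalise=False)) # {Paul}: 0.0001, emptyset: 0.9999
```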
5.1 Example 1: Dogmatic Conflicting Beliefs
We will start with the well known example that Zadeh [2] used for the purpose of criticising Dempster's rule. Smets [4] used the same example in defence of the non-normalised version of Dempster's rule.
Suppose that we have a murder case with three suspects; Peter, Paul and Mary, and two highly conflicting testimonies. Table 1 gives the witnesses' belief masses in Zadeh's example and the resulting belief masses after applying Dempster's rule, the non-normalised rule and the consensus operator.

          W1      W2      Dempster's   Non-normalised     Consensus
                          rule         Dempster's rule    operator
  Peter   0.99    0.00    0.00         0.00               0.495
  Paul    0.01    0.01    1.00         0.0001             0.010
  Mary    0.00    0.99    0.00         0.00               0.495

Table 1
Comparison of operators in Zadeh's example
Because the frame of discernment in Zadeh's example is ternary, a focused binary frame of discernment must be derived for each suspect in order to apply the consensus operator. The focused opinions are, for example, ω^{W1}_Peter = (0.99, 0.01, 0.00, 1/3) and ω^{W2}_Peter = (0.00, 1.00, 0.00, 1/3).
The above opinions are all dogmatic, and the case where u^{W1}_x = u^{W2}_x = 0 must be invoked. Because of the symmetry between W1 and W2 we determine the relative dogmatism between W1 and W2 to be γ = 1. The consensus opinion values and their corresponding probability expectation values can then be computed as:
ω^{W1,W2}_Peter = (0.495, 0.505, 0.00, 1/3),  E(Peter) = 0.495
ω^{W1,W2}_Paul = (0.010, 0.990, 0.00, 1/3),  E(Paul) = 0.010
ω^{W1,W2}_Mary = (0.495, 0.505, 0.00, 1/3),  E(Mary) = 0.495
The column for the consensus operator in Table 1 is obtained by taking the 'belief' coordinate from the consensus opinions above. Dempster's rule selects the suspect least suspected by both witnesses as the guilty party. The non-normalised version acquits all the suspects and indicates that the guilty party has to be someone else. This is explained by Smets [4] with the so-called open world interpretation of the frame of discernment. In [5] Smets also proposed to interpret m(∅) (= 0.9999 in this case) as a measure of the degree of conflict between the argument beliefs.
The consensus operator respects conflicting beliefs by giving the average of the beliefs in Peter and Mary, whereas the non-conflicting belief in Paul is kept unaltered. This result is consistent with classical estimation theory (see e.g. comments to Smets [4] p.278 by M.R.B. Clarke), which is based on taking the average of probability estimates when all estimates have equal weight.
5.2 Example 2: Conflicting Beliefs with Uncertainty
In the following example uncertainty is introduced by allocating some belief mass to the whole set {Peter, Paul, Mary}. Table 2 gives the modified BMAs and the results of applying the rules.

                       W1      W2      Dempster's   Non-normalised     Consensus
                                       rule         Dempster's rule    operator
  Peter                0.98    0.00    0.490        0.0098             0.492
  Paul                 0.01    0.01    0.015        0.0003             0.010
  Mary                 0.00    0.98    0.490        0.0098             0.492
  Peter, Paul, Mary    0.01    0.01    0.005        0.0001

Table 2
Comparison of operators after introducing uncertainty in Zadeh's example
The frame of discernment in this modified example is again ternary, and a focused binary frame of discernment must be derived in order to apply the consensus operator. For example, the focused opinions on Peter are ω^{W1}_Peter = (0.98, 0.01, 0.01, 1/3) and ω^{W2}_Peter = (0.00, 0.99, 0.01, 1/3).
The consensus opinion values and their corresponding probability expectation values are:
ω^{W1,W2}_Peter = (0.492, 0.503, 0.005, 1/3),  E(Peter) = 0.494
ω^{W1,W2}_Paul = (0.010, 0.985, 0.005, 1/3),  E(Paul) = 0.012
ω^{W1,W2}_Mary = (0.492, 0.503, 0.005, 1/3),  E(Mary) = 0.494
The column for the consensus operator in Table 2 is obtained by taking the 'be-
lief' coordinate from the consensus opinions above. When uncertainty is intro-
duced, Dempster's rule corresponds well with intuitive human judgement. The
non-normalised Dempster's rule however still indicates that none of the suspects
are guilty and that new suspects must be found, or alternatively that the degree of
conflict is still high, despite introducing uncertainty.
The consensus operator corresponds well with human judgement and gives almost
the same result as Dempster's rule, but not exactly. Note that the values resulting
from the consensus operator have been rounded off after the third decimal.
The belief masses resulting from Dempster's rule in Table 2 add up to 1. The 'belief' parameters of the consensus opinions resulting from the consensus operator do not add up to 1 because they are actually taken from 3 different focused frames of discernment, but the following holds:
E(Peter) + E(Paul) + E(Mary) = 1.
5.3 Example 3: Harmonious Beliefs
The previous example seemed to indicate that Dempster's rule and the consensus operator give very similar results in the presence of uncertainty. However, this is not always the case, as illustrated by the following example. Let two agents W1 and W2 have equal beliefs about the truth of x. The agents' BMAs and the results of applying the rules are given in Table 3.

Table 3
Comparison of operators in the case of equal beliefs (columns: W1, W2, Dempster's rule, non-normalised Dempster's rule, consensus operator)

The consensus opinion about x and the corresponding probability expectation value are computed in the same way as in the previous examples.
It is difficult to give an intuitive judgement of these results. It can be observed that
Dempster's rule and the non-normalised version produce equal results because the
witnesses' BMAs are non-conflicting. The two variants of Dempster's rule amplify
the combined belief twice as much as the consensus operator and this difference
needs an explanation. The consensus operator produces results that are consistent
with statistical analysis (see [6]) and in the absence of other criteria for intuitive
or formal judgement, this constitutes a strong argument in favour of the consensus
operator.
6 Discussion and Conclusion
In addition to the three belief combination rules analysed here, numerous others have been presented in the literature, e.g. the rule proposed by Yager [12] that transfers conflicting belief mass m^A_Θ(x) m^B_Θ(y) to Θ whenever x ∩ y = ∅, and the rule proposed by Dubois & Prade [13] that transfers conflicting belief mass m^A_Θ(x) m^B_Θ(y) to x ∪ y whenever x ∩ y = ∅. These rules are commutative, but unfortunately they are not associative, which seems counterintuitive. Assuming that
beliefs from different sources should be treated in the same way, why should the
result depend on the order in which they are combined? After analysing the rules
of Dempster, Smets, Yager, Dubois & Prade as well as simple statistical average,
Murphy [14] rejects the rules of Yager and Dubois & Prade for their lack of as-
sociativity, and concludes that Dempster's rule performs best for its convergence
properties, accompanied by statistical average to warn of possible errors when the
degree of conflict is high. Our consensus operator seems to combine both the desirable
convergence properties of Dempster's rule when the degree of conflict is low,
and the natural average of beliefs when the degree of conflict is high. As mentioned
in Lefèvre et al. [15], Dempster's rule and its non-normalised version require that
all belief sources are reliable, whereas Yager's and Dubois & Prade's rules require
that at least one of the belief sources is reliable for the result to be meaningful. The
consensus operator does not make any assumption about reliability of the belief
sources, but does of course not escape the 'garbage in, garbage out' principle.
An argument that could be used against our consensus operator, is that it does
not give any indication of possible belief conflict. Indeed, by looking at the result
only, it does not tell whether the original beliefs were in harmony or in conflict,
and it would have been nice if it did. A possible way to incorporate the degree of
conflict is to add an extra 'conflict' parameter. This could for example be the belief
mass assigned to ∅ in Smets' rule, which in the opinion notation can be defined as c^{A,B}_x = b^A_x d^B_x + d^A_x b^B_x, where c^{A,B}_x ∈ [0, 1]. The consensus opinion with conflict parameter would then be expressed as ω^{A,B}_x = (b^{A,B}_x, d^{A,B}_x, u^{A,B}_x, a^{A,B}_x, c^{A,B}_x). The conflict parameter would only be relevant for combined beliefs, and not for original beliefs. An undefined default value could for example indicate original belief, because a default value of 0 could be misunderstood as indicating that a belief comes from combined harmonious beliefs, even though it is an original belief.
Opinions can be derived by coarsening any frame of discernment and BMA through
the focusing process, where focusing on different states produces different opin-
ions. In this context it is in general not meaningful to relate belief, disbelief and
uncertainty functions from opinions that focus on different states even though the
opinions are derived from the same frame of discernment and belief mass assign-
ment. The only way to relate such opinions is through the probability expectation
value E(! x ) (which can also be written as E(x)), and this leads to interesting re-
sults. The proof of the following theorem can be found in [6].
Theorem 1 (Kolmogorov Axioms) Given a frame of discernment Θ with a BMA m_Θ, the probability expectation function E with domain 2^Θ satisfies:
1: E(x) ≥ 0 for all x ∈ 2^Θ
2: E(Θ) = 1
3: If x_1, x_2, ... ∈ 2^Θ are pairwise disjoint, then E(∪_j x_j) = Σ_j E(x_j).
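A short numerical check of Theorem 1 on the illustrative BMA assumed in the earlier listings: the pignistic expectation is non-negative, gives E(Θ) = 1, and is additive over disjoint sets.

```python
THETA = frozenset({"a", "b", "c"})
m = {frozenset({"a"}): 0.5, frozenset({"b", "c"}): 0.3, THETA: 0.2}

def E(x):
    """Pignistic probability expectation of Definition 6."""
    return sum(v * len(x & y) / len(y) for y, v in m.items())

assert abs(E(THETA) - 1.0) < 1e-9
x1, x2 = frozenset({"a"}), frozenset({"b"})
assert abs(E(x1 | x2) - (E(x1) + E(x2))) < 1e-9
print(E(x1), E(x2), E(x1 | x2))
```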
This shows that probability theory can be built on top of belief theory through the
probability expectation value. As such belief functions should not be interpreted
as probabilities, instead there is a surjective (onto) mapping from the belief space
to the probability space. Belief and possibility functions have been interpreted as
upper and lower probability bounds respectively (see e.g. Halpern & Fagin [16] and
de Cooman & Ayles [17]). Belief functions can be useful for estimating probability
values but not to set bounds, because the probability of a real event can never be
determined with absolute certainty, and neither can upper and lower bounds to it.
Our view is that probability always is a subjective notion, inasmuch as it is a 1-
dimensional belief measure felt by a given person facing a given event. Objective,
physical or real probability is a meaningless notion. This view is shared by e.g. de
Finetti [18]. In the same way, an opinion as defined here, is a 3-dimensional belief
measure felt by a given person facing a given event.
It has also been suggested to interpret belief functions as evidence (see e.g. Fagin
and Halpern [16]). Belief can result from evidence in the form of observing an event
or knowing internal properties of a system, or from more subjective and intangible
experience. Statistical evidence can for example be translated into belief functions,
as described in [6], and other types of evidence can be intuitively translated into
belief functions, but belief and evidence are not the same. We prefer to leave belief
functions as a distinct concept in its own right, and in general not try to interpret
them as anything else.
The opinion metric described here provides a simple and compact notation for beliefs
in the Shaferian belief model. We have presented an alternative to Dempster's
rule which is consistent with probabilistic and statistical analysis, and which seems
more suitable for combining highly conflicting beliefs as well as for combining
harmonious beliefs, than Dempster's rule and its non-normalised version. The fact
that a binary focused frame of discernment must be derived in order to apply the
consensus operator puts no restriction on its applicability. The resulting beliefs for
each event can still be compared and can form the basis for decision making.
Acknowledgement. The work reported in this paper has been funded in part by the Co-operative Research Centre for Enterprise Distributed Systems Technology (DSTC) through the Australian Federal Government's CRC Programme (Department of Industry, Science & Resources).

References
A Mathematical Theory of Evidence.
Review of Shafer's A Mathematical Theory of Evidence.
An expert system framework for non-monotonic reasoning about probabilistic assumptions
Belief functions.
The transferable belief model for quantified belief representation.
A Logic for Uncertain Probabilities.
Legal Reasoning with Subjective Logic.
An Algebra for Assessing Trust in Certification Chains.
The transferable belief model.
Statistical Inference.
Belief Functions and Default Reasoning.
On the Dempster-Shafer framework and new combination rules
Representation and combination of uncertainty with belief functions and possibility measures.
Combining belief functions when evidence conflicts.
A generic framework for resolving the conflict in the combination of belief structures.
Two views of belief: Belief as generalised probability and belief as evidence.
Supremum preserving upper probabilities.
The value of studying subjective evaluations of probability.
Coherence in Finite Argument Systems

Abstract: Argument systems provide a rich abstraction within which diverse concepts of reasoning, acceptability and defeasibility of arguments, etc., may be studied using a unified framework. Two important concepts of the acceptability of an argument p in such systems are credulous acceptance, capturing the notion that p can be 'believed', and sceptical acceptance, capturing the idea that if anything is believed, then p must be. One important aspect affecting the computational complexity of these problems concerns whether the admissibility of an argument is defined with respect to 'preferred' or 'stable' semantics, one benefit of so-called 'coherent' argument systems being that the preferred extensions coincide with the stable extensions. In this note we consider complexity-theoretic issues regarding deciding if finitely presented argument systems modelled as directed graphs are coherent. Our main result shows that the related decision problem is Π^(p)_2-complete and is obtained solely via the graph-theoretic representation of an argument system, thus independent of the specific logic underpinning the reasoning theory.

1 Introduction
Since they were introduced by Dung [8], Argument Systems have provided a fruitful
mechanism for studying reasoning in defeasible contexts. They have proved
useful both to theorists who can use them as an abstract framework for the study
and comparison of non-monotonic logics, e.g. [2,5,6], and for those who wish to
explore more concrete contexts where defeasibility is central. In the study of reasoning
in law, for example, they have been used to examine the resolution of conflicting
norms, e.g. [12], especially where this is studied through the mechanism of
a dispute between two parties, e.g. [11]. The basic definition below is derived from
that given in [8].
Definition 1 An argument system is a pair H = ⟨X, A⟩, in which X is a set of arguments and A ⊆ X × X is the attack relationship for H. Unless otherwise stated, X is assumed to be finite, and A comprises a set of ordered pairs of distinct arguments. A pair ⟨x, y⟩ ∈ A is referred to as 'x attacks (or is an attacker of) y' or 'y is attacked by x'.
For R, S subsets of arguments in the system H = ⟨X, A⟩, we say that
a) s ∈ S is attacked by R if there is some r ∈ R such that ⟨r, s⟩ ∈ A.
b) x ∈ X is acceptable with respect to S if for every y ∈ X that attacks x there is some z ∈ S that attacks y.
c) S is conflict-free if no argument in S is attacked by any other argument in S.
d) A conflict-free set S is admissible if every argument in S is acceptable with respect to S.
e) S is a preferred extension if it is a maximal (with respect to ⊆) admissible set.
f) S is a stable extension if S is conflict-free and every argument y ∉ S is attacked by S.
g) H is coherent if every preferred extension in H is also a stable extension.
An argument x is credulously accepted if there is some preferred extension containing it; x is sceptically accepted if it is a member of every preferred extension.
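A small sketch of Definition 1 in Python: an argument system as a set of arguments plus a set of attack pairs, with the basic predicates (conflict-freeness, acceptability, admissibility). The three-argument example system is an assumption for illustration only.

```python
def attacks(A, source_set, target):
    """True if some argument in source_set attacks target."""
    return any((s, target) in A for s in source_set)

def conflict_free(S, A):
    """No argument in S is attacked by any other argument in S (Defn. 1c)."""
    return not any((x, y) in A for x in S for y in S)

def acceptable(x, S, X, A):
    """Every attacker of x is counter-attacked by S (Defn. 1b)."""
    return all(attacks(A, S, y) for y in X if (y, x) in A)

def admissible(S, X, A):
    """Conflict-free and every member is acceptable w.r.t. S (Defn. 1d)."""
    return conflict_free(S, A) and all(acceptable(x, S, X, A) for x in S)

# Assumed toy system: a attacks b, b attacks a, b attacks c.
X = {"a", "b", "c"}
A = {("a", "b"), ("b", "a"), ("b", "c")}
print(admissible({"a"}, X, A))        # True: a defends itself against b
print(admissible({"c"}, X, A))        # False: nothing defends c against b
print(admissible({"a", "c"}, X, A))   # True: a defends c against b
```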
The graph-theoretic representation employed by finite argument systems naturally suggests a unifying formalism in which to consider various decision problems. To place our main results in a more general context we start from the basis of the decision problems described by Table 1, in which H = ⟨X, A⟩ is an argument system as in Defn. 1, x an argument in X, and S a subset of arguments in X.
Polynomial-time decision algorithms for problems (1) and (2) are fairly obvious.
The results regarding problems (3-7) are discussed below. In this article we are
primarily concerned with the result stated in the final line of Table 1: our proof of
this yields (8) as an easy Corollary.
Before proceeding with this, it is useful to discuss important related work of Dimopoulos
and Torres [7], in which various semantic properties of the Logic Programming
paradigm are interpreted with respect to a (directed) graph translation
of reduced negative logic programs: graph vertices are associated with rules and
the concept of 'attack' modelled by the presence of edges hr; si whenever there is
a non-empty intersection between the set of literals defining the head of r and the
negated set of literals in the body of s, i.e. if z 2 body(s) then :z is in this negated
set. Although [7] does not employ the terminology - in terms of credulous accep-
tance, admissible sets, etc - from [8] used in the present article it is clear that similar
forms are being considered: the structures referred to as 'semi-kernel', 'maximal
     Problem            Decision Question                     Complexity
  3  PREF-EXT(H, S)     Is S a preferred extension?           CO-NP-complete
  4                     Does H have a stable extension?       NP-complete
  5  CA(H, x)           Is x in some preferred extension?     NP-complete
  6  IN-STAB(H, x)      Is x in some stable extension?        NP-complete
  7  ALL-STAB(H, x)     Is x in every stable extension?       CO-NP-complete
  8  SA(H, x)           Is x in every preferred extension?    Π^(p)_2-complete
  9  COHERENT(H)        Is H coherent?                        Π^(p)_2-complete

Table 1
Decision Problems in Finite Argument Systems and their Complexity
semi-kernel' and `kernel' in [7] corresponding to 'admissible set', 'preferred exten-
sion' and `stable extension' respectively. The complexity results for problems (3-6)
if not immediate from [7, Thm 5.1, Lemma 5.2, Prop. 5.3] are certainly implied by
these. In this context, it is worth drawing attention to some significant points regarding
[7, Thm. 5.1] which, translated into the terminology of the present article
states:
The problem of deciding whether an argument system H(X ; A) has a non-empty
preferred extension is NP-complete.
First, this implies the complexity classification for PREF-EXT stated, even when the
subset S forming part of an instance is the empty set.
A second point, also relevant to our proof of (9), concerns the transformation used: [7] present a translation of propositional formulae in 3-CNF (this easily generalises for arbitrary CNF formulae) into a finite argument system H_φ. It is not difficult, however, given H(X, A), to define CNF-formulae Φ_H whose satisfiability properties are dependent on the presence of particular structures within H, e.g. stable extensions, admissible subsets containing specific arguments, etc. We thus have a mechanism for transforming a given H into an 'equivalent' system F, the point being that F may provide a 'better' basis for graph-theoretic analyses of structures within H.
Our final observation concerns problem (7): although the given complexity classification is neither explicitly stated in nor directly implied by the results of [7], that ALL-STAB is CO-NP-complete can be shown using some minor 're-wiring' of the argument graph G constructed from an instance of 3-SAT (this involves removing, for edges ⟨A, x⟩ or ⟨x, A⟩, all except the edge ⟨Aux, A⟩).
The concept of coherence was formulated by [8, Defn. 31(1), p. 332], to describe
those argument systems whose stable and preferred extensions coincide. One significant
benefit of coherence as a property has been established in recent work of
Vreeswijk and Prakken[13] with respect to proof mechanisms for establishing sceptical
acceptance: problem (8) of Table 1. In [13] a sound and complete reasoning
method for credulous acceptance - using a dialogue game approach - is presented.
This approach, as the authors observe, provides a sound and complete mechanism
for sceptical acceptance in precisely those argument systems that are coherent.
Thus a major advantage of coherent argument systems is that proofs of sceptical acceptance
are (potentially) rather more readily demonstrated in coherent systems via
devices such as those of [13]. The complexity of sceptical acceptance is considered
(in the context of membership in preferred extensions) for various non-monotonic
Logics by [5], where completeness results at the third-level of the polynomial-time
hierarchy are demonstrated. Although [5] argue that their complexity results 'dis-
credit sceptical reasoning as . "unnecessarily" complex', it might be argued that
within finite systems where coherence is 'promised' this view may be unduly pes-
simistic. Notwithstanding our main result that testing coherence is extremely hard,
there is an efficiently testable property that can be used to guarantee coherence.
Some further discussion of this is presented in Section 3.
In the next section we present the main technical contribution of this article, that COHERENT is Π^(p)_2-complete: the complexity class Π^(p)_2 comprising those problems decidable by CO-NP computations given (unit cost) access to an NP oracle. Alternatively, Π^(p)_2 can be viewed as the class of languages, L, membership in which is certified by a (deterministic) polynomial-time testable ternary relation R_L ⊆ W × X × Y such that, for some polynomial bound p(|w|) in the number of bits encoding w,
w ∈ L if and only if ∀ x (|x| ≤ p(|w|)) ∃ y (|y| ≤ p(|w|)) such that ⟨w, x, y⟩ ∈ R_L.
Our result in Theorem 2 provides some further indications that decision questions
concerning preferred extensions are (under the usual complexity-theoretic assump-
tions) likely to be harder than the analogous questions concerning stable exten-
sions: line (8) of Table 1 is an easy Corollary of our main theorem. Similar conclusions
had earlier been drawn in [5,6], where the complexity of reasoning problems
in a variety of non-monotonic Logics is considered under both preferred and stable
semantics. This earlier work establishes a close link between the complexity of the
reasoning problem and that of the derivability problem for the associated logic. One
feature of our proof is that the result is established purely through a graph-theoretic
interpretation of argument, similar in spirit to the approach adopted in [7]: thus,
the differing complexity levels may be interpreted in purely graph-theoretic terms,
independently of the Logic that the graph structure is defined from.
In Section 3 we discuss some consequences of our main theorem in particular with
respect to its implications for designing dialogue game style mechanisms for Sceptical
Reasoning. Conclusions are presented in Section 4.
2 Complexity of Deciding Coherence
Theorem 2 COHERENT is Π^(p)_2-complete.
In order to clarify the proof structure we establish it via a series of technical lem-
mata. The bulk of these are concerned with establishing Π^(p)_2-hardness, i.e. with reducing a known Π^(p)_2-complete problem to COHERENT.
We begin with the, comparatively easy, proof that COHERENT(H) is in Π^(p)_2.
Lemma 3 COHERENT is in Π^(p)_2.
Proof: Given an instance, H(X, A), of COHERENT, it suffices to observe that
COHERENT(H) ⇔ ∀ S ⊆ X : ¬PREF-EXT(H, S) ∨ STAB-EXT(H, S),
i.e. H is coherent if and only if for each subset S of X, either S is not a preferred extension or S is a stable extension. Since ¬PREF-EXT(H, S) is in NP, i.e. Σ^(p)_1, and STAB-EXT(H, S) is in P, we have COHERENT in Π^(p)_2 as required. □
2 as required. 2
The decision problem we use as the basis for our reduction is QSAT 2 . An instance of
QSAT 2 is a well-formed propositional formula, (X; Y), defined over disjoint sets
of propositional variables,
loss of generality we may assume using only the Boolean
operations ^, _, and :; and negation is only applied to variables in X [ Y . An
instance, (X; Y) of QSAT 2 is accepted if and only if 8 X 9 Y
no matter how the variables in X are instantiated ( X ) there is some instantiation
Y ) of Y such that h X ; Y i satisfies . That QSAT 2 is (p)
-complete was shown in
[14].
We start by presenting some technical definitions. The first of these describes a
standard presentation of propositional formulae as directed rooted trees that has
often been widely used in applications of Boolean formulae, see e.g. [9, Chapter 4]
Definition 4 Let (Z) be a well-formed propositional formula (wff) over the vari-
AND
AND
Fig. 1. T (z
ables using the operations f^; _; :g with negation applied only
to variables of . The tree representation of (denoted T ) is a rooted directed tree
with root vertex denoted (T ) and inductively defined by the following rules.
a) If single literal z or :z - then T consists of a single vertex
labelled w.
is formed from the k tree
representations hT i
i by directing edges from each (T i ) into a new root
vertex labelled ^.
c) If
is formed from the k tree
representations hT i
i by directing edges from each (T i ) into a new root
vertex labelled _.
In what follows we use the term node of T to refer to an arbitrary tree vertex, i.e.
a leaf or internal vertex.
In the tree representation of , each leaf vertex is labelled with some literal w,
(several leaves may be labelled with the same literal), and each internal vertex with
an operation in f^; _g. We shall subsequently refer to the internal vertices of T as
the gates of the tree. Without loss of generality we may assume that the successor
of any ^-gate (tree vertex labelled ^) is an _-gate (tree vertex labelled _) and vice-
versa. The size of (Z) is the number of gates in its tree representation T . For
formulae of size m we denote by hg the gates in T with g m always
taken as the root (T ) of the tree. Finally for any edge hh; gi in T we refer to the
node h as an input of the gate g. 2
Definition 5 For a formula, (Z), an instantiation of its variables is a mapping, :
associating a truth value or unassigned status () with each
variable z i . We use i to denote (z i ). An instantiation is total if every variable is
assigned a value in ftrue; falseg and partial otherwise. We define a partial ordering
We note that since any gate may be assumed to have at most n distinct literals among its
inputs, our measure of formula size as 'number of gates' is polynomially equivalent to the
more usual measure of size as 'number of literal occurrences', i.e. leaf nodes.
over instantiations
and - to Z by writing
< - if: for each i with
and there is at least one i, for which
Given (Z) any instantiation induces a mapping from the
nodes defining T onto values in ftrue; false; g. Assuming the natural generalisations
of ^ and _ to the domain htrue; false; i, 3 we define for h a node in T , its
value (h; ) under the instantiation of Z as
if h is a leaf node labelled z i or :z i and
is a leaf node labelled z i and i 6=
is a leaf node labelled :z i and i 6=
is an _-gate with inputs hh
is an ^-gate with inputs hh
where is clear from the context, we write (h) for (h; ).
With this concept of the value induced at a node of T via an instantiation , we
can define a partition of the literals and gates in T that is used extensively in our
later analysis.
The value partition Val() of T comprises 3 sets hTrue(); False(); Open()i.
T1) The subset True() consists of literals and gates, h, for which
T2) The subset False() consists of literals and gates, h, for which
T3) The subset Open() consists of literals and gates, h, for which
The following properties of this partition can be easily proved:
Fact 6
a)
b) If
< -, then True(
For example in Fig. 1 under the partial instantiation
with all other variables unassigned, we have:
g.
At the heart of our proof that QSAT 2 is polynomially reducible to COHERENT is
a translation from the tree representation T of a formula (X; Y) to an argument
system H (X ; A ). It will be useful to proceed by presenting a preliminary trans-
are true or at least one x j is false; _ k
are
false or at least one is true.
Fig. 2. The Argument System R_Φ obtained from the formula of Fig. 1 (literal arguments z_i, ¬z_i and gate arguments connected by the attack relation).
lation that, although not in the final form that will be used in the reduction, will
have a number of properties that will be important in deriving our result.
Definition 7 Let Φ(Z) be a propositional formula with tree representation T_Φ having size m. The Argument Representation of Φ is the argument system R_Φ(X_Φ, A_Φ) defined as follows. R_Φ contains the following arguments:
X1 The 2n literal arguments {z_1, ¬z_1, ..., z_n, ¬z_n}.
X2 For each gate g_k of T_Φ, an argument ¬g_k (if g_k is an ∨-gate) or an argument g_k (if g_k is an ∧-gate). If g_m, i.e. the root of T_Φ, happens to be an ∨-gate, then an additional argument g_m is included. We subsequently denote this set of arguments by G_Φ.
The attack relationship - A_Φ - over X_Φ contains:
A1 {⟨z_i, ¬z_i⟩, ⟨¬z_i, z_i⟩ : 1 ≤ i ≤ n}.
A2 ⟨¬g_m, g_m⟩ if g_m is an ∨-gate in T_Φ.
A3 If g_k is an ∧-gate with inputs {h_1, ..., h_r}, then for each input h_j: ⟨¬u, g_k⟩ if h_j is a literal u, and ⟨¬h_j, g_k⟩ if h_j is an (∨-)gate.
A4 If g_k is an ∨-gate with inputs {h_1, ..., h_r}, then for each input h_j: ⟨u, ¬g_k⟩ if h_j is a literal u, and ⟨h_j, ¬g_k⟩ if h_j is an (∧-)gate.
Fig. 2 shows the result of this translation when it is applied to the tree representation
of the formula in Fig. 1.
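A sketch of the construction as we read Definition 7 and the discussion that follows it: formulas are given as nested tuples ('and', ...)/('or', ...) over string literals such as 'z1' and '-z1', gates are numbered in a post-order walk so that the root receives the highest index, and each gate argument is attacked by the 'argument form' of the complement of each of its inputs (alternation of ∧ and ∨ gates is assumed). The encoding and helper names are assumptions, not the paper's notation.

```python
def negate(lit):
    return lit[1:] if lit.startswith("-") else "-" + lit

def argument_representation(phi):
    """Build (arguments, attacks) of R_phi from a formula tree (Definition 7)."""
    arguments, attacks, counter = set(), set(), [0]

    def walk(node):
        if isinstance(node, str):                      # literal leaf
            arguments.update({node, negate(node)})
            attacks.update({(node, negate(node)), (negate(node), node)})   # A1
            return ("lit", node)
        op, *inputs = node
        children = [walk(child) for child in inputs]
        counter[0] += 1
        gate = f"g{counter[0]}"
        name = gate if op == "and" else "-" + gate     # X2: g_k for and-gates, -g_k for or-gates
        arguments.add(name)
        for kind, child in children:
            if op == "and":
                # A3: g_k is attacked by the complement of each literal input
                # and by the (negated) argument of each or-gate input
                attacker = negate(child) if kind == "lit" else "-" + child
            else:
                # A4: -g_k is attacked by each literal input and by each and-gate input
                attacker = child
            attacks.add((attacker, name))
        return ("gate", gate)

    kind, root = walk(phi)
    if kind == "gate" and ("-" + root) in arguments:   # A2: root is an or-gate
        arguments.add(root)
        attacks.add(("-" + root, root))
    return arguments, attacks

args, atts = argument_representation(("or", ("and", "z1", "z2"), "-z3"))
print(sorted(args))
print(sorted(atts))
```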
The arguments defining R fall into one of two sets: 2n arguments corresponding
to the 2n distinct literals over Z; and m (or m+ 1) 'gate' arguments. The key idea is
the following: any instantiation of the propositional variables Z of , induces the
partition Val() of literals and gates in T . In the argument system R the attack
relationship for gate arguments, reflects the conditions under which the corresponding
argument is admissible (with respect to the subset of literal arguments marked
out by ). For example, suppose g 1 is an _-gate with literals z 1 , :z 2 , z 3 as its in-
puts. In the simulating argument system, g 1 is represented by an argument labelled
:g 1 which is attacked by the (arguments labelled with) literals z 1 , :z 2 , and z 3 : the
interpretation being that "the assertion 'g 1 is false' is attacked by instantiations in
which z 1 or :z 2 or z 3 are true". Similarly were g 1 an ^-gate it would appear in
R as an argument labelled g 1 which was attacked by literals :z 1 , z 2 , and :z 3 : the
interpretation now being that "the assertion 'g 1 is true' is attacked by instantiations
in which z 1 or :z 2 or z 3 are false". With this viewpoint, any instantiation will
induce a selection of the literal arguments and a selection of the gate arguments
(i.e. those for which no attacking argument has been included).
Suppose is an instantiation of Z. The key idea is to map the partition of the
tree representation T as Val() onto an analogous partition of the literal and gate
arguments in R . Given this partition comprises 3 sets, hIn(); Out(); Poss()i
defined by:
An argument p is in the subset In() of X if:
(p is the argument z i , or (p is the argument :z i ,
or is in False())
or is in True())
An argument p is in the subset Out() of X if:
(p is the argument z i , or (p is the argument :z i ,
or is in True())
or is in False())
An argument p is in the subset Poss() of X if:
With the formulation of the argument system R (X ; A ) from the formula (Z)
and the definition of the partition hIn(); Out(); Poss()i via the value partition
of T we are now ready to embark on the sequence of technical lemmata
which will culminate in the proof of Theorem 2.
Our proof strategy is as follows. We proceed by characterising the set of preferred
extensions of R showing - in Lemma 8 through Lemma 11 - that these consist
of exactly the subsets defined by In(
Z is a total instantiation of Z. In
Lemma 12 we deduce that these are all stable extensions and thus that R is itself
coherent. In the remaining lemmata, we consider the argument systems arising by
transforming instances (X; Y) of QSAT 2 . In these, however, we add to the basic
system defined by R (which will have 4n literal arguments and m (or
gate arguments) an additional set of 3 control arguments one of which attacks all
of the Y-literal arguments: we denote this augmented system by H As
will be seen in Lemma 15, it follows easily from Lemma 10 that for any h X ; Y i
satisfying (X; Y) the subset In( X ; Y ) is a stable extension of both R and H .
The crucial property provided by the additional control arguments in H is proved
in Lemma 16: if for X there is no Y for which h X ; Y i satisfies (X; Y) then
the subset In( X ) (defined from R ) is a preferred but not stable extension of H ,
denotes the set In( which every y i is unassigned.
The reason for introducing the control arguments in moving from R to H is that
not a preferred extension of R : although it is admissible, it could be
extended by adding, for example, Y-literal arguments. The design of H will be
such that unless the gate argument g m can be used in an admissible extension of
already maximal in H and not a stable extension since the
control arguments are not attacked. Finally, in Lemma 17, it is demonstrated that
the only preferred extensions of H are those arising as a result of Lemma 15 and
Lemma 16. Theorem 2 will follow easily from Lemma 17, since the argument g m
- corresponding to the root node (T ) of the instance (X; Y) - must necessarily
belong to any stable extension in H : hence H is coherent if and only if for each
instantiation X there is an instantiation Y such that h X ; Y i satisfies (X; Y), i.e.
for which in the system R and thence in the corresponding stable
extension of H .
We employ the following notational conventions: X , Y , (and
instantiations of X, Y , (and Z); for an argument p in X , g p (resp. h p ) denotes
the corresponding gate (resp. node) in T , hence if g p is an _-gate, then p is the
argument labelled :g denotes the set of all preferred (resp.
stable) extensions in the argument system M , where M is one of R or H .
Z In(
Z ) is conflict-free.
Proof: Let
Z be an instantiation of Z and consider the subset In(
Z ) of X in R .
Suppose that there are arguments p and q in In(
Z ) for which hp; qi 2 A . It cannot
be the case that h literal over z i , since exactly one
of fz is in True(
exactly one of the corresponding literal arguments
is in In(
Z ). Thus q must be a gate argument. Suppose g q is an _-gate: q 2 In(
only if g q 2 False(
Z ) and therefore h p , which (since hp; qi 2 A ) must be an input
of g q is also in False(
Z ). This leads to a contradiction: if h p is a gate then it is an
^-gate, so
is a literal u i , then h p 2 False(
would mean that :u i 2 True(
Z ) and hence u i 62 In(
Z ). The remaining possibility
is that g q is an ^-gate: q 2 In(
If h p is a gate it must be an input of g q and an _-gate: h p 2 True(
Finally if the input h p is a literal u i in T then in R the literal :u i
attacks q: u
Z ). We deduce that In(
must be
conflict-free. 2
Lemma 9 8
Z In(
Z ) is admissible.
Proof: From Lemma 8, In(
Z ) is conflict-free, so it suffices to show for all arguments
Z ) that attack some q 2 In(
Z ) there is an argument r 2 In(
that attacks p. Let p, q be such that p 62 In(
Z ) and hp; qi 2 A . If
q is a literal argument, u i say, then p must be the literal argument :u i and choosing
provides a counter-attacker to p. Suppose q is a gate argument. One of
the inputs to g q must be the node h p . If g q is an _-gate then g q 2 False(
Z ) and
Z ). If h p is a literal u i then the literal argument
attacks is an ^-gate then h p 2 False(
Z ) implies there is some input h r
to h p with h r 2 False(
Z ), so that r = :h r is in In(
r is an _-gate
or literal) and r attacks p. Similarly, if g q is an ^-gate then g q 2 True(
Z ) and
Z ). If h p is a literal u i then the attacking argument (on q in R ) is the
literal
Z ) provides a counter-attack on p. If h p
is an _-gate then h p 2 True(
Z ) indicates that some input h r of h p is in True(
so that r = h r is in In(
Z ) and r attacks p. No more cases remain thus In(
Z ) is
admissible. 2
Z In(
Proof: From Lemma 8, 9 and the fact that every argument in X is allocated
to either In(
Z ) or Out(
Z ) by
Z , cf. Fact 6(a), it suffices to show that for any
argument
Z ) there is some q 2 In(
Z ) such that p and q conflict. Certainly
this is the case for literal arguments, u 2 Out(
Z ) since the complementary literal
:u is in In(
Z ) is a gate argument. If g p is an _-gate then
Z ) and hence some input h q of g p must be in
Z ). The argument q corresponding to this input node will therefore be in
In(
Z ). If g p is an ^-gate then p 2 Out(
Z ) and some input
h q of g p must be in False(
Z ). The argument :h q will be in In(
Z ) and conflicts
with p. 2
Lemma
Proof: First observe that all S 2 PE R must contain exactly n literal arguments:
exactly one representative from fz for each i. Let us call such a subset of the
literal arguments a representative set and suppose that U is any representative set
with S U any preferred extension containing U. We will show that there is exactly
one possible choice for S U and that this is S
(U) is the instantiation
of Z by: z Consider the following
procedure that takes as input a representative set U and returns a subset S U 2 PE R
with U S U .
g.
We can note three properties of this procedure. Firstly, it always halts: once the
literal arguments in the representative set U and their complements have been removed
from T U (in Steps 2 and 4), the directed graph-structure remaining is acyclic
and thus has at least one argument that is attacked by no others. Thus each iteration
of the main loop removes at least one argument from T U which eventually becomes
empty. Secondly, the set S U is in PE R : the initial set (U) is admissible and the
arguments removed from T U at each iteration are those that have just been added
to S U (Step 2) as well as those attacked by such arguments (Step 4); in addition
the arguments added to S U at each stage are those that have had counter-attacks
to all potential attackers already placed in S U . Finally for any given U the subset
returned by this procedure is uniquely defined. In summary, every S 2 PE R is
defined through exactly one representative set, U S , and every representative set U
develops to a unique S U 2 PE R . Each representative set, U, however, has the form
In(
hence the unique preferred extension, S U ,
consistent with U is In(
Lemma 12 The argument system R (X ; A ) is coherent.
Proof: The procedure of Lemma 11 only excludes an argument, q, from the set S U
under construction if q is attacked by some argument p 2 S U . Thus, S U is always
a stable extension, and since Lemma 11 accounts for all S 2 PE R , we deduce that
R is coherent. 2
Although our preceding three results characterise R as coherent, this, in itself, does not allow R to be used directly as the transformation for instances (X, Y) of QSAT_2. The overall aim is to construct an argument system from (X, Y) which is coherent if and only if (X, Y) is a positive instance of QSAT_2. The problem with R is that, even though (X, Y) may be a positive instance, there could be instantiations ⟨α_X, α_Y⟩ which fail to satisfy (X, Y) but give rise to a stable extension In(α_X, α_Y) of R. In order to deal with this difficulty, we need to augment R (giving a system H) in such a way that the admissible set In(α_X) is a preferred (but not stable) extension (in H) only if no instantiation α_Y allows ⟨α_X, α_Y⟩ to satisfy (X, Y). Thus, in our augmented system, we will have exactly two mutually exclusive possibilities for each total instantiation α_X of X: either there is no α_Y for which ⟨α_X, α_Y⟩ is true, in which event the set In(α_X) will produce a non-stable preferred extension of H; or there is an appropriate α_Y, in which case In(α_X, α_Y) (of which In(α_X) is a proper subset, cf. Fact 6(b)) will yield a stable extension in H.
Fig. 3. An Augmented Argument Representation H
Definition 13 For (X, Y) an instance of QSAT_2, the Augmented Argument Representation of (X, Y) - denoted H(W, B) - has arguments W = X ∪ {C_1, C_2, C_3}, where X are the arguments arising in the Argument Representation of (X, Y) - R(X, A) - as given in Definition 7 and C_1, C_2, C_3 are 3 new arguments called the control arguments. The attack relationship B contains all of the attacks A in the system R together with new attacks: g_m attacks each of C_1, C_2 and C_3; the control arguments attack one another in the cycle ⟨C_1, C_2⟩, ⟨C_2, C_3⟩, ⟨C_3, C_1⟩; and C_1 attacks every Y-literal argument, i.e. ⟨C_1, y_i⟩ and ⟨C_1, ¬y_i⟩ for 1 ≤ i ≤ n.
Using the relabelling of variables in our example formula - Figs. 1, 2 - as ⟨x_1, …⟩, the Augmented Argument Representation for the system in Fig. 2 is shown in Fig. 3.
Lemma 14 If S ∈ PE_H then C_i ∉ S for any of {C_1, C_2, C_3}. If S ∈ SE_H then g_m ∈ S.
Proof: Suppose S ∈ PE_H. If g_m ∈ S then each of the control arguments is attacked by g_m and so cannot be in S. If g_m ∉ S then C_3 ∉ S, since the only counter-attack to C_2 is the argument C_1, which conflicts with C_3. By similar reasoning it follows that C_2 ∉ S and C_1 ∉ S. For the second part of the lemma, given S ∈ SE_H, since {C_1, C_2, C_3} ∩ S = ∅, there must be some attacker of these in S. The only choice for this attacker is g_m. □
Lemma 15 If ⟨α_X, α_Y⟩ satisfies (X, Y) then In(α_X, α_Y) ∈ SE_H.
Proof: From Lemmas 10 and 12, the subset In(α_X, α_Y) is in SE_R. Furthermore, since ⟨α_X, α_Y⟩ satisfies (X, Y), the gate argument g_m of R is in In(α_X, α_Y). For the augmented system, H, the arguments in In(α_X, α_Y) therefore attack all three control arguments, and the attacks on Y-literal arguments by the control argument C_1 are attacked in turn by the gate argument g_m. In addition, using the arguments of Lemma 10, no arguments in Out(α_X, α_Y) can be added to the set In(α_X, α_Y) without conflict. Thus In(α_X, α_Y) ∈ SE_H. □
Lemma 16 If α_X is such that no instantiation α_Y of Y leads to ⟨α_X, α_Y⟩ satisfying (X, Y), then In(α_X) is a preferred, but not stable, extension of H.
Proof: The subset In(α_X) of R can be shown to be admissible (in both R and H) by an argument similar to that of Lemma 9. 4 Suppose that for all α_Y, ⟨α_X, α_Y⟩ is false, and consider any subset S of W in H of which In(α_X) is a proper subset. We show that S ∉ PE_H. Assume the contrary holds. From Lemma 14 no control argument is in S. If g_m ∈ S then S must contain a representative set, V_Y say, of the Y-literal arguments matching some instantiation α_Y. From the argument used to prove Lemma 11, In(α_X, α_Y) is the only preferred extension in R consistent with the literal choices indicated by α_X and α_Y, and thus would be the only such possibility for H. Now we obtain a contradiction since g_m ∉ In(α_X, α_Y) - because ⟨α_X, α_Y⟩ is false - and so cannot be used in H to counter the attack by C_1 on the representative set V_Y. Thus we can assume that g_m ∉ S. From this it follows that no Y-literal argument is in S (as g_m is the only attacker of the control argument C_1, which attacks Y-literals). Now consider the gates in T topologically sorted, i.e. assigned a number κ(g) such that all of the inputs for a gate numbered κ(g) are from literals or gates h with κ(h) < κ(g). Let q be an argument such that g_q is the first gate in this topological ordering for which q ∈ S \ In(α_X). We must have q ∈ Poss(α_X), i.e. g_q ∈ Open(α_X), since otherwise q would already be excluded from any admissible set having In(α_X) as a subset. Consider the set of arguments in W that attack q. At least one attacker, p, must be a node h_p in T for which h_p ∈ Open(α_X). Now our proof is completed: S has no available counter-attack to the attack by p on q, since such could only arise from a Y-literal argument (all of which have been excluded) or from another gate argument r with g_r ∈ Open(α_X); however, then κ(g_r) < κ(g_q) and r ∈ S \ In(α_X), which contradicts the choice of q. Fig. 4 illustrates the possibilities. We conclude that the subset In(α_X) of W is in PE_H whenever there is no α_Y with which ⟨α_X, α_Y⟩ is true, and since the control arguments are not attacked by In(α_X), it is not a stable extension. □
4 A minor addition is required in that, since α_X is a partial instantiation (of ⟨X, Y⟩), it has to be shown that all arguments p that attack arguments q ∈ In(α_X) are not in Poss(α_X). With the generalisation of ∧ and ∨ to allow unassigned values, it is not difficult to show that if p ∈ Poss(α_X) then any argument q attacked by p in R cannot belong to In(α_X).
Fig. 4. Final cases in the proof of Lemma 16: q ∈ Poss(α_X) is not admissible (panel (a) shows the case q = ¬g with g an ∧-gate in Open(α_X); panel (b) the case q = g)
Lemma 17 If S ∈ SE_H then S = In(α_X, α_Y) for some instantiation ⟨α_X, α_Y⟩ that satisfies (X, Y).
Proof: Consider any S ∈ PE_H. It is certainly the case that S has as a subset some representative set, V_X, from the X-literal arguments. Suppose we modify the procedure described in the proof of Lemma 11 to one which takes as input a representative set V of the X-literals and returns a subset S_V of the arguments W of H in the following way: the procedure is run exactly as before, but starting from the X-literal arguments in V alone. The set S_V is an admissible subset of W that contains only X-literal arguments and a (possibly empty) subset of the gate arguments. Furthermore, given V, there is a unique S_V returned by this procedure. It follows that for any S ∈ PE_H, S_V ⊆ S for the representative set V associated with S. This set, V, matches the literal arguments selected by some instantiation α(V) of X, and so, as in the proof of Lemma 11, we can deduce that S_V = In(α(V)). This suffices to complete the proof: we have established that every set S in PE_H contains a subset In(α_X) for some instantiation α_X; from Lemma 16, In(α_X) fails to be maximal if and only if there is some α_Y for which ⟨α_X, α_Y⟩ satisfies (X, Y), in which case the unique preferred extension containing it is the stable extension In(α_X, α_Y). □
The proof of our main theorem is now easy to construct.
Proof: (of Theorem 2) It has already been shown that COHERENT ∈ Π₂ᵖ in Lemma 3. To complete the proof we need only show that (X, Y) is a positive instance of QSAT_2 if and only if H is coherent.
First suppose that for all instantiations α_X there is some instantiation α_Y for which ⟨α_X, α_Y⟩ satisfies (X, Y). From Lemma 15 and Lemma 17 it follows that all preferred extensions in H are of the form In(α_X, α_Y), and these are all stable extensions, hence H is coherent. Similarly, suppose that H is coherent. Let α_X be any total instantiation of X. Suppose, by way of contradiction, that for all α_Y, ⟨α_X, α_Y⟩ fails to satisfy (X, Y). From Lemma 16, In(α_X) is a preferred extension in this case, and hence (since H was assumed to be coherent) a stable extension. From Lemma 14 this implies that g_m ∈ In(α_X), which could only happen if g_m ∈ True(α_X), i.e. the value of the formula is determined in this case, independently of the instantiation of Y, contradicting the assumption that ⟨α_X, α_Y⟩ was false for every choice of α_Y. Thus we deduce that (X, Y) is a positive instance of QSAT_2 if and only if H is coherent, so completing the proof that COHERENT is Π₂ᵖ-complete. □
An easy Corollary of the reduction in Theorem 2 is
Corollary 18 SA is Π₂ᵖ-complete.
Proof: That SA ∈ Π₂ᵖ follows from the fact that x is sceptically accepted in H(X, A) if and only if: for every subset S of X either S is not a preferred extension or x is in S. To see that SA is Π₂ᵖ-hard, we need only observe that in order for H to be coherent, the gate argument g_m must occur in every preferred extension of H in the reduction of Theorem 2. Thus, H is coherent if and only if g_m is sceptically accepted in H. □
3 Consequences of Theorem 2 and Open Questions
A number of authors have recently considered mechanisms for establishing credulous acceptance of an argument p in a finitely presented system H(X, A) via dialogue games. The protocol for such games assumes two players - the Defender, (D) and Challenger, (C) - and prescribes a move (or locution) repertoire together
with the criteria governing the application of moves and concepts of 'winning' or
'losing'. The typical scenario is that following D asserting p the players take alternate
turns presenting counter-arguments (consistent with the structure of H) to
the argument asserted by their opponent in the previous move. A player loses when
no legal move (within the game protocol) is available. An important example of
such a game is the TPI-dispute formalism of [13] which provides a sound and complete
basis for credulous argumentation. An abstract framework for describing such
games was presented in [11], and is used in [3] also to define a game-theoretic approach
to Credulous Acceptance. Coherent systems are important with respect to
the game formalism of [13]: TPI-disputes define a sound and complete proof theory
for both Sceptical and Credulous games on coherent argument systems; the Sceptical
Game is not, however, complete in the case of incoherent systems. The sequence
of moves describing a completed Credulous Game (for both [3,13]) can be interpreted
as certificates of admissibility or inadmissibility for the argument disputed.
It may be noted that this view makes apparent a computational difficulty arising in
attempting to define similar 'Sceptical Games' applicable to incoherent systems:
the shortest certificate that CA(H, x) holds is the size of the smallest admissible set containing x - it is shown in [10] that there is always a strategy for D that can achieve this; it is also shown in [10] that TPI-disputes won by C, i.e. certificates that ¬CA(H, x), can require exponentially many (in |X|) moves. 5 If we consider a sound and complete dialogue game for sceptical reasoning, then the moves of a dispute won by D constitute a certificate of membership in a Π₂ᵖ-complete language: we would expect such certificates 'in general' to have exponential length; similarly, the moves in a dispute won by C constitute a certificate of membership in a Σ₂ᵖ-complete language and again these are 'likely' to be exponentially long. Thus a
further motivation of coherent systems is that sceptical acceptance is 'at worst' CO-
NP-complete: short certificates that an argument is not sceptically accepted always
exist.
The fact that sceptical acceptance is 'easier' to decide for coherent argument sys-
tems, raises the question of whether there are efficiently testable properties that can
be exploited in establishing coherence. The following is not difficult to prove:
Fact 19 If H(X ; A) is not coherent then it contains a (simple) directed cycle of
odd length.
Thus an absence of odd cycles (a property which can be efficiently decided) ensures
that the system is coherent. An open issue concerns coherence in random systems.
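The efficient test just mentioned can be made concrete. The following is a minimal Standard ML sketch (an illustration only; representing the attack digraph by a successor function and using a brute-force state search are assumptions, not taken from the text). It reports whether the digraph contains a directed cycle of odd length; if it reports false, Fact 19 guarantees that the system is coherent.

fun hasOddCycle (n, succs) =
    let
      (* depth-first search over (node, path-parity) states *)
      fun reach (v, p) visited =
          if List.exists (fn s => s = (v, p)) visited then visited
          else List.foldl (fn (w, vs) => reach (w, 1 - p) vs)
                          ((v, p) :: visited) (succs v)
      (* an odd closed walk through v exists iff (v, 1) is reachable from (v, 0),
         and any odd closed walk contains an odd directed cycle *)
      fun oddLoop v = List.exists (fn s => s = (v, 1)) (reach (v, 0) [])
    in
      List.exists oddLoop (List.tabulate (n, fn i => i))
    end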
One consequence of [4] is that random argument systems of n arguments in which
each attack occurs (independently) with probability p, almost surely have a stable
extension when p is a fixed probability in the range 0 < p < 1. Whether a similar
result can be proven for coherence is open.
As a final point, we observe that the interaction between graph-theoretic models of
argument systems and propositional formulae may well provide a fruitful source
5 Since these are certificates of membership in a CO-NP-complete language, this is unsur-
prising: [10] relates dispute lengths for such instances to the length of validity proofs in the
CUT-free Gentzen calculus.
of further techniques. We noted earlier that [7] provides a translation from CNF-formulae into an argument system; our constructions above define similar translations for arbitrary propositional formulae. We can equally, however, consider translations in the reverse direction, e.g. given H(X, A), it is not difficult to see that the propositional formula ∧_{x∈X} ( x ↔ ∧_{z : ⟨z,x⟩∈A} ¬z ) is satisfiable if and only if H has a stable extension. Similar encodings can be given for many of the decision problems of Table 1. Translating such forms back to argument systems, in
effect gives an alternative formulation of the original argument system from which
they were generated, and thus these provide mechanisms whereby any system, H
can be translated into another system H dec with properties of concern holding of
H if and only if related properties hold in H dec . Potentially this may permit both
established methodologies from classical propositional logic 6 and graph-theory to
be imported as techniques in argumentation.
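The stable-extension encoding sketched above is easy to check directly. In the following minimal Standard ML sketch (arguments as integers 0..n-1 and the attack relation as a list of pairs are assumptions made for the illustration), a candidate set S satisfies the formula exactly when, for every argument x, x is in S precisely if none of x's attackers is in S - which is the definition of a stable extension.

fun isStable attacks n s =
    let
      fun mem a = List.exists (fn x => x = a) s
    in
      List.all (fn x =>
          mem x = not (List.exists (fn (z, x') => x' = x andalso mem z) attacks))
        (List.tabulate (n, fn i => i))
    end

(* e.g. isStable [(0,1), (1,0)] 2 [0] evaluates to true *)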
In this article the complexity of deciding whether a finitely presented argument system is coherent has been considered and shown to be Π₂ᵖ-complete, employing techniques based entirely around the directed graph representation of an argument system. An important property of coherent systems is that sound and complete methods for establishing credulous acceptance adapt readily to provide similar methods for deciding sceptical acceptance, hence sceptical acceptance in coherent systems is CO-NP-complete. In contrast, as an easy corollary of our main result it can be shown that sceptical acceptance is Π₂ᵖ-complete in general. Finally we have outlined some directions by which the relationship between argument systems, propositional formulae, and graph-theoretic concepts offers potential for further research.
--R
reasoning using classical logic.
An abstract
Dialectical proof theories for the credulous preferred semantics of argumentation frameworks.
Preferred arguments are harder to compute than stable extensions.
Finding admissible and preferred arguments can be very hard.
Graph theoretical structures in logic programs and default theories.
On the acceptability of arguments and its fundamental role in nonmonotonic reasoning
The Complexity of Boolean Networks.
Two party immediate response disputes: Properties and efficiency.
Dialectic semantics for argumentation frameworks.
Logical Tools for Modelling Legal Argument.
Credulous and sceptical argument games for preferred semantics.
Complete sets and the polynomial-time hierarchy
--TR
The complexity of Boolean networks
Kernels in random graphs
On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and <italic>n</italic>-person games
reasoning using classical logic
Graph theoretical structures in logic programs and default theories
An abstract, argumentation-theoretic approach to default reasoning
Dialectic semantics for argumentation frameworks
Preferred Arguments are Harder to Compute than Stable Extension
Dialectical Proof Theories for the Credulous Preferred Semantics of Argumentation Frameworks
--CTR
P. M. Dung , R. A. Kowalski , F. Toni, Dialectic proof procedures for assumption-based, admissible argumentation, Artificial Intelligence, v.170 n.2, p.114-159, February 2006
Pietro Baroni , Massimiliano Giacomin , Giovanni Guida, Self-stabilizing defeat status computation: dealing with conflict management in multi-agent systems, Artificial Intelligence, v.165 n.2, p.187-259, July 2005
Paul E. Dunne , T. J. M. Bench-Capon, Two party immediate response disputes: properties and efficiency, Artificial Intelligence, v.149 n.2, p.221-250, October
Trevor J. M. Bench-Capon , Sylvie Doutre , Paul E. Dunne, Audiences in argumentation frameworks, Artificial Intelligence, v.171 n.1, p.42-71, January, 2007 | argument Systems;sceptical reasoning;coherence;credulous reasoning;computational complexity |
604187 | Typed compilation of recursive datatypes. | Standard ML employs an opaque (or generative) semantics of datatypes, in which every datatype declaration produces a new type that is different from any other type, including other identically defined datatypes. A natural way of accounting for this is to consider datatypes to be abstract. When this interpretation is applied to type-preserving compilation, however, it has the unfortunate consequence that datatype constructors cannot be inlined, substantially increasing the run-time cost of constructor invocation compared to a traditional compiler. In this paper we examine two approaches to eliminating function call overhead from datatype constructors. First, we consider a transparent interpretation of datatypes that does away with generativity, altering the semantics of SML; and second, we propose an interpretation of datatype constructors as coercions, which have no run-time effect or cost and faithfully implement SML semantics. | Introduction
The programming language Standard ML (SML) [9] provides a
distinctive mechanism for defining recursive types, known as a
datatype declaration. For example, the following declaration defines
the type of lists of integers:
datatype intlist = Nil
| Cons of int * intlist
This datatype declaration introduces the type intlist and two
constructors: Nil represents the empty list, and Cons combines
an integer and a list to produce a new list. For instance, the expression
Cons (1, Cons (2, Cons (3,Nil))) has type intlist
and corresponds to the list [1, 2,3]. Values of this datatype are deconstructed
by a case analysis that examines a list and determines
whether it was constructed with Nil or with Cons, and in the latter
case, extracting the original integer and list.
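For instance, a small function that performs such a case analysis, summing the elements of an intlist:

fun sum Nil = 0
  | sum (Cons (x, rest)) = x + sum rest

val six = sum (Cons (1, Cons (2, Cons (3, Nil))))   (* evaluates to 6 *)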
An important aspect of SML datatypes is that they are generative.
That is, every datatype declaration defines a type that is distinct
from any other type, including those produced by other, possibly
identical, datatype declarations. The formal Definition of SML [9]
makes this precise by stating that a datatype declaration produces
a new type name, but does not associate that name with a defini-
tion; in this sense, datatypes are similar to abstract types. Harper
and Stone [7] (hereafter, HS) give a type-theoretic interpretation of
SML by exhibiting a translation from SML into a simpler, typed
internal language. This translation is faithful to the Definition of
SML in the sense that, with a few well-known exceptions, it translates
an SML program into a well-typed IL program if and only
if the SML program is well-formed according to the Definition.
Consequently, we consider HS to be a suitable foundation for type-directed
compilation of SML. Furthermore, it seems likely that any
other suitable type-theoretic interpretation (i.e., one that is faithful
to the Definition) will encounter the same issues we explore in our
analysis.
Harper and Stone capture datatype generativity by translating a
datatype declaration as a module containing an abstract type and
functions to construct and deconstruct values of that type; thus in
the setting of the HS interpretation, datatypes are abstract types.
The generativity of datatypes poses some challenges for type-directed
compilation of SML. In particular, although the HS interpretation
is easy to understand and faithful to the Definition of
SML, it is inefficient when implemented naïvely. The problem
is that construction and deconstruction of datatype values require
calls to functions exported by the module defining the datatype;
this is unacceptable given the ubiquity of datatypes in SML code.
Conventional compilers, which disregard type information after an
initial type-checking phase, may dispense with this cost by inlining
those functions; that is, they may replace the function calls with
the actual code of the corresponding functions to eliminate the call
overhead. A type-directed compiler, however, does not have this
option since all optimizations, including inlining, must be type-
preserving. Moving the implementation of a datatype constructor
across the module boundary violates type abstraction and thus results
in ill-typed intermediate code. This will be made more precise
in Section 2.
In this paper, we will discuss two potential ways of handling this
performance problem. We will present these alternatives in the context
of the TILT/ML compiler developed at CMU [11, 14]; they are
relevant, however, not just to TILT, but to understanding the definition
of the language and type-preserving compilation in general.
The first approach is to do away with datatype generativity alto-
gether, replacing the abstract types in the HS interpretation with
concrete ones. We call this approach the transparent interpretation
of datatypes. Clearly, a compiler that does this is not an implementation
of Standard ML, and we will show that, although the
modified language does admit inlining of datatype constructors, it
has some unexpected properties. In particular, it is not the case that
every well-formed SML program is allowed under the transparent
interpretation.
In contrast, the second approach, which we have adopted in the
most recent version of the TILT compiler, offers an efficient way of
implementing datatypes in a typed setting that is consistent with the Definition. In particular, since a value of recursive type is typically
represented at run time in the same way as its unrolling, we can
observe that the mediating functions produced by the HS interpretation
all behave like the identity function at run time. We replace
these functions with special values that are distinguished from ordinary
functions by the introduction of "coercion types". We call this
the coercion interpretation of datatypes, and argue that it allows a
compilation strategy that generates code with a run-time efficiency
comparable to what would be attained if datatype constructors were
inlined.
The paper is structured as follows: Section 2 gives the details of the
HS interpretation of datatypes (which we also refer to as the opaque
interpretation of datatypes) and illustrates the problems with inlin-
ing. Section 3 discusses the transparent interpretation. Section 4
gives the coercion interpretation and discusses its properties. Section
5 gives a performance comparison of the three interpretations.
Section 6 discusses related work and Section 7 concludes.
2 The Opaque Interpretation of Datatypes
In this section, we review the parts of Harper and Stone's interpretation
of SML that are relevant to our discussion of datatypes. In par-
ticular, after defining the notation we use for our internal language,
we will give an example of the HS elaboration of datatypes. We
will refer to this example throughout the paper. We will also review
the way Harper and Stone define the matching of structures against
signatures, and discuss the implications this has for datatypes. This
will be important in Section 3, where we show some differences
between signature matching in SML and signature matching under
our transparent interpretation of datatypes.
Types              σ, τ ::= … | α | d
Recursive Types    d ::= μ_i(α_1, …, α_n).(τ_1, …, τ_n)
Terms              e ::= … | x | roll_d(e) | unroll_d(e)
Typing Contexts    Γ ::= ε | Γ, α | Γ, x:τ
Figure 1. Syntax of Iso-recursive Types

α⃗                   ≝  α_1, …, α_n    (where X⃗ is written similarly for any metavariable X, such as α or τ)
expand(d)           ≝  τ_i[μ_1(α⃗).(τ⃗), …, μ_n(α⃗).(τ⃗) / α_1, …, α_n]    (where d = μ_i(α⃗).(τ⃗))
Figure 2. Shorthand Definitions
2.1 Notation
Harper and Stone give their interpretation of SML as a translation,
called elaboration, from SML into a typed internal language (IL).
We will not give a complete formal description of the internal language
we use in this paper; instead, we will use ML-like syntax for
examples and employ the standard notation for function, sum and
product types. For a complete discussion of elaboration, including
a thorough treatment of the internal language, we refer the reader
to Harper and Stone [7]. Since we are focusing our attention on
datatypes, recursive types will be of particular importance. We will
therefore give a precise description of the semantics of the form of
recursive types we use.
The syntax for recursive types is given in Figure 1. Recursive types
are separated into their own syntactic subcategory, ranged over by d.
This is mostly a matter of notational convenience, as there are many
times when we wish to make it clear that a particular type is a recursive
one. A recursive type has the form μ_i(α_1, …, α_n).(τ_1, …, τ_n), where 1 ≤ i ≤ n and each α_j is a type variable that may appear free in any or all of τ_1, …, τ_n. Intuitively, this type is the ith in a system of n mutually recursive types. As such, it is isomorphic to τ_i with each α_j replaced by the jth component of the recursive bundle. Formally, it is isomorphic to the following somewhat unwieldy type:
  τ_i[μ_1(α_1, …, α_n).(τ_1, …, τ_n), …, μ_n(α_1, …, α_n).(τ_1, …, τ_n) / α_1, …, α_n]
(where, as usual, we denote by τ[σ_1, …, σ_n/α_1, …, α_n] the simultaneous capture-avoiding substitution of σ_1, …, σ_n for α_1, …, α_n in τ). Since we will be writing such types often, we use some notational conventions to make things clearer; these are shown in Figure 2. Using these shorthands, the above type may be written as expand(μ_i(α⃗).(τ⃗)).
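For instance, the intlist type of Section 1 corresponds to the one-component bundle μ_1(α).(unit + int * α), and its expansion expand(μ_1(α).(unit + int * α)) is the sum type unit + int * μ_1(α).(unit + int * α), whose left summand represents Nil and whose right summand represents Cons.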
The judgment forms of the static semantics of our internal language
are given in Figure 3, and the rules relevant to recursive types are
given in Figure 4. Note that the only rule that can be used to judge
two recursive types equal requires that the two types in question are
the same (ith) projection from bundles of the same length whose
respective components are all equal. In particular, there is no "un-
⊢ Γ ok          Well-formed context.
Γ ⊢ τ type      Well-formed type.
Γ ⊢ σ ≡ τ       Equivalence of types.
Γ ⊢ e : τ       Well-formed term.
Figure 3. Relevant Typing Judgments
Figure 4. Typing Rules for Iso-recursive Types
rolling" rule stating that d # expand(d); type theories in which this
equality holds are said to have equi-recursive types and are significantly
more complex [5]. The recursive types in our theory are iso-
recursive types that are isomorphic, but not equal, to their expan-
sions. The isomorphism is embodied by the roll and unroll operations
at the term level; the former turns a value of type expand(d)
into one of type d, and the latter is its inverse.
2.2 Elaborating Datatype Declarations
The HS interpretation of SML includes a full account of datatypes,
including generativity. The main idea is to encode datatypes as recursive
sum types but hide this implementation behind an opaque
signature. A datatype declaration therefore elaborates to a structure
that exports a number of abstract types and functions that construct
and deconstruct values of those types. For example, consider the
following pair of mutually recursive datatypes, representing expressions
and declarations in the abstract syntax of a toy language:
datatype exp = VarExp of var
| LetExp of dec * exp
and dec = ValDec of var * exp
| SeqDec of dec * dec
The HS elaboration of this declaration is given in Figure 5, using
ML-like syntax for readability. To construct a value of one of these
datatypes, a program must use the corresponding in function; these
functions each take an element of the sum type that is the "un-
rolling" of the datatype and produce a value of the datatype. More
concretely, we implement the constructors for exp and dec as follows
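A sketch of these definitions, assuming the sum injections inj1 and inj2 used elsewhere in the paper (the exact elided code may differ):

fun VarExp v        = ExpDec.exp_in (inj1 v)
fun LetExp (d, e)   = ExpDec.exp_in (inj2 (d, e))
fun ValDec (v, e)   = ExpDec.dec_in (inj1 (v, e))
fun SeqDec (d1, d2) = ExpDec.dec_in (inj2 (d1, d2))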
Notice that the types exp and dec are held abstract by the opaque
signature ascription. This captures the generativity of datatypes,
since the abstraction prevents ExpDec.exp and ExpDec.dec from
being judged equal to any other types. However, as we mentioned
in Section 1, this abstraction also prevents inlining of the in and
structure ExpDec :> sig
  type exp
  type dec
  val exp_in  : var + dec * exp -> exp
  val exp_out : exp -> var + dec * exp
  val dec_in  : (var * exp) + (dec * dec) -> dec
  val dec_out : dec -> (var * exp) + (dec * dec)
end =
struct
  type exp = μ_1(e, d).(var + d * e, var * e + d * d)
  type dec = μ_2(e, d).(var + d * e, var * e + d * d)
  fun exp_in  x = roll_exp x
  fun exp_out x = unroll_exp x
  fun dec_in  x = roll_dec x
  fun dec_out x = unroll_dec x
end
Figure 5. Harper-Stone Elaboration of exp-dec Example
out functions: for example, if we attempt to inline exp_in in the definition of VarExp above, we get fun VarExp v = roll_exp (inj1 v), but this is ill-typed outside of the ExpDec module because the fact
that exp is a recursive type is not visible. Thus performing inlining
on well-typed code can lead to ill-typed code, so we say that inlining
across abstraction boundaries is not type-preserving and therefore
not an acceptable strategy for a typed compiler. The problem
is that since we cannot inline in and out functions, our compiler
must pay the run-time cost of a function call every time a value
of a datatype is constructed or case-analyzed. Since these operations
occur very frequently in SML code, this performance penalty
is significant.
One strategy that can alleviate this somewhat is to hold the implementation
of a datatype abstract during elaboration, but to expose
its underlying implementation after elaboration to other code defined
in the same compilation unit. Calls to the constructors of
a locally-defined datatype can then be safely inlined. In the setting
of whole-program compilation, this approach can potentially
eliminate constructor call overhead for all datatypes except those
appearing as arguments to functors. However, in the context of separate
compilation, the clients of a datatype generally do not have
access to its implementation, but rather only to the specifications
of its constructors. As we shall see in Section 3, the specifications
of a datatype's constructors do not provide sufficient information to
correctly predict how the datatype is actually implemented, so the
above compilation strategy will have only limited success in a true
separate compilation setting.
2.3 Datatypes and Signature Matching
Standard ML makes an important distinction between datatype dec-
larations, which appear at the top level or in structures, and datatype
specifications, which appear in signatures. As we have seen, the HS
interpretation elaborates datatype declarations as opaquely sealed
structures; datatype specifications are translated into specifications
of structures. For example, the signature S given by
  sig
    datatype intlist = Nil
                     | Cons of int * intlist
  end
contains a datatype specification, and elaborates as follows:
  sig
    structure Intlist : sig
      type intlist
      val intlist_in  : unit + int * intlist -> intlist
      val intlist_out : intlist -> unit + int * intlist
    end
  end
A structure M will match S if M contains a structure Intlist of
the appropriate signature. 1 In particular, it is clear that the structure
definition produced by the HS interpretation for the datatype
intlist defined in Section 1 has this signature, so that datatype
declaration matches the specification above.
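As a concrete check (a minimal example, with the signature bound to the name S as above), the following SML declarations typecheck, because the structure declares an identical datatype:

signature S = sig
  datatype intlist = Nil | Cons of int * intlist
end

structure IntList : S = struct
  datatype intlist = Nil | Cons of int * intlist
end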
What is necessary in general for a datatype declaration to match a
specification under this interpretation? Since datatype declarations
are translated as opaquely sealed structures, and datatype specifications
are translated as structure specifications, matching a datatype
declaration against a spec boils down to matching one signature-
the one opaquely sealing the declaration structure-against another
signature.
Suppose we wish to know whether the signature S matches the signature
T; that is, whether a structure with signature S may also be
given the signature T. Intuitively, we must make sure that for every
specification in T there is a specification in S that is compatible
with it. For instance, if T contains a value specification of the form val x : τ, then S must also contain a specification val x : τ. For an abstract type specification of the form type t occurring in T, we must check that a specification of t also appears in S; furthermore, if the specification in S is a transparent one, say type t = t_imp, then when checking the remainder of the specifications in T we may assume in both signatures that t = t_imp. Transparent type specifications in T are similar, but there is the added requirement that if the specification in T is type t = t_spec and the specification in S is type t = t_imp, then t_spec and t_imp must be equivalent.
Returning to the specific question of datatype matching, a specification of the form
  datatype t_1 = c_1 of τ_1 and … and t_n = c_n of τ_n
(where the τ_i may be sum types) elaborates to a specification of a structure with the following signature:
  sig
    type t_1 … type t_n
    val t_1_in  : τ_1 -> t_1   …   val t_n_in  : τ_n -> t_n
    val t_1_out : t_1 -> τ_1   …   val t_n_out : t_n -> τ_n
  end
1 SML permits only datatypes to match datatype specifications, so the actual HS elaboration must use a name for the datatype that cannot be guessed by a programmer.
structure ExpDec :> sig
  type exp = μ_1(e, d).(var + d * e, var * e + d * d)
  type dec = μ_2(e, d).(var + d * e, var * e + d * d)
  (* ... specifications for in and out functions same as before ... *)
end =
struct
  (* ... same structure as before ... *)
end
Figure 6. The Transparent Elaboration of Exp and Dec
In order to match this signature, the structure corresponding to a
datatype declaration must define types named t 1 , ., t n and must
contain in and out functions of the appropriate type for each.
(Note that in any structure produced by elaborating a datatype declaration
under this interpretation, the t i 's will be abstract types.)
Thus, for example, if m ≥ n then the datatype declaration
  datatype t_1 = c_1 of σ_1 and … and t_m = c_m of σ_m
matches the above specification if and only if σ_i ≡ τ_i for each i ∈ 1..n, since this is necessary and sufficient for the types of the in and out
functions to match for the types mentioned in the specification.
3 A Transparent Interpretation of Datatypes
A natural approach to enabling the inlining of datatypes in a type-preserving
compiler is to do away with the generative semantics of
datatypes. In the context of the HS interpretation, this corresponds
to replacing the abstract type specification in the signature of a
datatype module with a transparent type definition, so we call this
modified interpretation the transparent interpretation of datatypes
(TID).
3.1 Making Datatypes Transparent
The idea of the transparent interpretation is to expose the implementation
of datatypes as recursive sum types during elaboration,
rather than hiding it. In our expdec example, this corresponds to
changing the declaration shown in Figure 5 to that shown in Figure
6 (we continue to use ML-like syntax for readability).
Importantly, this change must extend to datatype specifications as
well as datatype declarations. Thus, a structure that exports a
datatype must export its implementation transparently, using a signature
similar to the one in the figure-otherwise a datatype inside
a structure would appear to be generative outside that structure, and
there would be little point to the new interpretation.
As we have mentioned before, altering the interpretation of
datatypes to expose their implementation as recursive types really
creates a new language, which is neither a subset nor a superset
of Standard ML. An example of the most obvious difference can
be seen in Figure 7. In the figure, two datatypes are defined by
seemingly identical declarations. In SML, because datatypes are
generative, the two types List1.t and List2.t are distinct; since
the variable l has type List1.t but is passed to List2.Cons,
which expects List2.t, the function switch is ill-typed. Under
the transparent interpretation, however, the implementations of both
datatypes are exported transparently as μα.(unit + int * α). Thus
under this interpretation, List1.t and List2.t are equal and so
switch is a well-typed function.
It is clear that many programs like this one fail to type-check in
SML but succeed under the transparent interpretation; what is less
structure List1 = struct
  datatype t = Nil | Cons of int * t
end
structure List2 = struct
  datatype t = Nil | Cons of int * t
end
fun switch List1.Nil = List2.Nil
  | switch (List1.Cons (x, l)) = List2.Cons (x, l)
obvious is that there are some programs for which the opposite is
true. We will discuss two main reasons for this.
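Under SML's generative semantics, a conversion between the two list types has to be written out constructor by constructor; a minimal legal version of switch (an illustration, not taken from the paper) is:

fun convert List1.Nil = List2.Nil
  | convert (List1.Cons (x, l)) = List2.Cons (x, convert l)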
3.2 Problematic Datatype Matchings
Recall that according to the HS interpretation, a datatype matches
a datatype specification if the types of the datatype's in and out
functions match the types of the in and out functions in the speci-
fication. (Note: the types of the out functions match if and only if
the types of the in functions match, so we will hereafter refer only
to the in functions.) Under the transparent interpretation, how-
ever, it is also necessary that the recursive type implementing the
datatype match the one given in the specification. This is not a trivial
requirement; we will now give two examples of matchings that
succeed in SML but fail under the transparent interpretation.
3.2.1 A Simple Example
A very simple example of a problematic matching is the following.
Under the opaque interpretation, matching the structure
  struct
    datatype u = A of u * u | B of int
  end
against the signature
  sig
    datatype u = A of v | B of int
  end
amounts to checking that the type of the in function for u defined
in the structure matches that expected by the signature once u *
u has been substituted for v in the signature. (No definition is
substituted for u, since it is abstract in the structure.) After sub-
stitution, the type required by the signature for the in function
is (u * u) + int -> u, which is exactly the type of the function
given by the structure, so the matching succeeds.
Under the transparent interpretation, however, the structure defines u to be u_imp ≝ μα.((α * α) + int), but the signature specifies u as μα.(v + int). In order for matching to succeed, these two types must be equivalent after we have substituted u_imp * u_imp for v in the specification. That is, it is required that
  u_imp ≡ μα.((u_imp * u_imp) + int)
Observe that the type on the right is none other than μα.expand(u_imp). (Notice also that the bound variable α does not appear free in the body of this μ-type. Hereafter we will write such types with a wildcard in place of the type variable, e.g. μ_.expand(u_imp), to indicate that it is not used in the body of the μ.) This equivalence does not hold
for iso-recursive types, so the matching fails.
3.2.2 A More Complex Example
Another example of a datatype matching that is legal in SML but
fails under the transparent interpretation can be found by reconsidering
our running example of exp and dec. Under the opaque
interpretation, a structure containing this pair of datatypes matches
the following signature, which hides the fact that exp is a datatype:
  sig
    type exp
    datatype dec = ValDec of var * exp
                 | SeqDec of dec * dec
  end
When this datatype specification is elaborated under the transparent interpretation, however, the resulting IL signature looks like:
  sig
    type exp
    type dec = dec_spec
    (* ... specifications for the in and out functions ... *)
  end
where dec_spec ≝ μα.((var * exp) + (α * α)). Elaboration of the declarations of exp and dec, on the other hand, produces the structure in Figure 6, which has the signature:
  sig
    type exp = exp_imp
    type dec = dec_imp
    (* ... specifications for the in and out functions ... *)
  end
where we define
  exp_imp ≝ μ_1(e, d).(var + d * e, var * e + d * d)
  dec_imp ≝ μ_2(e, d).(var + d * e, var * e + d * d)
Matching the structure containing the datatypes against the signature
can only succeed if dec_spec ≡ dec_imp (under the assumption that exp ≡ exp_imp). This equivalence does not hold because the two μ-types have different numbers of components.
3.3 Problematic Signature Constraints
The module system of SML provides two ways to express sharing
of type information between structures. The first, where type,
modifies a signature by "patching in" a definition for a type the
signature originally held abstract. The second, sharing type, asserts
that two or more type names (possibly in different structures)
refer to the same type. Both of these forms of constraints are restricted
so that multiple inconsistent definitions are not given to a
single type name. In the case of sharing type, for example, it
is required that all the names be flexible, that is, they must either
be abstract or defined as equal to another type that is abstract. Under
the opaque interpretation, datatypes are abstract and therefore
flexible, meaning they can be shared; under the transparent inter-
pretation, datatypes are concretely defined and hence can never be
shared. For example, the following signature is legal in SML:
  sig
    structure A : sig
      type s
      datatype t = T of s
    end
    structure B : sig
      type s
      datatype t = T of s
    end
    sharing type A.t = B.t
  end
We can write an equivalent signature by replacing the sharing type line with a where type constraint equating the two datatypes, which is
also valid SML. Neither of these signatures elaborates successfully
under the transparent interpretation of datatypes, since under that
interpretation the datatypes are transparent and therefore ineligible
for either sharing or where type.
Another example is the following signature:
  signature … = sig
    type s
    val …
    datatype …
    sharing type …
  end
(Again, we can construct an analogous example with where type.)
Since the name B.t is flexible under the opaque interpretation but
not the transparent, this code is legal SML but must be rejected
under the transparent interpretation.
3.4 Relaxing Recursive Type Equivalence
We will now describe a way of weakening type equivalence (i.e.,
making it equate more types) so that the problematic datatype
matchings described in Section 3.2 succeed under the transparent
interpretation. (This will not help with the problematic sharing constraints
of Section 3.3.) The ideas in this section are based upon
the equivalence algorithm adopted by Shao [8] for the FLINT/ML
compiler.
To begin, consider the simple u-v example of Section 3.2.1. Recall that in that example, matching the datatype declaration against the spec required proving the equivalence
  u_imp ≡ μ_.((u_imp * u_imp) + int)
where the type on the right-hand side is just μ_.expand(u_imp). By simple variations on this example, it is easy to show that in general, for the transparent interpretation to be as permissive as the opaque, the following recursive type equivalence must hold:
  d ≡ μ_.expand(d)
We refer to this as the boxed-unroll rule. It says that a recursive type is equal to its unrolling "boxed" by a μ. An alternative formulation, equivalent to the first one by transitivity, makes two recursive types equal if their unrollings are equal, i.e.:
  if expand(d_1) ≡ expand(d_2) then d_1 ≡ d_2
Intuitively, this rule is needed because datatype matching succeeds
under the opaque interpretation whenever the unrolled form of the
datatype implementation equals the unrolled form of the datatype
spec (because these are both supposed to describe the domain of
the in function).
Although the boxed-unroll equivalence is necessary for the transparent
interpretation of datatypes to admit all matchings admitted
by the opaque one, it is not sufficient; to see this, consider the problematic
exp-dec matching from Section 3.2.2. The problematic
constraint in that example is:
  dec_spec ≡ dec_imp
where dec_spec = μα.((var * exp_imp) + (α * α)) (substituting exp_imp for exp in dec_imp has no effect, since the variable does not appear free). Expanding the definitions of these types, we get the constraint:
  μα.((var * exp_imp) + (α * α)) ≡ μ_2(e, d).(var + d * e, var * e + d * d)
The boxed-unroll rule is insufficient to prove this equivalence. In order to apply boxed-unroll to prove these two types equivalent, we must be able to prove that their unrollings are equivalent, in other words that
  (var * exp_imp) + (dec_spec * dec_spec) ≡ (var * exp_imp) + (dec_imp * dec_imp)
But we cannot prove this without first proving dec_spec ≡ dec_imp,
which is exactly what we set out to prove in the first place! The
boxed-unroll rule is therefore unhelpful in this case.
The trouble is that proving the premise of the boxed-unroll rule (the
equivalence of expand(d_1) and expand(d_2)) may require proving
the conclusion (the equivalence of d 1 and d 2 ). Similar problems
have been addressed in the context of general equi-recursive types.
In that setting, deciding type equivalence involves assuming the
conclusions of equivalence rules when proving their premises [1, 2].
Applying this idea provides a natural solution to the problem discussed
in the previous section. We can maintain a "trail" of type-
equivalence assumptions; when deciding the equivalence of two recursive
types, we add that equivalence to the trail before comparing
their unrollings.
Formally, the equivalence judgement itself becomes Γ; A ⊢ σ ≡ τ, where A is a set of assumptions, each of the form τ_1 ≡ τ_2. All the equivalence rules in the static semantics must be modified to account for the trail. In all the rules except those for recursive types, the trail is simply passed unchanged from the conclusions to the premises. There are two new rules that handle recursive types:
  (d_1 ≡ d_2) ∈ A   implies   Γ; A ⊢ d_1 ≡ d_2
  Γ; A ∪ {d_1 ≡ d_2} ⊢ expand(d_1) ≡ expand(d_2)   implies   Γ; A ⊢ d_1 ≡ d_2
The first rule allows an assumption from the trail to be used; the second
rule is an enhanced form of the boxed-unroll rule that adds the
conclusion to the assumptions of the premise. It is clear that the trail
is just what is necessary in order to resolve the exp-dec anomaly
described above; before comparing the unrollings of dec spec and
dec imp , we add the assumption dec spec # dec imp to the trail; we
then use this assumption to avoid the cyclic dependency we encountered
before.
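A minimal Standard ML sketch of such a trailing check follows (a toy model only - TILT's actual algorithm works over the MIL, with higher kinds and singletons; the type grammar, constructor names, and the restriction to single-component μ-types here are all simplifications):

datatype ty = TVar of string
            | Prod of ty * ty
            | Sum  of ty * ty
            | Mu   of string * ty                  (* mu a. t *)

fun subst (a, r) (TVar b)     = if a = b then r else TVar b
  | subst ar     (Prod (t, u)) = Prod (subst ar t, subst ar u)
  | subst ar     (Sum (t, u))  = Sum (subst ar t, subst ar u)
  | subst (a, r) (Mu (b, t))   =
      if a = b then Mu (b, t) else Mu (b, subst (a, r) t)   (* capture ignored in this sketch *)

fun expand (d as Mu (a, t)) = subst (a, d) t
  | expand t                = t

fun equiv trail (t1, t2) =
    case (t1, t2) of
      (TVar a, TVar b)           => a = b
    | (Prod (a, b), Prod (c, d)) => equiv trail (a, c) andalso equiv trail (b, d)
    | (Sum (a, b), Sum (c, d))   => equiv trail (a, c) andalso equiv trail (b, d)
    | (Mu _, Mu _)               =>
        (* the two trailing rules: use an assumption if present, otherwise
           record the pair and compare the unrollings (boxed-unroll) *)
        List.exists (fn p => p = (t1, t2)) trail
        orelse equiv ((t1, t2) :: trail) (expand t1, expand t2)
    | _                          => false

(* e.g. the u-v example of Section 3.2.1 now succeeds: *)
val u_imp = Mu ("a", Sum (Prod (TVar "a", TVar "a"), TVar "int"))
val boxed = Mu ("b", expand u_imp)          (* mu _. expand(u_imp) *)
val ok    = equiv [] (u_imp, boxed)         (* evaluates to true *)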
In fact, the trailing version of the boxed-unroll rule is sufficient
to ensure that the transparent interpretation accepts all datatype
matchings accepted by SML. To see why, consider a datatype spec-
ification
  datatype t_1 = c_1 of τ_1 and … and t_n = c_n of τ_n
(where the τ_i may be sum types in which the t_i may occur).
Suppose that some implementation matches this spec under the
opaque interpretation; the implementation of each type t i must be
a recursive type d_i. Furthermore, the type of the t_i in function given in the spec is τ_i -> t_i, and the type of its implementation is expand(d_i) -> d_i. Because the matching succeeds under the opaque interpretation, we know that these types are equal after each d_i has been substituted for t_i; thus we know that expand(d_i) ≡ τ_i[d_1, …, d_n / t_1, …, t_n] for each i.
When the specification is elaborated under the transparent interpre-
tation, however, the resulting signature declares that the implementation
of each t i is the appropriate projection from a recursive bundle
determined by the spec itself. That is, each t i is transparently
specified as μ_i(t⃗).(τ⃗). In order for the implementation to match this
transparent specification, it is thus sufficient to prove the following
theorem:
Theorem 1 If ∀i ∈ 1..n, Γ; ∅ ⊢ expand(d_i) ≡ τ_i[d_1, …, d_n / t_1, …, t_n], then ∀i ∈ 1..n, Γ; ∅ ⊢ d_i ≡ μ_i(t⃗).(τ⃗).
Proof: See Appendix A. □
While we have given a formal argument why the trailing version
of the boxed-unroll rule is flexible enough to allow the datatype
matchings of SML to typecheck under the transparent interpreta-
tion, we have not been precise about how maintaining a trail relates
to the rest of type equivalence. In fact, the only work regarding
trails we are aware of is the seminal work of Amadio and
Cardelli [1] on subtyping equi-recursive types, and its later coinductive
axiomatization by Brandt and Henglein [2], both of which
are conducted in the context of the simply-typed λ-calculus. Our
trailing boxed-unroll rule can be viewed as a restriction of the corresponding
rule in Amadio and Cardelli's trailing algorithm so that
it is only applicable when both types being compared are recursive
types.
It is not clear, though, how trails affect more complex type systems
that contain type constructors of higher kind, such as Gi-
rard's F_ω [6]. In addition to higher kinds, the MIL (Middle Intermediate
Language) of TILT employs singleton kinds to model
sharing [13], and the proof that MIL typechecking is
decidable is rather delicate and involved. While we have implemented
the above trailing algorithm in TILT for experimental purposes
(see Section 5), the interaction of trails and singletons is not
well-understood.
As for the remaining conflict between the transparent interpretation
and type sharing, one might argue that the solution is to broaden
SML's semantics for sharing constraints to permit sharing of rigid
components. The problem is that the kind of
sharing that would be necessary to make the examples of Section
3.3 typecheck under the transparent interpretation would require
some form of type unification. It is difficult to determine
where to draw the line between SML's sharing semantics and full
higher-order unification, which is undecidable. Moreover, unification
would constitute a significant change to SML's semantics, disproportionate
to the original problem of efficiently implementing
datatypes.
4 A Coercion Interpretation of Datatypes
In this section, we will discuss a treatment of datatypes based on
coercions. This solution will closely resemble the Harper-Stone
interpretation, and thus will not require the boxed-unroll rule or a
trail algorithm, but will not incur the run-time cost of a function call
at constructor application sites either.
4.1 Representation of Datatype Values
The calculus we have discussed in this paper can be given the usual
structured operational semantics, in which an expression of the
form roll d (v) is itself a value if v is a value. (From here on we
will assume that the metavariable v ranges only over values.) In
fact, it can be shown without difficulty that any closed value of a
datatype d must have the form roll d (v) where v is a closed value
of type expand(d). Thus the roll operator plays a similar role
to that of the inj 1 operator for sum types, as far as the high-level
language semantics is concerned.
Although we specify the behavior of programs in our language with
a formal operational semantics, it is our intent that programs be
compiled into machine code for execution, which forces us to take
a slightly different view of data. Rather than working directly with
high-level language values, compiled programs manipulate representations
of those values. A compiler is free to choose the representation
scheme it uses, provided that the basic operations of the
language can be faithfully performed on representations. For exam-
ple, most compilers construct the value inj 1 (v) by attaching a tag
to the value v and storing this new object somewhere. This tagging
is necessary in order to implement the case construct. In particular,
the representation of any value of sum type must carry enough
information to determine whether it was created with inj 1 or inj 2
and recover a representation of the injected value.
What are the requirements for representations of values of recursive types? It turns out that they are somewhat weaker than for sums.
The elimination form for recursive types is unroll, which (unlike
case) does not need to extract any information from its argument
other than the original rolled value. In fact, the only requirement is
that a representation of v can be extracted from any representation
of roll d (v). Thus one reasonable representation strategy is to represent
roll d (v) exactly the same as v. In the companion technical
report [15], we give a more precise argument as to why this is rea-
sonable, making use of two key insights. First, it is an invariant of
the TILT compiler that the representation of any value fits in a single
machine register; anything larger than 32 bits is always stored
in the heap. This means that all possible complications having to do
with the sizes of recursive values are avoided. Second, we define
representations for values, not types; that is, we define the set of
machine words that can represent the value v by structural induction
on v, rather than defining the set of words that can represent
values of type t by induction on t as might be expected.
The TILT compiler adopts this strategy of identifying the representations
of roll d (v) and v, which has the pleasant consequence that
the roll and unroll operations are "no-ops". For instance, the
untyped machine code generated by the compiler for the expression
roll d (e) need not differ from the code for e alone, since if
the latter evaluates to v then the former evaluates to roll d (v), and
Types  τ ::= … | ∀α⃗.(τ_1 ⇒ τ_2)
Terms  e ::= … | Λα⃗.fold_d | Λα⃗.unfold_d | v@(τ⃗; e)
Figure 8. Syntax of Coercions
the representations of these two values are the same. The reverse
happens for unroll.
This, in turn, has an important consequence for datatypes. Since the
in and out functions produced by the HS elaboration of datatypes
do nothing but roll or unroll their arguments, the code generated
for any in or out function will be the same as that of the identity
function. Hence, the only run-time cost incurred by using an in
function to construct a datatype value is the overhead of the function
call itself. In the remainder of this section we will explain how to
eliminate this cost by allowing the types of the in and out functions
to reflect the fact that their implementations are trivial.
4.2 The Coercion Interpretation
To mark in and out functions as run-time no-ops, we use coer-
cions, which are similar to functions, except that they are known
to be no-ops and therefore no code needs to be generated for coercion
applications. We incorporate coercions into the term level
of our language and introduce special coercion types to which they
belong. Figure 8 gives the changes to the syntax of our calculus.
Note that while we have so far confined our discussion to monomorphic
datatypes, the general case of polymorphic datatypes will require
polymorphic coercions. The syntax we give here is essentially
that used in the TILT compiler; it does not address non-uniform
datatypes.
We extend the type level of the language with a type ∀α⃗.(τ_1 ⇒ τ_2) for (possibly polymorphic) coercions; a value of this type is a coercion that takes length(α⃗) type arguments and then can change a value of type τ_1 into one of type τ_2 (where, of course, variables from α⃗ can appear in either of these types). When α⃗ is empty, we will simply write τ_1 ⇒ τ_2.
Similarly, we extend the term level with the (possibly polymorphic)
coercion values Λα⃗.fold_d and Λα⃗.unfold_d; these take the place of roll and unroll expressions. Coercions are applied to (type and value) arguments in an expression of the form v@(τ⃗; e); here v is the coercion, τ⃗ are the type arguments, and e is the value to
be coerced. Note that the coercion is syntactically restricted to be
a value; this makes the calculus more amenable to a simple code
generation strategy, as we will discuss in Section 4.3. The typing
rules for coercions are essentially the same as if they were ordinary
polymorphic functions, and are shown in Figure 9.
With these modifications to the language in place, we can elaborate
the datatypes exp and dec using coercions instead of functions to
implement the in and out operations. The result of elaborating
this pair of datatypes is shown in Figure 10. Note that the interface
is exactly the same as the HS interface shown in Section 2 except
that the function arrows (->) have been replaced by coercion arrows
(⇒). This interface is implemented by defining exp and dec in the
same way as in the HS interpretation, and implementing the in and
out coercions as the appropriate fold and unfold values. The
elaboration of a constructor application is superficially similar to
the opaque interpretation, but a coercion application is generated
instead of a function call. For instance, LetExp(d,e) elaborates as
exp_in@(inj2(d, e)).
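Deconstruction is symmetric: a case analysis on a value e of type exp is elaborated, as under the opaque interpretation, by first applying the out coercion and then branching on the resulting sum. A sketch in the same ML-like notation (the concrete elaboration of case is not shown in this paper):

case exp_out@(e) of
    inj1 v        => (* VarExp branch *) ...
  | inj2 (d, e')  => (* LetExp branch *) ...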
4.3 Coercion Erasure
We are now ready to formally justify our claim that coercions may
be implemented by erasure, that is, that it is sound for a compiler
to consider coercions only as "retyping operators" and ignore them
when generating code. First, we will describe the operational semantics
of the coercion constructs we have added to our internal
language. Next, we will give a translation from our calculus into an
untyped one in which coercion applications disappear. Finally, we
will state a theorem guaranteeing that the translation is safe.
The operational semantics of our coercion constructs are shown in
Figure
11. We extend the class of values with the fold and unfold
coercions, as well as the application of a fold coercion to a value.
These are the canonical forms of coercion types and recursive types
respectively. The two inference rules shown in Figure 11 define
the manner in which coercion applications are evaluated. The evaluation
of a coercion application is similar to the evaluation of a
normal function application where the applicand is already a value.
The rule on the left specifies that the argument is reduced until it is
a value. If the applicand is a fold, then the application itself is a
value. If the applicand is an unfold, then the argument must have a
recursive type and therefore (by canonical forms) consist of a fold
applied to a value v. The rule on the right defines unfold to be the
left inverse of fold, and hence this evaluates to v.
As we have already discussed, the data representation strategy of
TILT is such that no code needs to be generated to compute foldv
from v, nor to compute the result of cancelling a fold with an
unfold. Thus it seems intuitive that to generate code for a coercion
application v@( # t;e), the compiler can simply generate code for e,
with the result that datatype constructors and destructors under the
coercion interpretation have the same run-time costs as Harper and
Stone's functions would if they were inlined. To make this more
precise, we now define an erasure mapping to translate terms of
our typed internal language into an untyped language with no coercion
application. The untyped nature of the target language (and of
machine language) is important: treating v as foldv would destroy
the subject reduction property of a typed language.
Figure 12 gives the syntax of our untyped target language and the coercion-erasing translation. The target language is intended to be essentially the same as our typed internal language, except that all types and coercion applications have been removed. It contains untyped coercion values fold and unfold, but no coercion application form. The erasure translation turns expressions with type annotations into expressions without them (λ-abstraction and coercion values are shown in the figure), and removes coercion applications so that the erasure of v@(σ; e) is just the erasure of e. In particular, for any value v, v and fold v are identified by the translation, which
is consistent with our intuition about the compiler. The operational
semantics of the target language is analogous to that of the source.
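The following SML sketch illustrates such a coercion-erasing translation on a toy term syntax. It is an assumption-laden illustration, not the paper's definition: type annotations are dropped, and a coercion application erases to the erasure of its argument.

datatype ty = TyVar of string | Arrow of ty * ty     (* placeholder types *)

datatype src =                        (* typed source terms (simplified) *)
    SVar of string
  | SLam of string * ty * src
  | SApp of src * src
  | SFold
  | SUnfold
  | SCApp of src * src                (* coercion application; constructor argument omitted *)

datatype tgt =                        (* untyped target terms *)
    TVar of string
  | TLam of string * tgt
  | TApp of tgt * tgt
  | TFold
  | TUnfold

fun erase (SVar x)         = TVar x
  | erase (SLam (x, _, e)) = TLam (x, erase e)     (* drop the type annotation *)
  | erase (SApp (e1, e2))  = TApp (erase e1, erase e2)
  | erase SFold            = TFold
  | erase SUnfold          = TUnfold
  | erase (SCApp (_, e))   = erase e               (* coercion applications disappear *)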
The language with coercions has the important type-safety property
that if a term is well-typed, its evaluation does not get stuck. An
important theorem is that the coercion-erasing translation preserves
the safety of well-typed programs:
Theorem 2 (Erasure Preserves Safety) If Γ ⊢ e : τ, then the erasure of e is safe. That is, if the erasure of e steps to some term f, then f is not stuck.
Proof: See the companion technical report [15].
Figure 9. Typing Rules for Coercions
structure ExpDec :> sig
  type exp
  type dec
  val exp_in  : var + (dec * exp) ⇒ exp
  val exp_out : exp ⇒ var + (dec * exp)
  val dec_in  : (var * exp) ⇒ dec
  val dec_out : dec ⇒ (var * exp)
end =
struct
  ...
  val exp_in  = fold_exp
  val exp_out = unfold_exp
  val dec_in  = fold_dec
  val dec_out = unfold_dec
end
Figure 10. Elaboration of exp and dec Under the Coercion Interpretation
Values v ::= · · · | Λα.fold_τ | Λα.unfold_τ | (Λα.fold_τ)@(σ; v)
Figure 11. Operational Semantics for Coercions
The untyped values include fold and unfold; the erasure of Λα.fold_τ is fold and the erasure of Λα.unfold_τ is unfold.
Figure 12. Target Language Syntax; Type and Coercion Erasure
Test        HS (naïve)   Coercion   Transparent
life            8.233       2.161        2.380
leroy           5.497       4.069        3.986
boyer           2.031       1.559        1.364
simple          1.506       1.003        0.908
tyan           16.239       8.477        9.512
msort           1.685       0.860        1.012
pia             1.758       1.494        1.417
lexgen         11.052       5.599        5.239
Figure 13. Performance Comparison (CPU time in seconds; columns ordered as the three compiler versions are introduced in the text)
Note that the value restriction on coercions is crucial to the soundness
of this "coercion erasure" interpretation. Since a divergent expression
can be given an arbitrary type, including a coercion type,
any semantics in which a coercion expression is not evaluated before
it is applied fails to be type-safe. Thus if arbitrary expressions
of coercion type could appear in application positions, the compiler
would have to generate code for them. Since values cannot diverge
or have effects, we are free to ignore coercion applications when
we generate code.
5 Performance
To evaluate the relative performance of the different interpretations
of datatypes we have discussed, we performed experiments using
three different versions of the TILT compiler: one that implements a naïve Harper-Stone interpretation in which the construction of a non-locally-defined datatype requires a function call (in particular, this version implements the strategy described at the end of Section 2.2); one that implements
the coercion interpretation of datatypes; and one that implements
the transparent interpretation. We compiled ten different
benchmarks using each version of the compiler; the running times
for the resulting executables (averaged over three trials) are shown in Figure 13. All tests were run on an UltraSPARC Enterprise server; the times reported are CPU time in seconds.
The measurements clearly indicate that the overhead due to
datatype constructor function calls under the naïve HS interpretation
is significant. The optimizations afforded by the coercion and
transparent interpretations provide comparable speedups over the
opaque interpretation, both on the order of 37% (comparing the total
running times). Given that, of the two optimized approaches,
only the coercion interpretation is entirely faithful to the semantics
of SML, and since the theory of coercion types is a simpler and
more orthogonal extension to the HS type theory than the trailing
algorithm of Section 3.4, we believe the coercion interpretation is
the more robust choice.
6 Related Work
Our trail algorithm for weakened recursive type equivalence is
based on the one implemented by Shao in the FLINT intermediate
language of the Standard ML of New Jersey compiler [12]. The
typing rules in Section 3.4 are based on the formal semantics for
FLINT given by League and Shao [8], although we are the first to
give a formal argument that their trailing algorithm actually works.
It is important to note that SML/NJ only implements the transparent interpretation internally: the opaque interpretation is employed
during elaboration, and datatype specifications are made transparent
only afterward. As the examples of Section 3.3 illustrate, there
are programs that typecheck according to SML but not under the
transparent interpretation even with trailing equivalence, so it is unclear
what SML/NJ does (after elaboration) in these cases. As it
happens, the final example of Section 3.3, which is valid SML, is
rejected by the SML/NJ compiler.
Curien and Ghelli [4] and Crary [3] have defined languages that use
coercions to replace subsumption rules in languages with subtyp-
ing. Crary's calculus of coercions includes roll and unroll for
recursive types, but since the focus of his paper is on subtyping he
does not explore the potential uses of these coercions in detail. Nev-
ertheless, our notion of coercion erasure, and the proof of our safety
preservation theorem, are based on Crary's. The implementation of
Typed Assembly Language for the x86 architecture (TALx86) [10]
allows operands to be annotated with coercions that change their
types but not their representations; these coercions include roll
and unroll as well as introduction of sums and elimination of universal
quantifiers.
Our intermediate language differs from these in that we include coercions
in the term level of the language rather than treating them
specially in the syntax. This simplifies the presentation of the coercion
interpretation of datatypes, and it simplified our implementation
because it required a smaller incremental change from earlier
versions of the TILT compiler. However, including coercions in the
term level is a bit unnatural, and our planned extension of TILT
with a type-preserving back-end will likely involve a full coercion
calculus.
7 Conclusion
The generative nature of SML datatypes poses a significant challenge
for efficient type-preserving compilation. Generativity can
be correctly understood by interpreting datatypes as structures that
hold their type components abstract, exporting functions that construct
and deconstruct datatype values. Under this interpretation,
the inlining of datatype construction and deconstruction operations
is not type-preserving and hence cannot be performed by a typed
compiler such as TILT.
In this paper, we have discussed two approaches to eliminating the
function call overhead in a type-preserving way. The first, doing
away with generativity by making the type components of datatype structures transparent, results in a new language that is different from, but neither more nor less permissive than, Standard ML.
Some of the lost expressiveness can be regained by relaxing the
rules of type equivalence in the intermediate language, at the expense
of complicating the type theory. The fact that the transparent
interpretation forbids datatypes to appear in sharing type or
where type signature constraints is unfortunate; it is possible that
a revision of the semantics of these constructs could remove the
restriction.
The second approach, replacing the construction and deconstruction
functions of datatypes with coercions that may be erased during
code generation, eliminates the function call overhead without
changing the static semantics of the external language. However,
the erasure of coercions only makes sense in a setting where a
recursive-type value and its unrolling are represented the same at
run time. The coercion interpretation of datatypes has been implemented
in the TILT compiler.
Although we have presented our analysis of SML datatypes in the
context of Harper-Stone and the TILT compiler, the idea of "coercion types" is one that we think is generally useful. Terms that
serve only as retyping operations are pervasive in typed intermediate
languages, and are usually described as "coercions" that can be
eliminated before running the code. However, applications of these
informal coercions cannot in general be erased if there is no way to
distinguish coercions from ordinary functions by their types; this is
a problem especially in the presence of true separate compilation.
Our contribution is to provide a simple mechanism that permits coercive
terms to be recognized as such and their applications to be
safely eliminated, without requiring significant syntactic and metatheoretic
overhead.
--R
Subtyping recursive types.
Coinductive axiomatization of recursive type equality and subtyping.
Typed compilation of inclusive subtyping.
Coherence of subsumption.
Recursive subtyping revealed.
Formal semantics of the FLINT intermediate language.
David MacQueen.
A realistic typed assembly language.
Implementing the TILT internal language.
An overview of the FLINT/ML compiler.
Deciding type equivalence in a language with singleton kinds.
Typed compilation of recursive datatypes.
--CTR
Derek Dreyer, Recursive type generativity, ACM SIGPLAN Notices, v.40 n.9, September 2005
Vijay S. Menon , Neal Glew , Brian R. Murphy , Andrew McCreight , Tatiana Shpeisman , Ali-Reza Adl-Tabatabai , Leaf Petersen, A verifiable SSA program representation for aggressive compiler optimization, ACM SIGPLAN Notices, v.41 n.1, p.397-408, January 2006
Dimitrios Vytiniotis , Geoffrey Washburn , Stephanie Weirich, An open and shut typecase, Proceedings of the 2005 ACM SIGPLAN international workshop on Types in languages design and implementation, p.13-24, January 10-10, 2005, Long Beach, California, USA | recursive types;coercions;typed compilation;standard ML |
604189 | A typed interface for garbage collection. | An important consideration for certified code systems is the interaction of the untrusted program with the runtime system, most notably the garbage collector. Most certified code systems that treat the garbage collector as part of the trusted computing base dispense with this issue by using a collector whose interface with the program is simple enough that it does not pose any certification challenges. However, this approach rules out the use of many sophisticated high-performance garbage collectors. We present the language LGC, whose type system is capable of expressing the interface of a modern high-performance garbage collector. We use LGC to describe the interface to one such collector, which involves a substantial amount of programming at the type constructor level of the language. | Introduction
(This material is based on work supported in part by NSF grants CCR-9984812 and CCR-0121633, and by an NSF fellowship. Any opinions, findings, and conclusions or recommendations in this publication are those of the authors and do not reflect the views of this agency.)
In a certified code system, executable programs shipped from a producer to a client are accompanied by certificates
that provide evidence of their safety. The validity of a cer-
tificate, which can be mechanically verified by the client,
implies that the associated program is safe to execute. Examples
of certified code frameworks include Typed Assembly
Language [6] and Proof-Carrying Code [7, 8].
Most past research on certified code has focused on the
safety of the untrusted mobile code itself. However, it is also
important to consider the safety implications of the runtime
system to which that code is linked. There are two options
for dealing with this issue. One choice is to treat the runtime
system as part of the untrusted code, and certify its safety.
The other choice is to simply assume the runtime system
is correct-i.e., treat it as part of the trusted computing
base that includes the certificate verifier. Of course, even if
the runtime itself is assumed correct, the interaction of the
program with the runtime must be certified to conform to
the appropriate interface.
An important part of the runtime system for many modern
languages is the garbage collector. Frameworks in which
the runtime system must be certified must use certification
technology capable of proving a garbage collector safe. Work
on this approach includes that of Wang and Appel [10, 9]
and of Monnier et al. [4]. Of the systems that take the
second approach, many assume the existence of a trusted
conservative garbage collector; the advantage of this is that
the application interface of a conservative collector is so simple
that it can almost be ignored. There are performance
benefits to be gained by using a more precise collector; how-
ever, the interface of such a collector is more subtle, and the
issue of certifying program conformance to this interface can
no longer be ignored. In order to use a better garbage collector
for certified code applications, the interface of such a
collector must be described and expressed in a type system.
The topic of this paper is the specification of the interface
for a particular, modern garbage collector, namely that
of Cheng and Blelloch [1, 2], implemented in the TILT/ML
runtime system. After informally describing the behavior
of this collector and its interface to a running program, we
present a language whose type system can express this interface. This language, called LGC, is built up from a simple stack-based language we call LGC-, by extension with the
typing constructs necessary to express various elements of
the collector's interface. As we present LGC, we describe the
interface to Cheng's garbage collector, the precise definition
of which involves a substantial amount of programming in
the language of type constructors. Finally, we discuss the
expressiveness of the LGC language.
Figure 1: Frames on a Stack (the stack grows downwards; a function F has called G, so G's frame lies below F's, with newer frames below that; each frame contains the function's arguments, its other locals, and a return address, the one in G's frame pointing back into the code of F)
1.1 The Garbage Collector's Interface
The first part of a garbage collector's job is to find the root
set-those registers, globals and stack locations that contain
pointers into the heap. This task is the part of garbage collection
that requires compiler cooperation, and the part that
makes assumptions about the behavior of the program. In
this section we describe a simplified form of the root-finding
algorithm used in TILT/ML. We will ignore complications
such as an optimization for callee-save registers, and assume
that all roots are stored on the stack. We can therefore ignore
the additional work of finding roots among the registers
or global variables.
The garbage collector assumes that the stack is laid out
as a sequence of frames, each belonging to the particular
function that created it. Each frame contains a number
of data slots (including function arguments, local variables,
and temporaries), as well as a return address. A section
of a stack is illustrated in Figure 1. As usual, the stack is
shown growing downwards. In the figure, the function F has
called the function G; thus, the return address position in
G's frame will contain a location somewhere inside the code
of F. In fact, the return address found in G's frame uniquely
identifies the point in the program from which G was called,
and therefore also determines the layout of the frame above
its own.
The garbage collector uses this property to "parse" the
stack. When the program is compiled, the compiler emits
type information that is collected by the runtime system into
a GC table, which is a mapping from return addresses (iden-
tifying function call sites) to information about the stack
frame of the function containing the call site. When the
collector begins looking for roots, the newest frame on the
stack is that of the collector itself; the return address in this
frame can be looked up in the GC table to find a description
of the next frame, which belongs to the untrusted program.
The collector then moves through the stack, performing the
following steps for each frame:
1. Using the return address from below the frame being
examined, find the GC table entry that describes this
frame.
2. Using this GC table entry, determine the following information
. The locations of pointers in the current frame.
These are roots.
. The location of the return address within the current
frame.
. The size of the current frame.
3. Using this information, find the start of the next frame
and look up its GC table entry.
These steps are repeated until the base of the stack is
reached.
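This loop can be pictured with the following SML sketch. The concrete representations (a GC-table entry as a record of slot offsets, the stack as a list of words) are assumptions made for illustration and are not the actual TILT runtime structures.

type addr = int

(* One GC-table entry: which frame slots hold pointers, where the return
   address sits within the frame, and how large the frame is. *)
type gcEntry = { pointerSlots : int list, raOffset : int, frameSize : int }
type gcTable = (addr * gcEntry) list        (* keyed by return address *)

fun lookup (tbl : gcTable) (ra : addr) : gcEntry option =
    case List.find (fn (a, _) => a = ra) tbl of
        SOME (_, e) => SOME e
      | NONE => NONE

(* The stack is modeled as a list of words, newest slot first; ra is the
   return address found below the frame currently being examined.  The
   scan stops when the return address has no table entry, which here
   models reaching the base of the stack. *)
fun scan (tbl : gcTable) (stack : addr list) (ra : addr) (roots : addr list) =
    case lookup tbl ra of
        NONE => roots
      | SOME { pointerSlots, raOffset, frameSize } =>
          let
            val frame    = List.take (stack, frameSize)
            val newRoots = List.map (fn i => List.nth (frame, i)) pointerSlots
            val nextRa   = List.nth (frame, raOffset)
          in
            scan tbl (List.drop (stack, frameSize)) nextRa (newRoots @ roots)
          end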
Clearly, a correct GC table is essential for the operation
of the garbage collector. An incorrect value in the table
could lead to a variety of errors, from a single root pointer
being ignored to derailment of the entire stack-parsing pro-
cess. Put another way, it is crucial that the program structure
its use of the stack consistently with the frame descriptions
in the GC table. In this paper, we will present a language
in which the stack's layout can be precisely controlled,
giving us the ability to guarantee that the structure of the
stack during collection will be consistent with the collector's
expectations.
2 A Language With a GC Interface
The main goal of this paper is to describe a type system
in which the shape of the program stack can be made to
fit the pattern expected by the garbage collector; we must
therefore have a language in which stack manipulation is explicit
and which is expressive enough to describe the stack in
very precise terms. In this section, we begin to describe our
language, which we call LGC. We start with a simple core
language we call LGC - , which is a simple stack-based language
that does not have the sophisticated type constructs
to support garbage collection; we will then discuss the refinements
necessary to enforce compliance with a GC table.
The syntax and typing rules for full LGC are given in Appendix
A.
2.1 The Core Language
The syntax of LGC - is given in Figure 2. The language is
essentially a polymorphic λ-calculus with integers, booleans,
tuples and sum types, plus a stack that is handled much
the same way as in stack-based typed assembly language
(STAL) [5]. The details of the language that do not directly
relate to garbage collection are not particularly important
for our purposes-indeed, there are many possible language
designs that would work equally well-and so we will only
discuss those aspects briefly here. Here and throughout
the paper, we consider expressions that differ only in the names of bound variables to be identical, and we denote by E[E1, ..., En / X1, ..., Xn] the result of the simultaneous capture-avoiding substitution of E1 through En for the variables X1 through Xn in E.
Programs An LGC - program consists of a sequence of mutually
recursive code block definitions, followed by an ex-
pression. Each block binds a constructor context Δ and a stack type σ (written with sp:σ), indicating that it must be instantiated with some number and kind of type constructor arguments specified by Δ, and then may be invoked whenever the stack has type σ; invoking the block results in the evaluation of its body e. Notice that no data other
than the stack itself is passed into a block; this means that
Kinds k ::= T | ST
Expressions e ::= halt v | jump v | if v then e1 else e2 | case v of inj1 x1 ⇒ e1 | · · · | injn xn ⇒ en | let d in e
Figure 2: Syntax of LGC- (the full grammar of constructors, values, declarations, blocks, programs, and contexts)
all function arguments and results must be passed on the
stack. The return address in a function call must also be
passed on the stack, leading to a continuation-passing style
for programs. Also, because all the code blocks in a program
appear at the top level, programs must undergo closure conversion
before translation into LGC - .
Expressions The body of each block is an expression. The
expressions in LGC - include a halt instruction which stops
the computation, a jump instruction which takes a code label
and transfers control to the corresponding block, an if-
then-else construct, case analysis on sums and a form of
let-binding that performs one operation, possibly binding
the result to a variable, and continues with another expres-
sion. The bindings that may occur in a let are a simple value binding, arithmetic operations, injection into sum types, allocation of tuples, projection from tuples, unpacking of values of existential type (which binds a new constructor variable α in addition to the variable x), and stack operations. The stack operations are reading a stack slot (sp(i)), writing a stack slot (sp(i) := v), pushing a value onto the stack (push v) and popping i values off the stack (pop i).
The syntactic values in our language are variables (x), numerals (n), boolean constants (b), code labels (ℓ), instantiations of polymorphic values with constructors (v[c]), and packages containing a constructor and a value (pack ⟨c, v⟩ as τ), which have existential type.
Types and Kinds Our type theory has two kinds, T and
ST, which classify constructors. Constructors of kind T are
called types, and describe values; constructors of kind ST are
called stack types and describe the stack. The constructors
themselves include constructor variables (α), the base types int and bool, code label types (code(Δ; σ) → 0), n-ary products (τ1 × · · · × τn) and sums (τ1 + · · · + τn), existential types (∃α:k.τ), the empty stack type null, and non-empty stack types formed by prepending a type onto a stack type. (In STAL one writes these as nil and τ :: σ, respectively, but in LGC we prefer to reserve this ML-like list notation for actual lists of constructors.) The metavariable c will be used to range over all constructors; we will also use the names τ and σ when we intend that the constructor being named is a type or a stack type, respectively. We will extend the kind and constructor levels of the
type system later in the paper in order to more precisely
describe the shape of the stack.
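As a rough picture of these two syntactic categories, the following SML sketch models types and stack types as two mutually recursive datatypes. It is an illustration only: constructor variables are modeled as strings, and the constructor contexts of code types are elided.

datatype ty =                          (* constructors of kind T *)
    TVar of string
  | TInt
  | TBool
  | TCode of stackty                   (* code label type; the constructor context is elided *)
  | TProd of ty list
  | TSum of ty list
  | TExists of string * ty
and stackty =                          (* constructors of kind ST *)
    SVar of string
  | SNull
  | SCons of ty * stackty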
Static Semantics The typing rules for this simple language
are generally the expected ones. Due to space considera-
tions, we will not present them all here; they are generally
similar to those for the full LGC language, whose rules are
in
Appendix
A. We will, however, discuss some of the typing
rules before turning to examine how programs in this
language interact with a garbage collector. One of the simplest
typing rules for expressions in the language is the one
for the unconditional jump instruction. This rule states that it is legal to jump to a fully-instantiated code pointer (that is, one that does not expect any more constructor arguments) provided the stack type σ expected by the code pointer is the same as the current stack type. To jump to a polymorphic code pointer, one must first instantiate it by applying it to the appropriate number and kind of type arguments.
Another simple typing rule is the one for binding a value to a variable. Read algorithmically, this rule can be understood as follows: to check that the expression let x = v in e is valid, first find the type τ of the value v; then check that e is valid under the assumption that x has type τ.
2.2 Requirements for Garbage Collection
For the goal of certifying interaction with a garbage collec-
tor, LGC - is unacceptably simplistic. In fact, the syntax
and typing rules we have discussed so far appear to ignore
the collector completely. In this section, we begin to identify
the specific shortcomings of the language; once we have done
this, the remainder of the paper will be devoted to adding
the necessary refinements to the language, resulting in the
full LGC type system.
As we have already explained, if a program in our language
is to work properly with a garbage collector, the collector
must be able to find the roots whenever it is invoked.
In practice, the garbage collector is usually invoked when a
program attempts to create a new object in the heap and
there is insufficient space available. The expressions in our
language that perform allocation are tuple formation and
injection into sum types; this means that the garbage collector
may need to be able to find the root set during evaluation
of an expression of the form let x = ⟨v1, . . . , vn⟩ in e or let x = inj_i v in e. (We will only discuss tuples, since
the modifications necessary for sums are exactly analogous.)
A naïve version of the typing rule for tuple allocation would simply give the new variable x the type τ1 × · · · × τn and check the continuation e under the unchanged stack type σ.
There are two main changes that must be made to this rule.
First, we must force all the roots to be on the stack where
they can be found by the collector; and second, we must
force the stack to have a structure the collector can parse.
The first problem stems from the fact that we have variables
in our language that are not stack allocated, but we
want to assume for the sake of simplicity that the garbage
collector scans only the stack when looking for roots. A free
occurrence of a variable y in the expression e above could
denote a pointer; that pointer could be used in the evaluation
of e, but if there is no copy of that pointer on the stack,
the garbage collector may not identify it as live. The solution
is to force the program to "dump" the contents of all its
variables to the stack whenever a collection might occur. To
accomplish this, we require the continuation e to be closed
except for the result x of the allocation; the revised rule therefore checks the continuation e in a context containing only x.
Note that a more realistic abstract machine would have registers
instead of variables; in order to support GC table certification
on such a machine we would have to apply the
techniques we discuss in the remainder of this paper to the
register file as well as the stack. This seems straightforward,
but for the sake of simplicity we will limit our discussion in
this paper to a collector that can only find roots in the stack.
The second modification that must be made to the allocation
rule is significantly more difficult to formulate. In fact, the rest of this paper is devoted to adding a single additional premise to the rule, namely one that stipulates that the stack type σ is parsable. That is, we must describe the
structure that the stack must have in order to be scanned by
the collector, and express that structure in a way that can
be enforced by the type system. The type system of LGC -
is not up to this task, so before continuing we must endow
it with the expressive power to meet our needs.
2.3 Enriching the Constructor Language
In order to be able to give a typing constraint in the allocation
rule that precisely describes the required structure of
the stack, we must enrich the constructor level of our lan-
guage. For this purpose, we add a number of constructs from
Crary and Weirich's LX type theory [3]. These additions to our language are shown in Figure 3.
Kinds k ::= · · · | j | k1 → k2 | k1 × k2 | k1 + k2 | μj.k | 1
Constructors c ::= · · · | λα:k.c | c1 c2 | ⟨c1, c2⟩ | π_i c | · · · | ⋆ | unit | void | ×(c) | +(c)
Figure 3: Kinds and Constructors from LX
In addition to function
spaces, products and sums over kinds (k1 → k2, k1 × k2, k1 + k2), LX provides inductive kinds μj.k, where j is a kind variable that may appear in positive positions within k. At the type constructor level, we change the syntax of product and sum types to ×(c) and +(c), where in each case c is a constructor of kind μj.1 + T × j and represents a list of types.
To keep the notation for LGC simple, we allow the syntax
from LGC- to serve as shorthand for the corresponding list encodings. (The analogous notation is used for sums.) Finally, we have
a kind 1 whose sole element is the constructor ⋆, and we add the types unit and void to the language. The type unit has a single element, while the type void contains no values.
The introduction forms and elimination forms for arrows,
sums and products at the constructor level are the usual
ones; inductive kinds are introduced with a fold construct
and eliminated with primitive recursion constructors of the
form pr(j, α:k, ρ:j → k′, c). If well-formed, this constructor will be a function of kind μj.k → k′[μj.k/j]; c is the body of the function, in which α may appear as the parameter and ρ is the name of the function itself to be used for recursive calls.
For example, if we define the kind N = μj.1 + j (representing the natural numbers), then we can define a function iter on it by primitive recursion.
The constructor iter is a function taking a function from
types to types, a type and a natural number, and returning
the result of iterating the function on the given type the
specified number of times.
Clearly, the pr notation is somewhat unwieldy to read
and write, so we will use an ML-like notation for working in
the LX constructor language. We will, for many purposes,
combine the notions of inductive and sum kind and define
datakinds, akin to ML's datatypes. For example, we could
write the definition of N above as follows:
datakind N = Zero of 1 | Succ of N
The function iter would be more readably expressed in ML-style curried function notation this way:
fun iter φ τ Zero     = τ
  | iter φ τ (Succ n) = φ (iter φ τ n)
We will often write functions in this style, being careful only
to write functions that can be expressed in the primitive
recursion notation of LX. To further simplify the presenta-
tion, we will also use the familiar ML constructors list and
option to stand for the analogous datakinds.
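As a term-level analogue (an illustration only; the real N and iter live at LX's constructor level), the datakind and its iteration function correspond to an ordinary SML datatype and a structurally recursive function:

datatype nat = Zero | Succ of nat

fun iter (f : 'a -> 'a) (t : 'a) Zero = t
  | iter f t (Succ n)                 = f (iter f t n)

(* For example, iter (fn x => x + 1) 0 (Succ (Succ Zero)) evaluates to 2. *)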
2.4 Approaching Garbage Collection
The language LX was originally designed for intensional type
analysis. The basic methodology was to define a datakind
of analyzable constructors, which we will call TR (for "type
representation"), a function interp : TR → T to turn a constructor representation (suitable for analysis) into an actual type (suitable for adorning a variable binding), and a function R : TR → T to turn a constructor into the type of a value that represented it at run time. In addition to
explaining the somewhat mysterious operation of run-time
type analysis in more primitive terms, this had the effect
of isolating a particular subset of types for analysis: only
those types that appeared in the image of the interp mapping
could be passed or analyzed at run time. For garbage
collection, we want to do something similar: we want to isolate
the set of stack types that are structured such that the
collector can parse the stack using the algorithm outlined in
Section 1.1. We can then add the appropriate stack structure
condition to the allocation rule by asserting that the
current stack type lies in that set.
To do this, the remainder of this paper will define the
following LX objects:
1. A datakind SD (for "stack descriptor"), whose elements
will be passed around in our programs in place
of stack types.
2. A datakind DD (for "data descriptor"), whose elements
will be static representations of GC tables. Every program
in our language will designate one particular constructor
to be its static GC table, or SGCT. This constructor
SGCT will have kind DD .
3. A constructor interpS : DD → SD → ST, that will turn a stack descriptor into a stack type provided that it only uses stack frames whose shapes are determined by a particular static GC table. We will be careful to write interpS such that for any constructor s : SD, stacks of type interpS SGCT s will always be parsable.
Once we have definitions for all of these, ensuring that the
stack is parsable for garbage collection is simple: if the current
stack type is σ, we need to require that there exist some constructor s : SD such that σ = interpS SGCT s. Of course, we do not want the type-checker to have to guess the appropriate s, so we change the syntax slightly to make it the programmer's responsibility; the next version of our allocation rule (modulo the definitions of these new LX expressions) adds exactly this premise, with the descriptor s supplied explicitly in the allocation expression.
There will be one more development of this rule in Section
3.1.
In order for this expression of the interface to the garbage
collector in terms of SGCT to guarantee correct programs,
we must be sure that the actual data structure used as the
GC table agrees with its static representation. While LX
is capable of expressing a type for the GC table that guarantees
this, we have chosen a simpler approach. Rather
than forcing the program to provide its own GC table in
both "static" constructor form and "dynamic" value form,
we assume that the type-checker in our certified code system
transforms the static GC table into a real GC table and provides
the latter to the runtime system before the program
starts. Thus, we consider the generation of the GC table itself
from the static representation to be part of the trusted
computing base.
The remaining sections of this paper will present the definitions
of the kinds SD and DD , and will describe the behavior
of interpS and the auxiliary functions needed to define
it. The definition of interpS is nontrivial and involves an unusual
amount of programming at the type constructor level
of the language. The complete code for the special kinds
and constructors used in our GC interface can be found in
Appendix
B.
3 Describing the Stack
Since the collector requires the stack to be structured as a
sequence of frames, our LX representation of the stack type
will be essentially a list of frame descriptors, which we will
represent by constructors of another datakind, called FD. A
frame descriptor must allow two major operations: (1) since
lists of descriptors are passed around in the program instead
of stack types, it must be possible to interpret a descriptor
to get the partial stack type it represents, and (2) since we
are structuring the stack this way so as to ensure agreement
with a GC table, it must be possible to check a descriptor
against an entry in the table. The individual entries in the
static representation of the GC table will be constructors of
a kind called FT , for frame template, that we will also define
shortly.
3.1 Labels and Singletons
As we have mentioned before, a key property of the stack
layout required by the garbage collector is that the return
address of one frame determines, via the GC table, the expected
shape of the next older frame. As a result, in order
for our constraint on the stack's type before a collection to
guarantee proper functioning of the collector, we must ensure
that the value stored in the return address position in
each frame corresponds, via the static GC table, to the type
of the next frame.
To make this happen, we must be able to reason about
labels-i.e., pointers to code-at the constructor level of
our language. We therefore lift label literals from the value
level of the language to the constructor level, and add a new
primitive kind, L, to classify them. In addition, we add a
construct for forming singleton types from labels. Using this
construct, we will be able to force the return address stored
in a stack frame to have precisely the value it must in order
to correctly predict the shape of the next frame on the stack.
The syntax and typing rules for labels and singletons are
shown in Figure 4. If c is the label of a code block of type τ, the singleton type formed from c and τ is the type that contains only instances of c. In
order to make use of values of singleton type, we introduce
Kinds k ::= · · · | L
Figure 4: Syntax and Typing for Labels
the coercion blur, which forgets the identity of a singleton
value, yielding a value which is an appropriate operand
to a jump instruction. Since values of singleton type are
code labels, which are usually polymorphic, we have found
it necessary to add a way to apply a label to a constructor
argument while maintaining its singleton type; this is
accomplished by writing v{c}.
The sensitivity of the garbage collector to labels found in
the stack raises another issue that must be addressed in the
typing rule for allocation. In order for the collector to begin
the process of scanning the stack, it must be able to find the
GC table entry for its caller's frame (i.e., the newest program
frame). It is therefore necessary to associate a label
with each allocation site, and require that the first frame
descriptor in the stack descriptor correspond to that label.
(Since this label is intended to denote the return address
of the call to the garbage collector, we must assume that
all such labels in the program are distinct.) We also define
a function retlab of kind DD → SD → L that extracts the label of the newest frame of the given stack descriptor; making one final change to the syntax of allocation to include a label, the final typing rule additionally requires that the label of the newest frame descriptor, retlab SGCT s, be exactly the label named at the allocation site.
3.2 Stack Descriptors
The general structure of the kind SD is given in Figure 5,
along with an illustration of the interpretation of a constructor
of this kind into a stack type by interpS . (The validity
checking performed by interpS will be discussed in the next
section.) As the kind definitions show, a stack descriptor is
either "empty"-in which case it carries the label identifying
the return address of the top frame-or it consists of a
frame descriptor and a descriptor for the rest of the stack.
datakind SD = Base of L | Cons of FD × SD
Figure 5: Structure and Interpretation of Stack Descriptors (the kind FD of frame descriptors, the datakind SD above, and an illustration of how interpS maps a descriptor to a stack type)
A frame descriptor consists of a label, which identifies the point in the program that "owns" the frame (that is, the return address of the currently pending function call executed by the function instance that created the frame), the return type of the function whose frame it is, and two lists of slots. The kind Slot of slots is not defined here; we address
its definition in the next section. A slot describes a single location
on the stack; a constructor of kind Slot must support
(1) interpretation into a type in the fashion indicated by the
arrows in the illustration, and (2) examination to determine
what the specification of this slot in the GC table ought to
be.
The first list of slots in a frame descriptor corresponds
to the slots that come before the return address, the second
list describes the slots after the return address. As shown
in the diagram, interpS builds each frame of the stack type
by interpreting the slots into types, and constructs the return
address using the function's return type as specified
in the frame descriptor, forming a singleton with the label
associated with the next frame. The code for interpS is in Appendix B.
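A rough term-level model of what interpS computes is sketched below in SML. This is an assumption-laden simplification: the real interpS is an LX constructor function, takes the static GC table as an argument, and also performs the validity check of Section 4, all of which are omitted here, and the slot and type representations are placeholders.

type label = string
datatype ty = Int | Bool | Ptr
            | RetAddr of label * ty        (* models a singleton-typed return address *)
type stackty = ty list
type slot = ty                              (* monomorphic: a slot is just a type *)

type fd = { lab : label, retTy : ty,
            slotsBefore : slot list, slotsAfter : slot list }
datatype sd = Base of label | Cons of fd * sd

(* The label of the newest frame of a descriptor. *)
fun retlab (Base l) = l
  | retlab (Cons (f : fd, _)) = #lab f

fun interpS (s : sd) : stackty =
    case s of
        Base _ => []                        (* the described portion of the stack ends here *)
      | Cons (f, rest) =>
          #slotsBefore f
          @ [RetAddr (retlab rest, #retTy f)]   (* return address tied to the next frame's label *)
          @ #slotsAfter f
          @ interpS rest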
In keeping with the usual LX methodology, it is our intention
that LGC programs pass constructors of kind SD
where programs in a GC-ignorant language would pass stack
types. For example, the code of a function that takes two
integers and returns a boolean (such as a comparison function) might be given a type whose stack type lists its two integer arguments, a return address, and an abstract remainder of the stack. Unfortunately, such a type does not quite capture the relationship between the return address (whose code type mentions the boolean result and the rest of the stack) and the caller's frame (which is hidden "inside" the abstract stack-type variable). A code block with this type will be unable
to perform any allocation, because its return address does
not have a singleton type. In order to give the return address
a singleton type, we must extract the label from the calling
frame using the function retlab mentioned in Section 3.1.
We then use a more accurate type in place of the one above, in which the return address is given a singleton type built from the label extracted by retlab.
A more detailed example of the use of stack descriptors
(but with no allocation) is shown in Figure 6. The most interesting part of the function shown
in the figure is the recursive call. Let φ name the stack descriptor with which factcode was invoked, and let factframe be the frame descriptor for factcode's own frame, which specifies a slot of type int; the return address of the recursive call to factcode, factreturn{φ}, then has a singleton type.
code(·; int :: (interpS SGCT φ)) -> 0
  if b then
    ...                          ; return address off stack
    pop 2 in                     ; clear away our frame
    call blur(ra)                ; return
  else
    push x in                    ; push argument
    push factreturn{φ} in        ; push return address
    call ...                     ; recursive call; the callee expects stack type
                                 ;   int :: (interpS SGCT (Cons(factframe, φ)))
    pop 3 in                     ; clear away our frame
    push result in
    call blur(ra)                ; return
Figure 6: Using Stack and Frame Descriptors
If we observe that retlab SGCT (Cons(factframe, φ)) = factreturn, then the type of the address in the call instruction follows from the definitions above. To see that the stack type in this code type matches the current stack at the call site, observe that the first value on the stack is the return address, whose type we have already seen to be equal to the one required for the call. The second value on the stack is the argument to the recursive call, which has type int. Finally, the remainder of the stack type describes the function's own frame and the pre-existing stack. In particular, it has the form τr :: int :: σ0, where τr stands for the type of the original return address and σ0 is the unknown base portion of the stack.
4 Checking Frame Validity
In addition to enforcing the property that the stack is a sequence
of frames, the condition σ = interpS SGCT s must
also guarantee that the frames themselves are correctly described
by the GC table. To accomplish this, we ensure
that the equality can only hold if the frame descriptors in
s are consistent with the information about them contained
in SGCT , the GC table's constructor-level representation.
Since the actual GC table is a mapping from return addresses
to frame layout information, it makes sense to structure
SGCT as a mapping from labels to frame layouts as
well.
The basic structure of DD, the kind of SGCT, is given in Figure 7. The static GC table is structured as a list of
pairs, each consisting of a label and a constructor of kind
FT , which stands for frame template. A frame template is
essentially an LX constructor representation of the information
in a real GC table entry; it consists of two lists of table slots (constructors of kind TSlot), which correspond to the two lists of slots in a frame descriptor.
kind FT = TSlot list × TSlot list
datakind DD = · · · | ConsDD of L × FT × DD
Figure 7: Structure of the Static GC Table
Checking a frame
descriptor for validity therefore consists of looking up the
label from the frame descriptor in SGCT and checking each
of the slots in the FD against the table slots in the FT .
We will give definitions for Slot and TSlot and discuss this
consistency checking shortly.
First, however, we must make one final addition to LGC.
In order to be able to write the all-important lookupDD
function that finds the frame template for a given label,
our constructor language must be able to compare labels
for equality. The syntax and semantics of label equality at
the constructor level are given in Figure 8. The constructor ifeq(c1, c2, c3, c4) is definitionally equal to c3 if the labels c1 and c2 are the same, and to c4 if they are not the same. Note that
the reduction rules for ifeq only apply when c1 and c2 are
label literals, so the equational theory remains well-behaved.
With these constructs in place, lookupDD is easy to write
using primitive recursion.
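In an SML-style model the lookup is an ordinary recursion over the table, as sketched below. This is an illustration only: the real lookupDD is an LX constructor function written with pr and ifeq, the nil constructor's name is assumed, and the contents of frame templates are elided.

type label = string                        (* stand-in for kind L *)
datatype ft = FT of unit                   (* frame template; its fields are elided here *)
datatype dd = NilDD | ConsDD of label * ft * dd    (* static GC table *)

fun lookupDD NilDD _ = NONE
  | lookupDD (ConsDD (l, tmpl, rest)) (target : label) =
      if l = target then SOME tmpl         (* ifeq plays this role at the constructor level *)
      else lookupDD rest target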
4.1 Monomorphic Programs
In this section we will give definitions of Slot and TSlot that allow "monomorphic" programs to be written in LGC.
Figure 8: Label Equality
datakind Slot = KnownSlot of TR
datakind TSlot = Trace of 1 | NoTrace of 1
datakind Bool = True of 1 | False of 1
Figure 9: Monomorphic Slots and Table Slots
By "monomorphic" we here mean programs in which all the
values a function places in its stack frame have types that
are known at compile time (note that no nontrivial program in a stack-based language can really be totally monomorphic, since every function must be parametric in the stack type so that it can be called at any time). If the type of every value in
a function's stack frame has a non-variable type, then it
can be determined statically whether each slot in the frame
contains a pointer that must be traced. More importantly,
the traceability of any slot will be the same for every instance
of the function. Consequently, the GC table only needs one
bit for each slot, and all that needs to be checked at each
allocation site is whether the types of all the slots have the
traceabilities specified in the table.
The definitions for Slot and TSlot are given in Figure 9
along with the kinds of two constructor functions we will
use to check frames. In the case of monomorphic code, a
slot is simply a type representation in the usual style of LX;
a table slot is simply a flag indicating whether a location is
traceable or not. We will not discuss the definition of TR
further, as any representation of types that can be coded
in LX will do for the purposes of this paper. We do, how-
ever, assume the existence of the usual interpretation and
representation functions interp : TR → T and R : TR → T;
as usual for LX, interp turns a type representation into the
type it represents, and R turns a type representation into
the type of the value representing that type. The stack interpretation
function interpS must make use of interp to
translate a slot (which is really a type representation) into
a type; we will use R in the next section, when we cover
polymorphic programs.
The function checkFD checks that a frame descriptor is
valid with respect to the static GC table. First, it must
look up the frame descriptor's label to get the corresponding
frame template if there is one. If there is no frame template
for that label, the descriptor is rejected as invalid. The function Slot2TSlot simply decides whether a given type representation is traceable; given a frame template, checkFD applies Slot2TSlot to each of the slots in the frame descriptor and uses eqTSlot to determine whether the resulting TSlot matches the corresponding one in the frame template.
Figure 10: Static GC Table for Factorial Example
To ensure that the stack can be parsed by the garbage
collector, the interpretation function interpS calls checkFD
on each of the frame descriptors it sees. This portion of the
code of interpS is essentially the following:
interpS SGCT (Cons(fd, rest)) =
    case checkFD SGCT fd of
        True => ...
      | False => void :: null
In the case where the frame descriptor is not valid with respect
to SGCT , the body of interpS reduces to
void :: null,
which is an unsatisfiable stack type since the type void is
uninhabited. If the stack type is interpS SGCT s at some
reachable program point, then obviously interpS SGCT s
must be inhabited. Therefore, reduction of this definition
must not have taken the False branch, so it follows that
all the frame descriptors in s must be valid in the sense of
checkFD .
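The slot-checking core of checkFD can be modeled in SML as follows. This is an illustrative sketch: the type-representation datakind TR is replaced by a small placeholder datatype, and the traceability judgment in slot2TSlot is an assumption, not the paper's exact definition.

datatype tr = IntRep | BoolRep | TupleRep of tr list    (* placeholder for TR *)
datatype tslot = Trace | NoTrace                        (* GC-table slot *)
type slot = tr                                          (* monomorphic slot *)

fun slot2TSlot (rep : tr) : tslot =
    case rep of
        TupleRep _ => Trace             (* heap-allocated values are roots *)
      | _ => NoTrace

fun eqTSlot (Trace, Trace)     = true
  | eqTSlot (NoTrace, NoTrace) = true
  | eqTSlot _                  = false

(* A list of frame-descriptor slots is valid for a list of table slots if
   they agree slot-for-slot on traceability. *)
fun checkSlots (slots : slot list, tslots : tslot list) : bool =
    length slots = length tslots
    andalso ListPair.all eqTSlot (List.map slot2TSlot slots, tslots)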
The static GC table for the factorial example from Figure
6 is shown in Figure 10. Of course, this is a bit unrealistic
since we have shown a "program" with only one function call
site, so as a result there is only one entry in the GC table.
If we were to add anything to the factorial program, such
as a main program body that calls the function factcode,
the GC table and its static representation would have to be
augmented with descriptions of any new call sites we introduced.
4.2 Polymorphic Programs
It is a little more difficult to adapt LGC to certifying polymorphic
programs, because in such programs a function may
have arguments or local variables whose types are di#erent
each time the function is called. The TILT garbage collector
handles such stack locations by requiring that, in any
instance of a polymorphic frame, a value representing the
type of each of these slots is available. The slot in the GC
table corresponding to a location whose type is statically un-
known, rather than directly giving traceability information,
tells the collector where the representation can be found.
TILT allows some flexibility in where the representations
are stored: they can be on the stack, in a heap-allocated
record with a pointer to the record on the stack, or in global
storage. For our purposes, we will assume a simple, flat ar-
rangement in which the type representations for a frame are
all stored in that frame.
The new definitions of Slot and TSlot to account for polymorphism
are shown in Figure 11. We also slightly modify
the definition of FD , the kind of frame descriptors. Any
frame in a polymorphic program will in general be parametric
in some number of "unknown" types; since frame descriptors
must be interpretable to give the type of the stack,
datakind Slot = KnownSlot of TR | VarSlot of N | RepSlot of N
kind FD = L × T × Slot list × Slot list × TR list
datakind TSlot = Trace of 1 | NoTrace of 1 | Var of N | Rep of N
Figure 11: Frames and Tables for Polymorphic Programs
a frame descriptor represents a single instance of a polymorphic
frame. Therefore, the version of FD for polymorphic
programs includes a list of type representations that "instan-
tiate" the frame descriptor by providing representations of
all the unknown types of values in the frame.
Each individual slot in a frame descriptor may now take
one of three forms: it may be a slot whose type is known
at compile time, as before; or it may be a slot whose type
is one of the unknown types associated with the frame; or
it may be the slot that holds the representation of one of
those types. These three possibilities are reflected in the
new definition of Slot ; in the case of unknown-type and representation
slots, the frame descriptor will carry a natural
number indicating which of the type parameters gives the
type of, or is represented by, the slot. Similarly, there are
now four choices for a slot in the static GC table. A slot
may be known to be traceable; it may be known to be un-
traceable; it may contain a value of variable type; or it may
contain a representation. These four possibilites correspond
to the arms of the new TSlot datakind.
The interpretation of slots into types is now a bit more
complicated as well; for slots of known type the operation is
unchanged, but for variable and representation slots interpS
must look up the appropriate type representation in the list
given by the frame descriptor. Once this representation is
obtained, variable slots are turned into types using interp
as before, while representation slots are turned into types
using the R function described before. We therefore write
the function interpsl , which interprets a single slot given the
list of type representations from the frame descriptor.
fun interpsl trs (KnownSlot tr) = interp tr
  | interpsl trs (VarSlot n) =
      (case nth trs n of
           SOME tr => interp tr
         | NONE => void)
  | interpsl trs (RepSlot n) =
      (case nth trs n of
           SOME tr => R tr
         | NONE => void)
Notice that slots specifying invalid indices into the list of
representations are given type void, to ensure that the frame
described by the invalid descriptor cannot occur at run time.
In addition to the possibility of bad indices in variable
and representation slots, there is another new way in which a
frame descriptor may be invalid: the definition of FD allows
a frame to contain a VarSlot for which it does not contain a
corresponding RepSlot . Fortunately, the property that the
set of indices given in VarSlot 's is contained in the set of
indices given in RepSlot 's is easy to check primitive recur-
sively. This responsibility falls to the polymorphic version
of the function checkFD .
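The containment check itself can be modeled in SML as follows. This sketches only the extra condition on slots; the real check is part of the constructor-level checkFD, and the TR placeholder below is an assumption.

datatype tr = IntRep | BoolRep                          (* placeholder for TR *)
datatype slot = KnownSlot of tr | VarSlot of int | RepSlot of int

fun repIndices slots = List.mapPartial (fn RepSlot n => SOME n | _ => NONE) slots
fun varIndices slots = List.mapPartial (fn VarSlot n => SOME n | _ => NONE) slots

(* Every type variable used by a VarSlot must have its representation
   stored in the frame by some RepSlot. *)
fun varsHaveReps (slots : slot list) : bool =
    let val reps = repIndices slots
    in  List.all (fn n => List.exists (fn m => m = n) reps) (varIndices slots)
    end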
An example of a simple polymorphic function in LGC
is shown in Figure 12. The code in this figure defines a
function which, for any type representation α, takes a value of type interp α and boxes it; that is, it allocates and returns a one-field tuple of type ×[interp α] containing that value. The stack descriptor provided at the allocation site adds a descriptor for the current frame to the pre-existing stack descriptor. This new frame descriptor contains two slots corresponding to the two values (other than the return address) that make up the function's stack frame: the first, RepSlot Zero, describes the run-time representation of α, which itself has type R α; the second, VarSlot Zero, describes the argument to the function, which has type interp α.
5 Expressiveness
In order to experiment with the expressive power of LGC, we
have implemented a type-checker for the language, including
a "prelude" of constructor and kind definitions giving the
meanings of TR, SD , DD, interpS and so on. We have
also implemented a translation from a GC-ignorant source
language into LGC, demonstrating that LGC is expressive
enough to form the basis of a target of a general-purpose
compiler.
The syntax of the source language is shown in Figure 13.
Its design was driven solely by the goal of removing all explicit
GC-related constructs while enabling a straightforward
translation into LGC. We will briefly mention some
of the issues that shaped the design of the source language,
since they highlight the unusual properties of a language
designed with a garbage collection interface in mind.
Implicit Stack Operations Since the garbage collector requires
the stack to have a certain structure, it would be very
inconvenient to allow the source program unrestricted use of
stack manipulation operations. Therefore, we chose to remove
the stack almost completely from the syntax of the
source language. Source-level functions accept arguments
and return results in the usual manner; the translation to
LGC takes care of turning parameter-passing into stack ma-
nipulation. In addition, since return addresses play such a
critical role in scanning the stack, we cannot allow source
programs to manipulate those either. As a result, the source
language abandons continuation-passing style for a more familiar
return instruction (which we merge with the halt
instruction since their semantics are similar).
Locals Since the source program cannot manipulate the
stack, support for storing intermediate results there must
be built into the language. A somewhat unfortunate consequence
is that all decisions about what will and will not
be stored on the stack must have been made before translation
into LGC. If the design of LGC were to be applied
to a compiler targeting typed assembly language, this would
correspond to the fact that register allocation must be completed
before generation of GC tables can begin. In order
to use stack space for local variables and temporary storage,
each code block in a source program begins with lalloc i,
which indicates that the block wishes to allocate i mutable
local variables on the stack. Special forms of declarations at
the expression level provide access to these locals.
Figure 12: Polymorphic Allocation Example (the code boxes its argument, pops its own frame with pop 3, pushes the resulting cell, and returns via call blur(ra))
Expressions e ::= return v | if v then e1 else e2 | let d in e
Figure 13: Syntax of the Source Language
Closures Since LGC requires all code blocks to be closed
and hoisted to the top level, a translation from a higher-level
language in which functions may be nested must perform
closure conversion as part of translation into LGC. Since the
interface of the garbage collector seems to have little impact
on the closure conversion transformation itself, we chose to
keep the source-to-LGC translation simple by assuming closure
conversion had already been performed. Therefore, the
source language also requires all code blocks to be at the
top level. However, we do not include existential types in
the source language, as providing representations for all the
types hidden with existentials would add to the bloat associated
with the translation. Since many of these representations
turn out to be unnecessary, we find it more economical
to introduce existential types as closures at the same time
as the translation to LGC. We include in the source language
special closure types and operations for creating them. Every code block expects a
special argument, which is the environment of the closure;
code pointers are made into functions using the closure
operation, which packs a code value together with an environment.
6 Conclusion
We have presented a language in whose type system the interface
to a modern high-performance garbage collector can
be expressed. In so doing, we have demonstrated that code
certification is indeed compatible with the use of sophisti-
cated, accurate garbage collection technology. We have described
the interface of one such collector in our language,
and implemented a prototype type-preserving translation
from a GC-ignorant source language into our target language
The alert reader will have noticed the absence of an operational
semantics or safety proof in this paper. An operational
semantics is completely straightforward, except
that the two rules that perform heap allocation must each
have an additional side condition requiring that the stack
be parsable. A type safety proof is boilerplate, based on
the proof for LX by Crary and Weirich [3], except that in
the cases of injection and allocation it must be shown that
the typing conditions on the stack imply that it is parsable.
However, it is not clear how to give a formal definition of
parsability that is any simpler than our specification in Appendix
B, so such a proof would be unenlightening.
The interface of our garbage collector is subtle, and
expressing this interface in a type system requires a fair
amount of programming at the level of type constructors.
Type-checking programs in this language, in turn, involves
deciding equivalences of a lot of large constructors that
are many reduction steps away from normal form. Our
prototype type-checker for LGC decides equivalence using
a straightforward, recursive weak-head-normalize and compare
algorithm, and while our implementation is not yet serious
enough to reach any conclusions about efficiency, preliminary
results indicate the amount of work involved is not
unreasonably large.
This paper has examined a garbage collector interface
based on the one used by the TILT/ML compiler, but considerably
simpler. However, we believe that what we have
described is sufficient to handle most of the issues that arise
in a real collector. For instance, it does not appear difficult
to account for registers (which TILT treats essentially
the same as stack slots) or global variables (whose types are
fixed).
Our proof-of-concept implementation does not address
the possibility of translating higher-order polymorphism
into LGC. Higher-order polymorphism arises in the setting
of compiling the full ML language, in which abstract parameterized
types can occur. TILT is able to use a similar
GC table format to the one we have described, even for
higher-order polymorphic programs; it performs a program
transformation called reification to introduce variable bindings
for any types of registers or stack locations that are
unknown at compile time. We believe that by performing
something similar to reification we can translate programs
with higher-order polymorphism into LGC, but this remains
a topic for future work.
--R
Scalable Real-Time Parallel Garbage Collection for Symmetric Multiprocessors
A parallel, real-time garbage collector
Flexible type analysis
Principled scavenging.
From System F to typed assembly language.
Safe, untrusted agents using proof-carrying code
Managing Memory With Types.
--TR
Proof-carrying code
Flexible type analysis
From system F to typed assembly language
Type-preserving garbage collectors
Principled scavenging
A parallel, real-time garbage collector
Managing memory with types
--CTR
Feng , Zhong Shao , Alexander Vaynberg , Sen Xiang , Zhaozhong Ni, Modular verification of assembly code with stack-based control abstractions, ACM SIGPLAN Notices, v.41 n.6, June 2006
Andrew McCreight , Zhong Shao , Chunxiao Lin , Long Li, A general framework for certifying garbage collectors and their mutators, ACM SIGPLAN Notices, v.42 n.6, June 2007 | typed compilation;garbage collection;certified code;type systems |
604198 | Context-specific sign-propagation in qualitative probabilistic networks. | Qualitative probabilistic networks are qualitative abstractions of probabilistic networks, summarising probabilistic influences by qualitative signs. As qualitative networks model influences at the level of variables, knowledge about probabilistic influences that hold only for specific values cannot be expressed. The results computed from a qualitative network, as a consequence, can be weaker than strictly necessary and may in fact be rather uninformative. We extend the basic formalism of qualitative probabilistic networks by providing for the inclusion of context-specific information about influences and show that exploiting this information upon reasoning has the ability to forestall unnecessarily weak results. | Introduction
Qualitative probabilistic networks are qualitative abstractions
of probabilistic networks [Wellman, 1990] , introduced for
probabilistic reasoning in a qualitative way. A qualitative
probabilistic network encodes statistical variables and the
probabilistic relationships between them in a directed acyclic
graph. Each node A in this digraph represents a variable. An
a probabilistic influence of the variable
A on the probability distribution of the variable B; the
influence is summarised by a qualitative sign indicating the
direction of shift in B's distribution. For probabilistic inference
with a qualitative network, an efficient algorithm, based
upon the idea of propagating and combining signs, is available
[Druzdzel & Henrion, 1993 ] .
Qualitative probabilistic networks can play an important
role in the construction of probabilistic networks for real-life
application domains. While constructing the digraph of a
probabilistic network is doable, the assessment of all probabilities
required is a much harder task and is only performed
when the network's digraph is considered robust. By eliciting
signs from domain experts, the obtained qualitative probabilistic
network can be used to study and validate the reasoning
behaviour of the network prior to probability assessment;
the signs can further be used as constraints on the probabilities
to be assessed [Druzdzel & Van der Gaag, 1995 ] . To
be able to thus exploit a qualitative probabilistic network, it
This work was partly funded by the EPSRC under grant
should capture as much qualitative information from the application
domain as possible. In this paper, we propose an
extension to the basic formalism of qualitative networks to
enhance its expressive power for this purpose.
Probabilistic networks provide, by means of their digraph,
for a qualitative representation of the conditional independences
that are embedded in a joint probability distribu-
tion. The digraph in essence captures independences between
nodes, that is, it models independences that hold for all values
of the associated variables. The independences that hold
only for specific values are not represented in the digraph but
are captured instead by the conditional probabilities associated
with the nodes in the network. Knowledge of these latter
independences allows further decomposition of conditional
probabilities and can be exploited to speed up inference. For
this purpose, a notion of context-specific independence was
introduced for probabilistic networks to explicitly capture independences
that hold only for specific values of variables
[Boutilier et al., 1996; Zhang & Poole, 1999 ] .
A qualitative probabilistic network equally captures independences
between variables by means of its digraph. Since
its qualitative influences pertain to variables as well, independences
that hold only for specific values of the variables
involved cannot be represented. In fact, qualitative influences
implicitly hide such context-specific independences: if the
influence of a variable A on a variable B is positive in one
context, that is, for one combination of values for some other
variables, and zero in all other contexts - indicating independence
- then the influence is captured by a positive sign. Also,
positive and negative influences may be hidden: if a variable
A has a positive influence on a variable B in some context and
a negative influence in another context, then the influence of
A on B is modelled as being ambiguous.
As context-specific independences basically are qualitative
by nature, we feel that they can and should be captured explicitly
in a qualitative probabilistic network. For this purpose,
we introduce a notion of context-specific sign. We extend
the basic formalism of qualitative networks by providing for
the inclusion of context-specific information about influences
and show that exploiting this information upon inference can
prevent unnecessarily weak results. The paper is organised
as follows. In Section 2, we provide some preliminaries concerning
qualitative probabilistic networks. We present two
examples of the type of information that can be hidden in
qualitative influences, in Section 3. We present our extended
formalism and associated algorithm for exploiting context-specific
information in Section 4. In Section 5, we discuss
the context-specific information that is hidden in the qualitative
abstractions of two real-life probabilistic networks. In
Section 6, we briefly show that context-specific information
can also be incorporated in qualitative probabilistic networks
that include a qualitative notion of strength of influences. The
paper ends with some concluding observations in Section 7.
Qualitative probabilistic networks
A qualitative probabilistic network models statistical variables
as nodes in its digraph; from now on, we use the terms
variable and node interchangeably. We assume, without loss
of generality, that all variables are binary, using a and ā to indicate
the values true and false for variable A, respectively. A
qualitative network further associates with its digraph a set of
qualitative influences, describing probabilistic relationships
between the variables [Wellman, 1990] . A qualitative influence
associated with an arc A → B expresses how the values
of node A influence the probabilities of the values of node B.
A positive qualitative influence, for example, of A on B, denoted
S^+(A, B), expresses that observing higher values for
node A makes higher values for node B more likely, regardless
of any other influences on B, that is, Pr(b | ax) ≥ Pr(b | āx)
for any combination of values x for the set X of parents of B
other than A. The '+' in S^+(A, B) is termed the influence's
sign. A negative qualitative influence, denoted S^−(A, B), and
a zero qualitative influence, denoted S^0(A, B), are defined
analogously. If the influence of node A on node B is
non-monotonic or unknown, we say that it is ambiguous,
denoted S^?(A, B).
The set of influences of a qualitative probabilistic network
exhibits various properties [Wellman, 1990]. The symmetry
property states that, if S^δ(A, B), then also S^δ(B, A), for
δ ∈ {+, −, 0, ?}. The transitivity property asserts that the
qualitative influences along a chain that specifies at most one
incoming arc per node combine into a single influence with
the ⊗-operator from Table 1. The composition property
asserts that multiple influences between two nodes along
parallel chains combine into a single influence with the
⊕-operator.
Table 1: The ⊗- and ⊕-operators.
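To make the use of these operators concrete, the following Python sketch implements the
sign algebra as it is standardly defined for qualitative signs (after [Wellman, 1990]); the
entries are the usual sign product and sign addition and are assumed here rather than
copied from Table 1.

    def sign_product(s1, s2):
        # combines signs along a chain of influences (the "sign multiplication")
        if s1 == '0' or s2 == '0':
            return '0'
        if s1 == '?' or s2 == '?':
            return '?'
        return '+' if s1 == s2 else '-'

    def sign_sum(s1, s2):
        # combines signs of parallel chains (the "sign addition")
        if s1 == '0':
            return s2
        if s2 == '0':
            return s1
        if s1 == s2:
            return s1
        return '?'   # conflicting or ambiguous parallel influences

For instance, sign_sum(sign_product('+', '-'), sign_product('+', '+')) evaluates to '?',
mirroring the conflicting parallel chains discussed in the examples below.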
A qualitative network further captures qualitative synergies
between three or more nodes; for details we refer to [Druzdzel & Henrion, 1993].
For inference with a qualitative network, an efficient algorithm
is available [Druzdzel & Henrion, 1993 ] . The basic
idea of the algorithm is to trace the effect of observing a
node's value on the other nodes in the network by message
passing between neighbouring nodes. For each node, a node
sign is determined, indicating the direction of change in the
node's probability distribution occasioned by the new observation
given all previously observed node values. Initially, all
node signs equal '0'. For the newly observed node, an appropriate
sign is entered, that is, either a '+' for the observed
value true or a '−' for the value false. Each node receiving a
message updates its node sign and subsequently sends a message
to each neighbour whose sign needs updating. The sign
of this message is the ⊗-product of the node's (new) sign and
the sign of the influence it traverses. This process is repeated
throughout the network, building on the properties of sym-
metry, transitivity, and composition of influences. Since each
node can change its sign at most twice, once from '0' to '+'
or '−', and then only to '?', the process visits each node at
most twice and is therefore guaranteed to halt.
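The message-passing process just described can be sketched as follows, using the
sign_product and sign_sum functions from the sketch after Table 1; the encoding of the
network (a dictionary of influence signs per arc and an adjacency dictionary) and the
function name are illustrative, and the blocking conditions of the full algorithm are
omitted.

    def propagate(influences, neighbours, observed_node, observed_sign):
        # influences[(a, b)]: sign of the influence of a on b (symmetry also gives (b, a))
        # neighbours: dict mapping each node to the list of its neighbours
        node_sign = {n: '0' for n in neighbours}

        def visit(sender, node, message):
            new_sign = sign_sum(node_sign[node], message)   # update the node sign
            if new_sign == node_sign[node]:
                return                                      # sign unchanged: stop here
            node_sign[node] = new_sign
            for nb in neighbours[node]:
                if nb != sender:
                    visit(node, nb, sign_product(new_sign, influences[(node, nb)]))

        visit(None, observed_node, observed_sign)           # '+' for true, '-' for false
        return node_sign

Termination follows from the same argument as above: each node sign can only move from
'0' to '+' or '−' and then to '?', so every node is revisited at most twice.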
3 Context-independent signs
Context-specific information cannot be represented explicitly
in a qualitative probabilistic network, but is hidden in the net-
work's qualitative influences. If, for example, the influence of
a node A on a node B is positive for one combination of values
for the set X of B's parents other than A, and zero for all
other combinations of values for X , then the influence of A
on B is positive by definition. The zero influences are hidden
due to the fact that the inequality in the definition of qualitative
influence is not strict. We present an example illustrating
such hidden zeroes.
Figure 1: The qualitative surgery network.
Example 1 The qualitative network from Figure 1 represents
a highly simplified fragment of knowledge in oncology; it
pertains to the effects and complications to be expected from
treatment of oesophageal cancer. Node L models the life expectancy
of a patient after therapy; the value l indicates that
the patient will survive for at least one year. Node T models
the therapy instilled; we consider surgery, modelled by t, and
no treatment, modelled by t, as the only alternatives. The effect
to be attained from surgery is a radical resection of the
oesophageal tumour, modelled by node R. After surgery a
life-threatening pulmonary complication, modelled by node
may result; the occurrence of this complication is heavily
influenced by whether or not the patient is a smoker, modelled
by node S.
We consider the conditional probabilities from a quantified
network representing the same knowledge. We would like to
note that these probabilities serve illustrative purposes only;
although not entirely unrealistic, they have not been specified
by domain experts. The probability of attaining a radical
resection upon surgery, Pr(r | t), is positive; without surgery
there can be no radical resection, so we have Pr(r | t̄) = 0.
From these probabilities we have that node T indeed exerts
a positive qualitative influence on node R. The probabilities
of a pulmonary complication occurring and of a patient's life
expectancy after therapy are, respectively, given by two
conditional probability tables, the left one for Pr(p) and the
right one for Pr(l).
From the left table, we verify that both T and S exert a positive
qualitative influence on node P. The fact that the influence
of T on P is actually zero in the context of the value s̄ for
node S, is not apparent from the influence's sign. Note that
this zero influence does not arise from the probabilities being
zero, but rather from their having the same value. From the
right table we verify that node R exerts a positive influence
on node L; the qualitative influence of P on L is negative.
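The qualitative signs used in this example can be read off mechanically from the
conditional probabilities. The following sketch shows one way to do so for a binary
parent A of a binary node B; the table format (a mapping from (a-value, context) pairs
to Pr(b | . . .)) and the function name are chosen for illustration only.

    def influence_sign(cpt, contexts):
        # cpt[(a, x)] = Pr(b = true | A = a, X = x), with a in {True, False}
        diffs = [cpt[(True, x)] - cpt[(False, x)] for x in contexts]
        if all(d >= 0 for d in diffs):
            return '0' if all(d == 0 for d in diffs) else '+'
        if all(d <= 0 for d in diffs):
            return '-'
        return '?'   # positive in some contexts, negative in others: non-monotonic

With hypothetical numbers such as Pr(p | t s) = 0.7, Pr(p | t̄ s) = 0.2 and
Pr(p | t s̄) = Pr(p | t̄ s̄) = 0.1, the call
influence_sign({(True, 's'): 0.7, (False, 's'): 0.2, (True, '~s'): 0.1, (False, '~s'): 0.1},
['s', '~s']) returns '+', while restricting the contexts to ['~s'] alone returns '0',
which is exactly the kind of hidden zero discussed above.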
The previous example shows that the level of representation
detail of a qualitative network can result in information hid-
ing. As a consequence, unnecessarily weak answers may result
upon inference. For example, from the probabilities involved
we know that performing surgery on a non-smoker has
a positive influence on life expectancy. Due to the conflicting
reasoning chains from T to L in the qualitative network, how-
ever, entering the observation t for node T will result in a '?'
for node L, indicating that the influence is unknown.
We recall from the definition of qualitative influence that
the sign of an influence of a node A on a node B is independent
of the values for the set X of parents of B other than
A. A '?' for the influence of A on B may therefore hide the
information that node A has a positive influence on node B
for some combination of values of X and a negative influence
for another combination. If so, the ambiguous influence
is non-monotonic in nature and can in fact be looked upon as
specifying different signs for different contexts. We present
an example to illustrate this observation.
Figure
2: The qualitative cervical metastases network.
Example 2 The qualitative network from Figure 2 represents
another fragment of knowledge in oncology; it pertains to the
metastasis of oesophageal cancer. Node L represents the location
of the primary tumour that is known to be present in a
patient's oesophagus; the value l models that the tumour resides
in the lower two-third of the oesophagus and the value
l expresses that the tumour is in the oesophagus' upper one-
third. An oesophageal tumour upon growth typically gives
rise to lymphatic metastases, the extent of which are captured
by node M . The value
of M indicates that just the local
and regional lymph nodes are affected; m denotes that distant
lymph nodes are affected. Which lymph nodes are local or
regional and which are distant depends on the location of the
tumour in the oesophagus. The lymph nodes in the neck, or
cervix, for example, are regional for a tumour in the upper
one-third of the oesophagus and distant otherwise. Node C
represents the presence or absence of metastases in the cervical
lymph nodes.
We consider the conditional probabilities from a quantified
network representing the same knowledge; once again, these
probabilities serve illustrative purposes only. The probabilities
of the presence of cervical metastases in a patient are given
by a conditional probability table Pr(c) over the contexts l
and l̄ and the values m and m̄ of node M.
From these probabilities we have that node L indeed has a
negative influence on node C. The influence of node M on
C, however, is non-monotonic: Pr(c | ml̄) > Pr(c | m̄l̄),
whereas Pr(c | ml) < Pr(c | m̄l).
The non-monotonic influence hides a '+' for the value l̄ of
node L and a '−' for the context l.
From the two examples above, we observe that context-specific
information about influences that is present in the
conditional probabilities of a quantified network cannot be
represented explicitly in a qualitative probabilistic network:
upon abstracting the quantified network to the qualitative net-
work, the information is effectively hidden.
4 Context-specificity and its exploitation
The level of representation detail of a qualitative probabilistic
network enforces influences to be independent of specific
contexts. In this section we present an extension to the basic
formalism of qualitative networks that allows for associating
context-specific signs with qualitative influences. In Section
4.1, the extended formalism is introduced; in Section 4.2, we
show, by means of the example networks from the previous
section, that exploiting context-specific information can prevent
unnecessarily weak results upon inference.
4.1 Context-specific signs
Before introducing context-specific signs, we define a notion
of context for qualitative networks. Let X be a set of nodes,
called the context nodes. A context c_X for X is a combination
of values for a subset Y ⊆ X of the set of context nodes.
If Y = ∅, we say that the context is empty, denoted ∅;
if Y = X, we say that the context is maximal. The set of
all possible contexts for X is called the context set for X and
is denoted CX . To compare different contexts for the same
set of context nodes X, we use an ordering on contexts: for
any two combinations of values c_X and c'_X, for subsets Y
and Y' of X respectively, we say that c_X > c'_X whenever
Y' ⊂ Y and c_X and c'_X specify the same combination of
values for Y'.
A context-specific sign now basically is a sign that may
vary from context to context. It is defined as a function
δ: C_X → {+, −, 0, ?} from a context set C_X to the set
of basic signs, such that for any two contexts c_X and c'_X
with c_X > c'_X we have that, if δ(c'_X) ∈ {+, −, 0}, then
δ(c_X) ∈ {δ(c'_X), 0}. For abbreviation, we will
write δ(X) to denote the context-specific sign δ that is defined
on the context set C_X. Note that the basic signs from regular
qualitative networks can be looked upon as context-specific
signs that are defined by a constant function.
In our extended formalism of qualitative networks, we assign
context-specific signs to influences. We say that a node
A exerts a qualitative influence of sign δ(X) on a node B, denoted
S^{δ(X)}(A, B), where X is the set of parents of B other
than A, iff for each context c_X for X we have that: if
δ(c_X) = '+', then Pr(b | a c_X y) ≥ Pr(b | ā c_X y) for any
combination of values c_X y for X; if δ(c_X) = '−', then
Pr(b | a c_X y) ≤ Pr(b | ā c_X y) for any such combination of
values c_X y; and if δ(c_X) = '0', then Pr(b | a c_X y) =
Pr(b | ā c_X y) for any such combination of values c_X y.
Note that we take the set of parents of node B other than A
for the set of context nodes; the definition is readily extended
to apply to arbitrary sets of context nodes, however. Context-specific
qualitative synergies can be defined analogously.
A context-specific sign δ(X) in essence has to specify a
basic sign from {+, −, 0, ?} for each possible combination
of values in the context set C_X. From the definition of δ(X),
however, we have that it is not necessary to explicitly indicate
a basic sign for every such context. For example, consider an
influence of a node A on a node B with the set of context
nodes X = {D, E} and a sign δ(X) that assigns a basic sign
to every context over D and E. The function δ(X) is uniquely
described by the signs of the smaller contexts whenever the
larger contexts are assigned the same sign; it therefore suffices
to specify the signs of those smaller contexts.
The sign-propagation algorithm for probabilistic inference
with a qualitative network, as discussed in Section 2, is easily
extended to handle context-specific signs. The extended algorithm
propagates and combines basic signs only. Before a
sign is propagated over an influence, it is investigated whether
or not the influence's sign is context-specific. If so, the currently
valid context is determined from the available observations
and the basic sign specified for this context is propa-
gated; if none of the context nodes have been observed, then
the sign specified for the empty context is propagated.
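A context-specific sign can be represented as a partial table over contexts, with the
empty context as fallback; the sketch below shows one possible way to resolve the sign
that applies given the currently observed context nodes. The representation (frozensets
of (node, value) pairs) is ours and is not prescribed by the formalism.

    def applicable_sign(css, observations):
        # css maps frozensets of (node, value) pairs to basic signs;
        # frozenset() is the empty context and must always be present.
        # observations is a dict {node: value} of the observed context nodes.
        observed = set(observations.items())
        best_ctx, best_sign = frozenset(), css[frozenset()]
        for ctx, sign in css.items():
            if ctx <= observed and len(ctx) > len(best_ctx):
                best_ctx, best_sign = ctx, sign   # largest specified context that matches
        return best_sign

With css = {frozenset(): '+', frozenset({('S', False)}): '0'}, observing S = false yields
'0' while an empty set of observations yields '+', as in the surgery example of the next
subsection.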
4.2 Exploiting context-specific signs
In Section 3 we presented two examples showing that the
influences of a qualitative probabilistic network can hide
context-specific information. Revealing this hidden information
and exploiting it upon inference can be worthwhile.
The information that an influence is zero for a certain context
can be used, for example, to improve the runtime of the
sign-propagation algorithm because propagation of a sign can
be stopped as soon as a zero influence is encountered. More
importantly, however, exploiting the information can prevent
conflicting influences arising during inference. We illustrate
this observation by means of an example.
Example 3 We reconsider the qualitative surgery network
from
Figure
1. Suppose that a non-smoker is undergoing
surgery. In the context of the observation s for node S, propagating
the observation t for node T with the basic sign-
propagation algorithm results in the sign '?' for node L: there
is not enough information present in the network to compute
a non-ambiguous sign from the two conflicting reasoning
chains from T to L.
We now extend the qualitative surgery network by assigning
the context-specific sign δ(S), defined by δ(s) = '+', δ(s̄) = '0' and δ(∅) = '+',
to the influence of node T on node P , that is, we explicitly
include the information that non-smoking patients are not
at risk for pulmonary complications after surgery. The thus
extended network is shown in Figure 3(a). We now reconsider
our non-smoking patient undergoing surgery. Propagating
the observation t for node T with the extended sign-
propagation algorithm in the context of
s results in the sign
(0
L: we find that surgery
is likely to increase life expectancy for the patient.
R P
(a)
(b)
Figure
3: A hidden zero revealed, (a), and a non-monotonicity
captured, (b), by a context-specific sign.
In Section 3 we not only discussed hidden zero influ-
ences, but also argued that positive and negative influences
can be hidden in non-monotonic influences. As the initial
'?'s of these influences tend to spread to major parts of
a network upon inference, it is worthwhile to resolve the
non-monotonicities involved whenever possible. Our extended
formalism of qualitative networks provides for effectively
capturing information about non-monotonicities, as is
demonstrated by the following example.
Example 4 We reconsider the qualitative cervical metastases
network from Figure 2. We recall that the influence
of node M on node C is non-monotonic since Pr(c | ml̄) >
Pr(c | m̄l̄) and Pr(c | ml) < Pr(c | m̄l).
In the context l̄, therefore, the influence is positive, while it is
negative in the context l. In the extended network, shown in
Figure 3(b), this information is captured explicitly by assigning
the sign δ(L), defined by δ(l̄) = '+', δ(l) = '−' and δ(∅) = '?',
to the influence of node M on node C.
5 Context-specificity in real-life networks
To get an impression of the context-specific information that
is hidden in real-life qualitative probabilistic networks, we
Table 2: The numbers of influences with '+', '−', '0' and '?'
signs for the qualitative ALARM and oesophagus networks.
computed qualitative abstractions of the well-known ALARM-
network and of the network for oesophageal cancer. The
ALARM-network consists of 37, mostly non-binary, nodes
and 46 arcs; the number of direct qualitative influences in
the abstracted network - using the basic definition of qualitative
influence - therefore equals 46. The oesophagus network
consists of 42, also mostly non-binary, nodes and 59 arcs.
Table 2 summarises for the two abstracted networks the numbers
of direct influences with the four different basic signs.
The numbers reported in Table 2 pertain to the basic signs
of the qualitative influences associated with the arcs in the
networks' digraphs. Each such influence, and hence each associated
basic sign, covers a number of maximal contexts.
For a qualitative influence associated with the arc A → B,
the number of maximal contexts equals 1 (the empty context)
if node B has no other parents than A; otherwise, the number
of maximal contexts equals the number of possible combinations
of values for the set of parents of B other than A.
For every maximal context, we computed the proper (context-
specific) sign from the original quantified network. Table 3
summarises the number of context-specific signs covered by
the different basic signs in the two abstracted networks. From
the table we have, for example, that the 17 qualitative influences
with sign '+' from the ALARM network together cover
59 different maximal contexts. For 38 of these contexts, the
influences are indeed positive, but for 21 of them the influences
are actually zero.
Table 3: The numbers of contexts c_X covered by the '+', '−',
'0' and '?' signs and their associated context-specific signs,
for the qualitative ALARM and oesophagus networks.
For the qualitative ALARM-network, we find that 35% of
the influences are positive, 17% are negative, and 48% are
ambiguous; the network does not include any explicitly specified
zero influences. For the extended network, using context-specific
signs, we find that 32% of the qualitative influences
are positive, 31% are negative, 20% are zero, and 17% remain
ambiguous. For the qualitative oesophagus network, we
find that 54% of the influences are positive, 21% are nega-
tive, and 25% are ambiguous; the network does not include
any explicit zero influences. For the extended network, using
context-specific signs, we find that 46% of the qualitative
influences are positive, 22% are negative, 10% are zero, and
22% remain ambiguous.
We observe that for both the ALARM and the oesophagus
network, the use of context-specific signs serves to reveal a
considerable number of zero influences and to substantially
decrease the number of ambiguous influences. Similar observations
were made for qualitative abstractions of two other
real-life probabilistic networks, pertaining to Wilson's disease
and to ventricular septal defect, respectively. We conclude
that by providing for the inclusion of context-specific
information about influences, we have effectively extended
the expressive power of qualitative probabilistic networks.
6 Extension to enhanced networks
The formalism of enhanced qualitative probabilistic networks
[Renooij & Van der Gaag, 1999] introduces a qualitative
notion of strength of influences into qualitative networks.
We briefly argue that the notions from the previous sections
can also be used to provide for the inclusion and exploitation
of context-specific information about such strengths.
In an enhanced qualitative network, a distinction is made
between strong and weak influences by partitioning the set of
all influences into two disjoint subsets in such a way that any
influence from the one subset is stronger than any influence
from the other subset; to this end a cut-off value is used. For
example, a strongly positive qualitative influence of a node A
on a node B, denoted S^{++}(A, B), expresses that
Pr(b | ax) − Pr(b | āx) ≥ α
for any combination of values x for the set X of parents of B
other than A; a weakly positive qualitative influence of A on
B, denoted S^{+}(A, B), expresses that 0 ≤ Pr(b | ax) − Pr(b | āx) ≤ α
for any such combination of values x. The sign '+^?' is used
to indicate a positive influence whose relative strength is am-
biguous. Strongly negative qualitative influences S^{−−}, and
weakly negative qualitative influences S^{−}, are defined
analogously; a negative influence whose relative strength is ambiguous
is denoted S^{−?}. Zero qualitative influences and ambiguous
qualitative influences are defined as in regular qualitative
probabilistic networks. Renooij & Van der Gaag (1999)
also provide extended definitions for the ⊗- and ⊕-operators
to apply to the double signs. These definitions cannot be reviewed
without detailing the enhanced formalism, which is
beyond the scope of the present paper; it suffices to say that
the result of combining signs is basically as one would intuitively
expect.
Our notion of context-specific sign can be easily incorporated
into enhanced qualitative probabilistic networks. A
context-specific sign now is defined as a function
δ: C_X → {++, +, +^?, 0, −^?, −, −−, ?} from a context set C_X to the
extended set of basic signs, such that for any two contexts
c_X and c'_X with c_X > c'_X we have that, if the sign is strongly
positive for c'_X, then it must be strongly positive for c_X; if the
sign is weakly positive for c'_X, then it must be either weakly
positive or zero for c_X; and if it is ambiguously positive for
c'_X, then it may be (strongly, weakly or ambiguously) positive,
or zero for c_X. Similar restrictions hold for negative
signs. Context-specific signs are once again assigned to in-
fluences, as before.
For distinguishing between strong and weak qualitative influences
in an enhanced network, a cut-off value α has to
be chosen in such a way that, basically, for all strong influences
of a node A on a node B we have that |Pr(b | ax) − Pr(b | āx)| ≥ α
for all contexts x, and for all weak
influences we have that |Pr(b | ax) − Pr(b | āx)| ≤ α for
all such contexts. If, for a specific cut-off value α, there exists
an influence of node A on node B for which there are
contexts x and x' with |Pr(b | ax) − Pr(b | āx)| > α and
|Pr(b | ax') − Pr(b | āx')| < α, then signs of ambiguous
strength would be introduced into the enhanced network,
which would seriously hamper the usefulness of exploiting a
notion of strength. A different cut-off value had better be chosen,
by shifting α towards 0 or 1. Unfortunately, α may then
very well end up being 0 or 1. The use of context-specific
information about qualitative strengths can now forestall the
necessity of shifting the cut-off value, as is illustrated in the
following example.
Figure 4: Context-specific sign in an enhanced network.
Example 5 We reconsider the surgery network and its associated
probabilities from Example 1. Upon abstracting the
network to an enhanced qualitative network, we distinguish
between strong and weak influences by choosing a cut-off
value α of, for example, 0.46. We then have that a pulmonary
complication after surgery strongly influences life
expectancy, that is, S^{−−}(P, L). For this cut-off value, however,
the influence of node T on node P is neither strongly positive
nor weakly positive; this value of α therefore does not
serve to partition the set of influences in two distinct subsets.
To ensure that all influences in the network are either strong
or weak, the cut-off value should be either 0 or 1.
For the influence of node T on node P , we observe that, for
α = 0.46, the influence is strongly positive for the value s of
node S and zero for the context s̄. By assigning the context-specific
sign δ(S), defined by δ(s) = '++', δ(s̄) = '0' and δ(∅) = '+^?',
to the influence of node T on node P , we explicitly specify
the otherwise hidden strong and zero influences. The thus
extended network is shown in Figure 4. We recall from Example
3 that for non-smokers the effect of surgery on life expectancy
is positive. For smokers, however, the effect could
not be unambiguously determined. From the extended network
in Figure 4, we now find the effect of surgery on life
expectancy for smokers to be negative: upon propagating the
observation t for node T in the context of the information s
for node S, the sign '−' results for node L.
7 Conclusions
We extended the formalism of qualitative probabilistic networks
with a notion of context-specificity. By doing so,
we enhanced the expressive power of qualitative networks.
While in a regular qualitative network, zero influences as well
as positive and negative influences can be hidden, in a network
extended with context-specific signs this information is
made explicit. Qualitative abstractions of some real-life probabilistic
networks have shown that networks indeed can incorporate
considerable context-specific information. We further
showed that incorporating the context-specific signs into enhanced
qualitative probabilistic networks that include a qualitative
notion of strength renders even more expressive power.
The fact that zeroes and double signs can be specified context-
specifically allows them to be specified more often, in gen-
eral. We showed that exploiting context-specific information
about influences and about qualitative strengths can prevent
unnecessary ambiguous node signs arising during inference,
thereby effectively forestalling unnecessarily weak results.
--R
Efficient reasoning in qualitative probabilistic networks.
Elicitation of probabilities for belief networks: combining qualitative and quantitative information
Enhancing QPNs for trade-off resolution
Fundamental concepts of qualitative probabilistic networks.
On the role of context-specific independence in probabilistic inference
--TR
Probabilistic reasoning in intelligent systems: networks of plausible inference
The computational complexity of probabilistic inference using Bayesian belief networks (research note)
Fundamental concepts of qualitative probabilistic networks
Building Probabilistic Networks
On the Role of Context-Specific Independence in Probabilistic Inference
Pivotal Pruning of Trade-offs in QPNs
Qualitative propagation and scenario-based scheme for exploiting probabilistic reasoning
--CTR
Jeroen Keppens, Towards qualitative approaches to Bayesian evidential reasoning, Proceedings of the 11th international conference on Artificial intelligence and law, June 04-08, 2007, Stanford, California | context-specific independence;qualitative reasoning;probabilistic reasoning;non-monotonicity |
604208 | Accelerating filtering techniques for numeric CSPs. | Search algorithms for solving Numeric CSPs (Constraint Satisfaction Problems) make an extensive use of filtering techniques. In this paper we show how those filtering techniques can be accelerated by discovering and exploiting some regularities during the filtering process. Two kinds of regularities are discussed, cyclic phenomena in the propagation queue and numeric regularities of the domains of the variables. We also present in this paper an attempt to unify numeric CSPs solving methods from two distinct communities, that of CSP in artificial intelligence, and that of interval analysis. | Introduction
In several fields of human activity, like engineering, science or business, people are
able to express their problems as constraint problems. The CSP (Constraint Satisfaction
Problem) schema is an abstract framework to study algorithms for solving such constraint
problems. A CSP is defined by a set of variables, each with an associated domain of
possible values and a set of constraints on the variables. This paper deals more specifically
with CSPs where the constraints are numeric nonlinear relations and where the domains
are continuous domains (numeric CSPs).
* Corresponding author.
E-mail addresses: ylebbah@univ-oran.dz (Y. Lebbah), olhomme@ilog.fr (O. Lhomme).
This paper is an extended version of [31].
In general, numeric CSPs cannot be tackled with computer algebra systems: there is no
algorithm for general nonlinear constraint systems. And most numeric algorithms cannot
guarantee completeness: some solutions may be missed, a global optimum may never be
found, and, sometimes a numeric algorithm even does not converge at all. The only numeric
algorithms that can guarantee completeness-even when floating-point computations are
used-are coming either from the interval analysis community or from the AI community
(CSP). Unfortunately, those safe constraint-solving algorithms are often less efficient than
non-safe numeric methods, and the challenge is to improve their efficiency.
The safe constraint-solving algorithms are typically a search-tree exploration where a
filtering technique is applied at each node. Improvement in efficiency is possible by finding
the best compromise between a filtering technique that achieves a strong pruning at a high
computational cost and another one that achieves less pruning at a lower computational
cost. And thus, a lot of filtering techniques have been developed. Some filtering techniques
take their roots from numerical analysis: the main filtering technique used in interval
analysis [37] is an interval variation of Newton iterations. (See [24,28] for an overview
of such methods.) Other filtering techniques originate from artificial intelligence: the basic
filtering technique is a kind of arc-consistency filtering [36] adapted to numeric CSPs [17,
26,32]. Higher-order consistencies similar to k-consistency [21] have also been defined
for numeric CSPs [25,32]. Another technique from artificial intelligence [19,20] is to
merge the constraints concerning the same variables, giving one "total" constraint (thanks
to numerical analysis techniques) and to perform arc-consistency on the total constraints.
Finally, [6,45] aim at expressing interval analysis pruning as partial consistencies, bridging
the gap between the two families of filtering techniques.
All the above works address the issue of finding a new partial consistency property that
can be computed by an associated filtering algorithm with a good efficiency (with respect
to the domain reductions performed). Another direction, in the search of efficient safe
algorithms, is to try to optimize the computation of already existing consistency techniques.
Indeed, the aim of this paper is to study general methods for accelerating consistency
techniques. The main idea is to identify some kinds of regularity in the dynamic behavior
of a filtering algorithm, and then to exploit those regularities. A first kind of regularities
we exploit is the existence of cyclic phenomena in the propagation queue of a filtering
algorithm. A second kind of regularities is a numeric regularity: when the filtering process
converges asymptotically, its fixed point often can be extrapolated. As we will see in the
paper, such ideas, although quite general, may lead to drastic improvements in efficiency
for solving numeric CSPs. The paper focus on numeric continuous problems, but the ideas
are more general and may be of interest also for mixed discrete and continuous problems,
or even for pure discrete problems.
The paper is organized in two main parts. The first part (Section 2) presents an
overview of numeric CSPs; artificial intelligence works and interval analysis works
are presented through a unifying framework. The second part consists of the next two
sections, and presents the contribution of the paper. Section 3 introduces the concept of
reliable transformation, and presents two reliable transformations that exploit two kinds
of regularities occurring during the filtering process: cyclic phenomena in the propagation
queue and numeric regularities of the domains of the variables. Section 4 discusses related
works.
2. Numeric CSPs
This section presents numeric CSPs in a slightly non-standard form, which will be
convenient for our purposes, and will unify works from interval analysis and constraint
satisfaction communities.
A numeric CSP is a triplet ⟨X, D, C⟩ where:
. X is a set of n variables x 1 , . , x n .
. D = ⟨D_1, . . . , D_n⟩ denotes a vector of domains. The ith component of D, D_i, is the
domain containing all acceptable values for x i .
. C = {C_1, . . . , C_m} denotes a set of numeric constraints; we write var(C_j) for the set
of variables appearing in C_j.
This paper focuses on CSPs where the domains are intervals: D_i ∈ I(R), the set of closed
real intervals [a, b].
The following notation is used throughout the paper. An interval [a,
b] such that a > b
is an empty interval. A vector of domains D such that a component D i is an empty
interval will be denoted by ∅. The lower bound, the upper bound and the midpoint of an
interval D_i (respectively interval vector D) are respectively denoted by D̲_i, D̄_i, and m(D_i)
(respectively D̲, D̄, and m(D)). The lower bound, the upper bound, the midpoint, the
inclusion relation, the union operator and the intersection operator are defined over interval
vectors; they have to be interpreted componentwise. For instance D̲ means ⟨D̲_1, . . . , D̲_n⟩;
D ⊆ D' means D_i ⊆ D'_i for all i ∈ 1, . . . , n; D ∩ D' means ⟨D_1 ∩ D'_1, . . . , D_n ∩ D'_n⟩.
A k-ary constraint C_j is a k-ary relation over the real numbers, that is, a subset of R^k.
2.1. Approximation of projection functions
The algorithms used over numeric CSPs typically work by narrowing domains and
need to compute the projection of a constraint C_j over the variable x_i in the space
delimited by D, denoted π_{C_j,x_i}(D) or also π_{j,i}(D). The projection π_{j,i}(D) is defined
as follows.
. If x_i ∉ var(C_j), then π_{j,i}(D) = D_i.
. If x_i ∈ var(C_j), the projection is defined by the set of all elements of D_i for which
we can find elements for the k − 1 remaining variables of C_j in their respective domains
such that C_j is satisfied:
π_{j,i}(D) = { a_i ∈ D_i | ∃ a_l ∈ D_l for each x_l ∈ var(C_j) \ {x_i} such that C_j holds on these values }. (1)
Usually, such a projection cannot be computed exactly due to several reasons, such as:
(1) the machine numbers are floating point numbers and not real numbers so round-off
errors occur; (2) the projection may not be representable as floating-point numbers; (3) the
computations needed to have a close approximation of the projection of only one given
constraint may be very expensive; (4) the projection may be discontinuous whereas it is
much easier to handle only closed intervals for the domains of the variables.
Thus, what is usually done is that the projection of a constraint over a variable is
approximated. Let π̃_{C_j,x_i}(D) or also π̃_{j,i}(D) denote such an approximation. In order to
guarantee that all solutions of a numeric CSP can be found, a solving algorithm that uses
π̃_{j,i}(D) needs that π̃_{j,i}(D) includes the exact projection. We will also assume in the rest
of the paper that π̃_{j,i}(D) satisfies a contractance property. Thus we have:
π_{j,i}(D) ⊆ π̃_{j,i}(D) ⊆ D_i.
π̃_{j,i}(D) hides all the problems seen above. In particular, it allows us not to go into the
details of the relationships between floating point and real numbers (see for example [2]
for those relationships) and to consider only real numbers. It only remains to build such a
π̃_{j,i}. Interval analysis [37] makes it possible.
2.1.1. Interval arithmetic
Interval arithmetic [37], on which interval analysis is built, is an extension of real
arithmetic. It defines the arithmetic functions {+, −, ×, /} over the intervals with simple
set extension semantics.
Notation. To present interval arithmetic, we will use the following convention to help the
reading: x, y will denote real variables or vectors of real variables and X,Y will denote
interval variables or vectors of interval variables. Distinction between a scalar variable and
a vector of variables will be clear from the context.
With this notation, an arithmetic function ⋄ ∈ {+, −, ×, /} over the intervals is defined
by:
X ⋄ Y = { x ⋄ y | x ∈ X, y ∈ Y }.
Thanks to the monotonicity property of the arithmetic operators ⋄, X ⋄ Y can be computed
by considering the bounds of the intervals only. Let X, Y ∈ I(R), X = [X̲, X̄], and
Y = [Y̲, Ȳ]; the arithmetic operators are computed on intervals as follows:
X + Y = [X̲ + Y̲, X̄ + Ȳ],   X − Y = [X̲ − Ȳ, X̄ − Y̲],
X × Y = [min(X̲Y̲, X̲Ȳ, X̄Y̲, X̄Ȳ), max(X̲Y̲, X̲Ȳ, X̄Y̲, X̄Ȳ)],
X / Y = X × [1/Ȳ, 1/Y̲] when 0 ∉ Y.
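As an illustration of these bound computations, here is a small Python sketch of interval
addition, subtraction, multiplication and division; the class name is ours, division is
restricted to divisors not containing 0, and outward rounding of the floating-point bounds
is ignored. The final lines show the natural extension at work on an expression with two
occurrences of the same variable (the function used is purely illustrative).

    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __sub__(self, other):
            return Interval(self.lo - other.hi, self.hi - other.lo)
        def __mul__(self, other):
            p = (self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi)
            return Interval(min(p), max(p))
        def __truediv__(self, other):
            # assumes 0 is not contained in the divisor
            return self * Interval(1.0 / other.hi, 1.0 / other.lo)

    # natural extension of f(x) = x*(x - 1) + 1 evaluated over X = [0, 2]
    X = Interval(0.0, 2.0)
    one = Interval(1.0, 1.0)
    print(X * (X - one) + one)   # prints [-1.0, 3.0], wider than the true range [0.75, 3]
                                 # because the two occurrences of x are treated independently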
2.1.2. Interval extension of a real function
For an arbitrary function over the real numbers, it is not possible in general to compute
the exact enclosure of the range of the function [29]. The concept of interval extension
has been introduced by Moore: the interval extension of a function is an interval function
that computes outer approximations on the range of the function over a domain. Different
interval extensions exist. Let f be a function over the real numbers defined over the
variables x 1 , . , x n , the following interval extensions are frequently used:
the natural interval extension of a real function f is defined by replacing each
real operator by its interval counterpart. It is easy to see that
contains
the range of f , and is thus an interval extension.
Example 1 (Natural extension of x 2
The natural extension of x 2
2.
The natural extension of x 2
the Taylor interval extension of a real function f, over the interval vector X,
is defined by the natural extension of a first-order Taylor development of f [42]:
f_tay(X) = f(m(X)) + Σ_{i=1..n} (∂f/∂x_i)_nat(X) × (X_i − m(X_i)). (2)
The intuition why f_tay is an interval extension is given in a footnote. 2
Example 2 (Taylor extension of x - x 2 ). Let
The Taylor extension of
The Taylor extension gives generally a better enclosure than the natural extension on
small intervals. 3 Nevertheless, in general neither f_nat nor f_tay
give the exact range of
f. For example, let f(x) = x(x − 1) + 1 and X = [0, 2]; we have
f_nat(X) = [-1,3] ,
whereas the range of f over X = [0, 2]
is [3/4, 3].
2 The Taylor interval extension comes from a direct application of the mean value theorem: Let f be a real
function defined over [a,
be continuous and with a continuous derivative over [a,
be two
points in [a,
. Then, there exists # between x 1 and x 2 such that
#)
# is unknown, but what can be done is to replace it by an interval that contains it, and to evaluate the natural
extension of the resulting expression. Thus we know that
#( [a,
As this is true
for every x 1 and x 2 in [a,
, we can replace x 1 by the midpoint of [a,
by an interval that contains it.
This leads to
#( [a,
#( [a,
(2) is the generalization for vectors of the above result.
3 The Taylor extension has a quadratic convergence, whereas the natural extension has a linear convergence;
see for example [42].
2.1.3. Solution function of a constraint
To compute the projection #
j,i( D) of the constraint C j on the variable x i , we need to
introduce the concept of solution function that expresses the variable x i in terms of the
other variables of the constraint. For example, for the constraint x z, the solution
functions are: y, f
Assume a solution function is known that expresses the variable x i in terms of the other
variables of the constraint. Thus an approximation of the projection of the constraint over
x i given a domain D can be computed thanks to any interval extension of this solution
function. Thus we have a way to compute #
D).
Nevertheless, for complex constraints, there may not exist such an analytic solution
for example, consider x
log( The interest of numeric methods as
presented in this paper is precisely for those constraints that cannot be solved algebraically.
Three main approaches have been proposed:
. The first one exploits the fact that analytic functions always exist when the variable to
express in terms of the others appears only one time in the constraint. This approach
simply considers that each occurrence of a variable is a different new variable. In
the previous example this would give:
log( x( That way, it is trivial to
compute a solution function: it suffices to know the inverse of basic operators. In our
example, we obtain f
log( x( 2) ) and f
An approximation of the projection of the constraint over x i can be computed by
intersecting the natural interval extensions of the solution functions for all occurrences
of x i in C j . For the last example, we could take #
log( X)#exp -X .
Projection functions obtained by this way will be called # nat in this paper.
. The second idea uses the Taylor extension to transform the constraint into an interval
linear constraint. The nonlinear equation
nat #f
m( X). Now consider that the derivatives are evaluated over a box D that
contains X. D is considered as constant, and let c
D). The equation becomes:
nat #f
#( D)
This is an interval linear equation in X, which does not contain multiple occurrences.
The solution functions could be extracted easily. But, instead of computing the solution
functions of the constraint without taking into account the other constraints, we may
prefer to group together several linear equations in a squared system. Solving the
squared interval linear system allows much more precise approximations of projections
to be computed. (See the following section.) Projection functions obtained by this way
are called # Tay . For example, consider the constraint x
log( by using the
Taylor form on the box D, we obtain the following interval linear equation
log( c)
that is:
log( c) - c/D. The unique solution function of this 1-
dimensional linear equation is straightforward: X =-B/A.
. A third approach [6] does not use any analytical solution function. Instead, it
transforms the constraint C
1, . , k. The mono-variable constraint C j,l on variable x
is obtained by substituting
their intervals for the other variables. The projection # j,j l
is computed thanks to C j,l .
The smallest zero of C j,l in the interval under consideration is a lower bound for the
projection of C j over x j . And the greatest zero of C j,l is an upper bound for that
projection. Hence, an interval with those two zeros as bounds gives an approximation
of the projection. Projection functions computed in that way are called # box .
In [6], the two extremal zeros of C j,l are found by a mono-variable version of the
interval Newton method. 4
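For the first approach, the projections of a simple ternary constraint can be written
directly with interval arithmetic. The sketch below gives π_nat-style projections of the
constraint x + y = z, with intervals written as plain (lo, hi) pairs; the function names
are ours, the projections are applied sequentially as in classic implementations, and
outward rounding is again ignored.

    def inter(a, b):
        # intersection of two intervals given as (lo, hi) pairs; may be empty (lo > hi)
        return (max(a[0], b[0]), min(a[1], b[1]))

    def narrow_sum(x, y, z):
        # one application of the projections of the constraint x + y = z
        z = inter(z, (x[0] + y[0], x[1] + y[1]))   # z in x + y
        x = inter(x, (z[0] - y[1], z[1] - y[0]))   # x in z - y
        y = inter(y, (z[0] - x[1], z[1] - x[0]))   # y in z - x
        return x, y, z

    print(narrow_sum((0, 10), (2, 3), (0, 4)))     # ((0, 2), (2, 3), (2, 4))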
Another problem is that the inverse of a nonmonotonic function is not a function over the
intervals. For example the range of the inverse of the function
for an interval
Y is the union of
Y ). It is possible to extend interval arithmetic
in order to handle unions of intervals. A few systems have taken this approach [26,44].
Nevertheless, this approach may lead to a highly increasing number of intervals. The
two other approaches more commonly used consist of computing the smallest interval
encompassing a union of
or to split the problem in several sub-problems in which only intervals appear.
2.2. Filtering algorithm as fixed point algorithms
A filtering algorithm can generally be seen as a fixed point algorithm. In the following,
an abstraction of filtering algorithms will be used: the sequence {D k } of domains generated
by the iterative application of an operator Op : I(R)^n → I(R)^n (see Fig. 1).
The operator Op of a filtering algorithm generally satisfies the following three properties:
. Op(D) ⊆ D (contractance).
. Op is conservative; that is, it cannot remove solutions.
. D' ⊆ D implies Op(D') ⊆ Op(D) (monotonicity).
Under those conditions, the limit of the sequence {D k }, which corresponds to the greatest
fixed point of the operator Op, exists and is called a closure. We denote it by
#Op( D).
A fixed point for Op may be characterized by a property lc-consistency, called a local
consistency, and alternatively
#Op( D) will be denoted by #
lc( D). The algorithm achieving
filtering by lc-consistency is denoted lc-filtering. A CSP is said to be lc-satisfiable if lc-
filtering of this CSP does not produce an empty domain.
4 The general (multi-variable) interval Newton method is briefly presented in Section 2.3.
Fig. 1. Filtering algorithms as fixed point algorithms.
Consistencies used in numeric CSPs solvers can be categorized in two main classes:
arc-consistency-like consistencies and strong consistencies.
2.3. Arc-consistency-like consistencies
Most of the numeric CSP systems (e.g., BNR-prolog [40], Interlog [13,16], CLP(BNR)
[5], PrologIV [15], UniCalc [3], Ilog Solver [27] and Numerica [46] compute an
approximation of arc-consistency [36] which will be named 2B-consistency in this paper. 5
2B-consistency states a local property on a constraint and on the bounds of the domains of
its variables (B of 2B-consistency stands for bound). Roughly speaking, a constraint C j is
2B-consistent if for any variable x i in
the bounds D i and D i have a support in the
domains of all other variables of C j (w.r.t. the approximation given by # ). 2B-consistency
can be defined in our notation as:
2B-consistent if and only if
A filtering algorithm that achieves 2B-consistency can be derived from Fig. 1 by
instantiating Op as in Operator 1. Note the operator Op 2B applies on the same vector D all
the #
j,i( D) operators.
Operator 1 (2B-consistency filtering operator).
Op_2B(D) := ⟨ ⋂_j π̃_{j,1}(D), . . . , ⋂_j π̃_{j,n}(D) ⟩
Fig. 2 shows how projection functions are used by a 2B-consistency filtering algorithm
to reduce the domains of the variables.
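As a sketch of how such an operator is iterated up to its fixed point (Fig. 1), the
following Python fragment applies a set of projection functions to the same box until no
domain changes, mirroring the abstraction above in which all projections are applied to
the same vector D. The box representation (a dictionary of (lo, hi) pairs per variable)
and the function names are illustrative.

    def fixed_point(projections, box):
        # projections: functions proj(box) -> (variable, (lo, hi)) approximating one projection
        # box: dict mapping each variable to its current interval (lo, hi)
        changed = True
        while changed:
            changed = False
            new_box = dict(box)
            for proj in projections:            # all projections evaluated on the same box
                var, (lo, hi) = proj(box)
                lo = max(lo, new_box[var][0])
                hi = min(hi, new_box[var][1])
                if lo > hi:
                    return None                 # empty domain: no solution in this box
                if (lo, hi) != new_box[var]:
                    new_box[var] = (lo, hi)
                    changed = True
            box = new_box
        return box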
Depending on the projection functions used, we obtain different 2B-filtering algorithms
Op nat The operator Op nat will denote Op 2B with # nat . It abstracts the filtering algorithm
presented in [5,17,32]. There are two main differences between our abstraction
and the implementations.
(1) In classic implementations, projection functions are applied sequentially
and not all on the same domain. In the abstraction (and in our non-classic
5 We have a lot of freedom to choose #
j,i( D), so the definition of 2B-consistency here abstracts both 2B-
consistency in [32] and box-consistency in [6].
Fig. 2. 2B-filtering on the constraint system {x 2
implementations) they are applied on the same domain. This has the drawback
of increasing the upper bound of the complexity, but has the advantage of
generating much more "regular" sequences of domains. (See Section 3.2.)
(2) Implementations always applied an AC3-like optimization [36]. It consists of
applying at each iteration only those projection functions that may reduce a
domain: only the projection functions that have in their parameters a variable
whose domain has changed are applied. For the sake of simplicity, AC3-like
optimization does not appear explicitly in this algorithm schema.
Op box This operator denotes Op 2B that uses # box . It abstracts the filtering algorithm
presented in [6,45]. Differences with our abstraction are the same as above.
Op Tay This operator denotes Op 2B that uses # Tay . It abstracts the interval Newton
method [2,37]. The interval Newton method controls in a precise way the order in
which projection functions are computed. It is used for solving squared nonlinear
equation systems such as
0}. The
interval Newton method replaces the solving of the nonlinear squared system by
the solving of a sequence of interval linear squared systems. Each linear system
is obtained by evaluating the interval Jacobi matrix over the current domains,
and by considering the first-order Taylor approximation of the nonlinear system.
The resulting interval linear system is typically solved by the interval Gauss-Seidel
method. The Gauss-Seidel method associates each constraint C i with the
variable x i (after a possible renaming of variables), and loops while applying only
the projection functions # i,i .
To summarize, the main differences with our abstraction are that, in an
implementation, the partial derivatives are recomputed periodically and not at
each step, and that the Gauss-Seidel method does not apply all the projection
functions. A more realistic implementation of the Interval Newton method would
correspond to Operator 2 as follows. 6
Operator 2 (Interval Newton operator).
Op
A i,i
endfor
Note also that, in general, the Gauss-Seidel method does not converge towards
the solution of the interval linear system, but it has good convergence properties
for diagonally-dominant matrices. So, in practice, before solving the linear
system, a preconditioning step is achieved that transforms the Jacobi matrix
into a diagonally dominant matrix. Preconditioning consists of multiplying the
interval linear equation A # by a matrix M , giving the new linear system
. The matrix M is typically the inverse
of the midpoint matrix of A.
A nice property of the interval Newton operator is that in some cases, it is
able to prove the existence of a solution. When Op
Tay( D) is a strict subset of
D, Brouwer's fixed-point theorem applies and states existence and unicity of a
solution in D (cf. [38]).
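One step of the interval Gauss-Seidel update used inside such an operator can be sketched
as follows for a preconditioned interval linear system A · X = B whose diagonal entries do
not contain zero; intervals are plain (lo, hi) pairs, the helper names are ours, and
outward rounding is ignored.

    def iadd(a, b): return (a[0] + b[0], a[1] + b[1])
    def isub(a, b): return (a[0] - b[1], a[1] - b[0])
    def imul(a, b):
        p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
        return (min(p), max(p))
    def idiv(a, b):
        # assumes 0 is not contained in b
        return imul(a, (1.0 / b[1], 1.0 / b[0]))
    def iinter(a, b): return (max(a[0], b[0]), min(a[1], b[1]))

    def gauss_seidel_step(A, B, X):
        # A: n x n matrix of intervals, B: vector of intervals, X: current domains (updated in place)
        n = len(X)
        for i in range(n):
            s = B[i]
            for k in range(n):
                if k != i:
                    s = isub(s, imul(A[i][k], X[k]))
            X[i] = iinter(X[i], idiv(s, A[i][i]))   # X_i := X_i  intersected with  (B_i - sum A_ik X_k) / A_ii
        return X

Because X is updated in place, later rows immediately benefit from the domains narrowed by
earlier rows, which is the usual sequential flavour of Gauss-Seidel.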
2.4. Strong consistencies
The idea of constraint satisfaction is to tackle difficult problems by solving easy-to-
solve sub-problems: the constraints taken individually. It is often worth to have a more
global view, which generally leads to a better enclosure of the domains. This is why strong
consistencies have been proposed for solving CSP [21,22]. Their adaptation to numeric
CSPs is summarized in this section. Interval analysis methods such as Op Tay extensively
use another kind of global view: the preconditioning of the Jacobi matrix. Nevertheless,
the need for strong consistencies, although less crucial with interval analysis methods,
may appear for very hard problems such as [43].
Strong consistencies have first been introduced over discrete CSPs (e.g., path-
consistency, k-consistency [21]
)-consistency [22]), and then over numeric CSPs
6 The for-loop corresponds to only one iteration of the Gauss-Seidel method and not to the complete solving
of the interval linear system, which in practice is not useful [24].
(3B-consistency [32] and kB-consistency [33]). kB-consistency is the adaptation of this kind of consistency to numeric CSPs. Filtering by such a consistency is done by removing from each domain the values that cannot be extended to k variables. kB-consistency ensures that when a variable is instantiated to one of its two bounds, then the CSP is (k-1)B-satisfiable. For 2B-consistency we refer to Operator 1. More generally, as given in Definition 2, kB(w)-consistency ensures that when a variable is forced to be close to one of its two bounds (more precisely, at a distance less than w), then the CSP is (k-1)B(w)-satisfiable. For the simplest presentation, 2B(w)-consistency refers to 2B-consistency.
Definition 2 (kB(w)-consistency). We say that a CSP ⟨X, D, C⟩ is kB(w)-consistent if and only if, for every variable x_i, both sub-CSPs obtained from ⟨X, D, C⟩ by replacing D_i with [D_i.lo, min(D_i.lo + w, D_i.hi)] and with [max(D_i.hi - w, D_i.lo), D_i.hi], respectively, are (k-1)B(w)-satisfiable.
The direct filtering operator Op_kB(w) underlying kB(w)-consistency uses a kind of proof by contradiction: the algorithm tries to increase the lower bound of D_i by proving that the closure by (k-1)B(w)-consistency of ⟨D_1, ..., [D_i.lo, D_i.lo + w], ..., D_n⟩ is empty, and tries to decrease the upper bound in a symmetric way.
3B-consistency filtering algorithms, used for example in Interlog, Ilog Solver or
Numerica, can be derived from Fig. 1 by instantiating operator Op to Op 3B as defined
in Operator 3.
Operator 3 (kB(w)-consistency filtering operator Op_kB(w)). The filtering operator Op_kB(w)(P), with k ≥ 3, is defined as Op_kB(w)(D) = D', D' being computed as follows:
   for i := 1 to n do
      while the (k-1)B(w)-closure of ⟨D_1, ..., [D_i.lo, D_i.lo + w], ..., D_n⟩ is empty do
         D_i.lo := D_i.lo + w
      while the (k-1)B(w)-closure of ⟨D_1, ..., [D_i.hi - w, D_i.hi], ..., D_n⟩ is empty do
         D_i.hi := D_i.hi - w
   endfor
Fig. 3 shows how 3B(w)-filtering uses 2B-filtering.
Fig. 3. 3B(w)-filtering on a small constraint system.
Implementations using this schema may be optimized considerably, but we do not
need to go into details here. The reader is referred to [32] for the initial algorithm,
and to [12] which studies the complexity of an unpublished implementation we used
for years (see for example [30]) and that is more efficient than the algorithm published
in [32].
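As an illustration of the shaving scheme behind Operator 3, here is a small Python sketch of 3B(w)-style bound shrinking. It assumes a helper two_b_closure(domains) that returns the 2B-consistent closure of a box (or None when that closure is empty); this helper, the outer fixed-point loop and the crude handling of degenerate domains are our simplifications, not the optimized algorithms of [32] or [12].

def shave_3b(two_b_closure, domains, w):
    """3B(w)-style shaving sketch; domains is a list of (lo, hi) pairs."""
    def with_slice(doms, i, interval):
        new = list(doms)
        new[i] = interval
        return new
    changed = True
    while changed:                                   # outer fixed-point loop
        changed = False
        for i, (lo, hi) in enumerate(domains):
            # raise the lower bound while the slice [lo, lo + w] is refuted
            while lo < hi and two_b_closure(
                    with_slice(domains, i, (lo, min(lo + w, hi)))) is None:
                lo = min(lo + w, hi)
                changed = True
            # lower the upper bound symmetrically
            while lo < hi and two_b_closure(
                    with_slice(domains, i, (max(hi - w, lo), hi))) is None:
                hi = max(hi - w, lo)
                changed = True
            domains[i] = (lo, hi)
    return domains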
The algorithm that achieves box-consistency is closely related to 3B-consistency.
Indeed, box-consistency can be seen as a kind of one-way 3B-consistency limited to one
constraint. The reader can find in [14] a theoretical comparison between box-consistency
and 3B-consistency.
3. Acceleration of filtering techniques
The question of choosing the best filtering algorithm for a given constraint system is an
open problem. Some preliminary answers may come from the observation that the above
fixed point algorithms suffer from two main drawbacks, which are tightly related:
. the existence of "slow convergences", leading to unacceptable response times for
certain constraint systems;
. "early quiescence" [17], i.e., the algorithm stops before reaching a good approximation
of the set of possible values.
The focus of this paper is on the first drawback. Its acuteness varies according to the Op
operator:
Op nat Due to its local view of constraints, Op nat often suffers from early quiescence, but
its simplicity makes it the most efficient operator to compute, and many problems
are best solved by this filtering operator (e.g., Moreaux problem [46]). At first
sight, one could think that slow convergence phenomena do not occur very often
with Op nat . It is true that early quiescence of Op nat is far more frequent than
slow convergence. However, Op nat is typically interleaved with a tree search
(or is called from inside another higher-order filtering algorithm). During this
interleaved process, slow convergence phenomena may occur and considerably
increase the required computing time.
Op box The comments above remain true for Op box , although it may take more time to be
computed and may perform some stronger pruning in some cases.
Op Tay The interval Newton operator, on the one hand, may have a very efficient behavior.
It may have an asymptotically quadratic convergence when it is used near the
solution. In our experience, quadratic convergence is essential to compute precise
roots of nonlinear systems of equations.
On the other hand, far from the solution, the Jacobi matrix has a great chance
of being singular, which typically leads to the "early quiescence" problem. Hence
Op Tay does not have really slow convergence problems, but it needs expensive
computation since the preconditioning of the Jacobi matrix needs to compute an
inversion of its midpoint matrix. On some problems like the Moreaux problem [46]
with huge dimension (n = 320), Op Tay is very expensive, whereas with Op nat the
solution is found quickly.
Op kB(w) The kB(w)-consistency filtering algorithms may perform a very strong pruning,
making the tree-search almost useless for many problems.
For example, we have tried kB(w)-filtering over the transistor problem [41,43].
It finds the unique solution, without search, in the same cpu time as the
filtering search method used in [41]. We have also tried kB(w)-filtering over
the benchmarks listed in [45]. They are all solved without search (only p choice
points are made when the system has p solutions). Unfortunately,
most of the time slow convergence phenomena occur during a kB(w)-filtering.
The different filtering algorithms are thus complementary and the most robust way to
solve a problem is probably to use several of them together. In the fixed point schema of
Fig. 1, the operator Op would be the result of the composition of some operators above. In
the remainder of this section, we focus on the problem of slow convergence that occurs in
Op nat and Op kB(w).
The observation of many slow convergences of those algorithms led us to notice
that some kinds of "regularity" often exist in a slow convergence phenomenon. Our
intuition was that such regularities in the behavior of algorithms could be exploited to
optimize their convergence. As seen in Section 2, the filtering algorithms are abstracted
through a sequence of interval vectors. Accelerating a filtering algorithm thus consists in
transforming that sequence into another sequence, hoping it converges faster. In numerical
analysis, engineers use such transformation methods. Unfortunately, they cannot be sure of
the reliability of their results. But this does not change the essence of usual floating-point
computation: unreliability is everywhere! For filtering techniques, the completeness of the
results must be guaranteed, or in other words, no solution of the CSP can be lost. Thus the
question of reliability becomes crucial. This leads us to define a reliable transformation.
Definition 3 (Reliable transformation). Let {S_n} be a sequence that is complete for a set of solutions Sol: ∀k, Sol ⊆ S_k. Let A be a transformation and let {T_n} = A({S_n}). A is a reliable transformation for {S_n} w.r.t. Sol if and only if ∀k, Sol ⊆ T_k.
The practical interest of a reliable transformation is directly related to its ability to
accelerate the greatest number of sequences. Acceleration of a sequence is traditionally
defined in terms of improvement of the convergence order of the sequence. Convergence
order characterizes the asymptotic behavior of the sequences. (See Section 3.2 for a formal
definition of the convergence order.) In addition to convergence order, some practical
criteria may be of importance, like, for example, the time needed to compute a term of
a sequence.
To build a reliable transformation that accelerates the original sequence, we will exploit
some regularities in the sequence. When we detect a regularity in the filtering sequence, the
general idea is to assume that this regularity will continue to appear in the following part of
the sequence. The regularities that we are looking for are those which allow computations
to be saved. A first kind of regularity that we may want to exploit is cyclicity. Section 3.1
summarizes a previous work based on that idea. Another kind of regularity, that can be
caught by extrapolation methods, is then developed in Section 3.2.
3.1. A previous work: dynamic cycle simplification
This subsection summarizes a previous work [34,35], built on the idea that there is
a strong connection between the existence of cyclic phenomena and slow convergence.
More precisely, slow convergence phenomena move very often into cyclic phenomena after
a transient period (a kind of stabilization step). The main goal is to dynamically identify
cyclic phenomena while executing a filtering algorithm and then to simplify them in order
to improve performance.
This subsection is more especially dedicated to the acceleration of the Op nat and Op box algorithms, but:
. a direct use of those accelerated algorithms also leads to significant gains in speed for kB(w)-filtering algorithms, since they typically require numerous computations of 2B-filtering;
. this approach could be generalized to identify cyclic phenomena in kB(w)-filtering algorithms.
Considering the application of Op 2B over D i , there may exist several projection
functions that perform a reduction of the domain of a given variable. As Op 2B performs an
intersection, and since domains are intervals, there may be 0, 1 or 2 projection functions
of interest for each variable. (One that gives the greatest lower bound, one that gives the
lowest upper bound.) Call these projection functions relevant for D i , and denote by R i the
set of those relevant projection functions for D i .
Thus, applying only the projection functions in R_i yields the same result on D_i as applying all of them; that is, if we know in advance all the R_i, we can compute the 2B-closure of D more efficiently by applying only the relevant projection functions. This
is precisely the case in a cyclic phenomenon.
We will say we have a cyclic phenomenon of period p when:
∀i < N, R_{i+p} = R_i,
where N is a "big" number.
Now, consider R_i and R_{i+1}. If a projection function is in R_{i+1}, this is due to the reduction of domains performed by some projection functions in R_i. We will say that f ∈ R_j depends on g ∈ R_i, where j > i, denoted by g → f, if and only if g ≠ f and g computes the projection over a variable that belongs to the constraint associated with f.
The dependency graph is the graph whose vertices are pairs ⟨f, i⟩, where f ∈ R_i, and arcs are dependency links. (See Fig. 4(a).) If we assume that we are in a cyclic phenomenon, then the graph is cyclic. (See Fig. 4(b), where ī denotes all the steps i with the same value of i mod p.) According to this assumption, two types of simplification can be performed:
. Avoid the application of non-relevant projection functions.
. Postpone some projection functions: a vertex #f, i# which does not have any successor
in the dynamic dependency graph corresponds to a projection function that can be
postponed. Such a vertex can be removed from the dynamic dependency graph.
Applying this principle recursively will remove all non-cyclic paths from the graph.
For instance, in graph (b) of Fig. 4, all white arrows will be pruned.
When a vertex is removed, the corresponding projection function is pushed onto a
stack. (The removing order must be preserved.) Then, it suffices to iterate on the
simplified cycle until a fixed point is reached, and, when the fixed point has been
reached, to evaluate the stacked projection functions. (A small sketch of this peeling step is given below.)
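A minimal Python sketch of the peeling step described in the second item, assuming the dynamic dependency graph is given as a successor map; the names and data structures are illustrative only.

def postpone_sinks(vertices, succ):
    """Peel off vertices (projection functions) that have no successor left in
    the dynamic dependency graph; succ maps each vertex to a set of vertices.
    Returns the simplified cycle and the stack of postponed functions."""
    remaining = set(vertices)
    postponed = []                       # removal order is preserved
    while True:
        sinks = [v for v in remaining if not (succ[v] & remaining)]
        if not sinks:
            break
        for v in sinks:
            remaining.remove(v)
            postponed.append(v)
    return remaining, postponed

One then iterates on the functions in `remaining` until the fixed point is reached, and finally evaluates the functions stacked in `postponed`.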
The transformation that corresponds to the above two simplifications together is clearly
a reliable transformation. It does not change the convergence order, but is in general
an accelerating transformation.
Fig. 4. Dynamic dependency graphs.
In [34] first experimental results are reported; gains in efficiency range from 6 to 20 times faster for 2B-filtering and 3B(w)-filtering. More
complete experiments have been performed in [23], but, for the sake of simplicity, only the
first simplification (applying only relevant projection functions) has been tried. Different
combinations of several improvements of 2B-filtering are tested. For all problems, the
fastest combination uses cycle simplification. Ratio in CPU time varies from 1 to 20
compared with the same combination without cycle simplification.
3.2. Extrapolation
The previous section aims at exploiting cyclicity in the way projection functions are
applied. The gain is in the computation of each term of {D n }, but the speed of convergence
of D n is unchanged. Now we address how to accelerate the convergence of {D n }.
{D n } is a sequence of intervals. Numerical analysis provides different mathematical
tools for accelerating the convergence of sequences of real numbers. Extrapolation methods
are especially interesting for our purposes, but {D n } is a sequence of interval vectors and
there does not exist any extrapolation method to accelerate interval sequences. Nevertheless
an interval can be seen as two reals and D can be seen as a 2-column matrix of reals. The
first column is the lower bounds, and the second the upper bounds. Thus we can apply
the existing extrapolation methods. The field of extrapolation methods, for real number
sequences, is first summarized; for a deeper overview see [10]. Then we will show how to
use extrapolation methods for accelerating filtering algorithms.
3.2.1. Extrapolation methods
Let {S_n} (n = 0, 1, ...) be a sequence of real numbers. A sequence {S_n} converges if and only if it has a limit S: lim_{n→∞} S_n = S. We say that the numeric sequence {S_n} has order r ≥ 1 if there exist two finite constants A and B such that 7
A ≤ lim_{n→∞} |S_{n+1} - S| / |S_n - S|^r ≤ B.
A quadratic sequence is a sequence which has order 2. We say that a sequence is linear if
0 < lim_{n→∞} |S_{n+1} - S| / |S_n - S| < 1.
The convergence order enables us to know exactly the convergence speed of the sequence.
For example [8], for some linear sequences we obtain one more significant digit only every 2500 iterations, whereas, for sequences of order 1.01, the number of significant digits doubles every 70 iterations. These examples show the interest of using sequences
of order r > 1.
Accelerating the convergence of a sequence {S_n} amounts to applying a transformation A which produces a new sequence {T_n}: {T_n} = A({S_n}).
7 For more details, see [10].
As given in [10], in order to present some practical interest, the new sequence {T n } must
exhibit, at least for some particular classes of convergent sequences {S n }, the following
properties:
(1) {T_n} converges to the same limit as {S_n}: lim_{n→∞} T_n = S;
(2) {T_n} converges faster than {S_n}: lim_{n→∞} (T_n - S)/(S_n - S) = 0.
These properties do not hold for all converging sequences. Particularly, a universal transformation A accelerating all converging sequences cannot exist [18]. Thus any given transformation can accelerate only a limited class of sequences. This leads us to the so-called kernel 8 of the transformation, which is the set of convergent sequences {S_n} for which the transformed sequence is constant: ∀n, T_n = S.
A well-known transformation is the iterated Δ² process from Aitken [1], which gives a sequence {T_n} of nth term
T_n = S_n - (S_{n+1} - S_n)² / (S_{n+2} - 2 S_{n+1} + S_n).
The kernel of the Δ² process is the set of the converging sequences which have the form S_n = S + a λ^n (a ≠ 0, λ ≠ 1). Aitken's transformation has a nice property [10]: it transforms sequences with linear convergence into sequences with quadratic convergence.
We can apply the transformation several times, leading to a new transformation. For example, we can apply Δ² twice, giving Δ²(Δ²({S_n})). Many acceleration transformations (the G-algorithm, the ε-algorithm, the θ-algorithm, the Overholt process, ...) are multiple applications of simpler transformations. See [11] and [9] for attempts to build a unifying framework of transformations. Scalar transformations have been generalized to the vectorial and matrix cases.
Two kinds of optimization for filtering algorithms are now given. The first one makes
a direct use of extrapolation methods and leads to a transformation which is not reliable.
The second one is a reliable transformation.
3.2.2. Applying extrapolation directly
Let {D_n} be a sequence generated by a filtering algorithm. We can naively apply an extrapolation method directly to the main sequence {D_n}. The experimental results given in the rest of the paper are for scalar extrapolations, which consider each element of the matrix (each bound of a domain) independently of the others. For example, the scalar Δ² process uses, for each domain bound, the last three different values to extrapolate a value.
8 The definition of the kernel given here considers only converging sequences.
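The following Python code is a minimal sketch of this scalar use of Aitken's Δ² process, applied independently to every bound of every domain; the guard against a vanishing denominator is our own implementation assumption.

def aitken(s0, s1, s2, eps=1e-30):
    """Aitken's delta-squared applied to three successive values of one bound."""
    den = s2 - 2.0 * s1 + s0
    if abs(den) < eps:           # the bound has (numerically) stopped moving
        return s2
    return s0 - (s1 - s0) ** 2 / den

def extrapolate_box(d0, d1, d2):
    """d0, d1, d2: three successive interval vectors produced by filtering."""
    return [(aitken(a0, a1, a2), aitken(b0, b1, b2))
            for (a0, b0), (a1, b1), (a2, b2) in zip(d0, d1, d2)]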
Accelerating directly the convergence of {D n } can dramatically boost the convergence,
as illustrated in the following problem:
[..., 1000], y ∈ [0, 1000], z ∈ [0, ...].
The following table shows the domain of the variable t in the 278th, 279th, 280th and 281st iterations of 3B-filtering (after a few seconds on a Sun Sparc 5). The precision obtained at this point is still limited:
it    t
278   [3.14133342842583..., 3.14159265358979...]
By applying Aitken's process on the domains of the iterations 278, 279 and 280, we obtain the domain below. The precision of this extrapolated domain is 10^-14. Such precision has not been obtained after 5 hours of the 3B-filtering algorithm without extrapolation.
[3.14159265358977..., 3.14159265358979...]
Let us take another example:
Table 1 shows the domain of the variables x and y in the first, second and third iterations of 5B(w)-filtering. The precision obtained is about 10^-6.
Table 1. 5B(w)-filtering on the problem above: iteration domains for x and y.
By applying the Δ² process on the domains of the iterations 1, 2 and 3, we obtain the domains below. The precision of this extrapolated domain is 10^-19. Such a precision has not been obtained after many hours of the 5B(w)-filtering algorithm without extrapolation.
[-8.93e-19, -8.85e-19]
This result is not surprising since we have the following proposition:
Theorem 1 (Convergence property of Aitken's process [7]). If we apply Δ² to a sequence {S_n} which converges to S and if we have
lim_{n→∞} (S_{n+1} - S)/(S_n - S) = λ, with λ ≠ 1,
then the sequence Δ²({S_n}) converges to S, and more quickly than {S_n}.
Note that, in the solution provided by Aitken's process, we have a valid result for x ,
but not for y . This example shows that extrapolation methods can lose solutions. The
extrapolated sequence may or may not converge to the same limit as the initial sequence.
This anomaly can be explained by the kernel of the transformation: when the initial
sequence belongs to the kernel, then we are sure that the extrapolated sequence converges
to the same limit. Furthermore, intuition suggests that, if the initial sequence is "close" to
the kernel then there are good hopes to get the same limit. However, it may be the case that
the limits are quite different. This is cumbersome for the filtering algorithms which must
ensure that no solution is lost.
We propose below a reliable transformation that makes use of extrapolation.
3.2.3. Reliable transformation by extrapolation
The reliable transformation presented in this section is related to the domain sequences
generated by kB(w)-filtering algorithms. For the sake of simplicity, we will only deal with 3B(w)-filtering, but the generalisation is straightforward.
This transformation is reliable thanks to the proof-by-contradiction mechanism used in
the 3B(w)-algorithm: it tries to prove -with a 2B-filtering- that no solution exists in a subpart
of a domain. If such a proof is found, then the subpart is removed from the domain, else
the subpart is not removed. The point is that we may waste a lot of time trying to find
a proof that does not exist. If we could predict with good probability that such a proof
does not exist, we could save time in not trying to find it. Extrapolation methods can do
the job. The idea is simply that if an extrapolated sequence converges to a 2B-satisfiable
CSP (which can be quickly known), then it is probably a good idea not to try to prove
the 2B-unsatisfiability. This can be done by defining a new consistency, called P2B-consistency, that is built upon the existence of a predicate 2B-predict(D) that predicts 2B-satisfiability. (P2B stands for 2B based on Prediction.)
Definition 4 (P2B-consistency). A CSP ⟨X, D, C⟩ is P2B-consistent if and only if it is 2B-consistent or 2B-predict(D) is true.
Fig. 5. P2B-consistency filtering schema (operator Op P2B).
The predicate 2B-predict may use extrapolation methods, for example the Δ² process. Thus,
the prediction 2B-predict(D) may be wrong, but from the Proposition 1 we know that
a filtering algorithm by P2B-consistency cannot lose any solutions.
Proposition 1. For every domain D, Op 2B(D) ⊆ Op P2B(D).
The proof is straightforward from the definition.
A filtering algorithm that achieves P2B-consistency can be a fixed point algorithm
where Op is as defined in Fig. 5. The main difference with Op 2B is, before testing for 2B-satisfiability, to try, in the function 2B-predict, to predict 2B-satisfiability by extrapolation methods. Following that idea, the algorithm schema can be modified into Fast-3B(w)-consistency, as given in Operator 4. (We may obtain in the same way the algorithm schema for Fast-kB(w)-consistency. It needs a P-kB operator that applies an extrapolator over the domains generated by the kB operator.)
Operator 4. The filtering operator Op Fast-3B(w) is defined as Op Fast-3B(w)(D) = D', D' being computed as follows:
   for i := 1 to n do
      while 2B-predict of ⟨D_1, ..., [D_i.lo, D_i.lo + w], ..., D_n⟩ is false and its 2B-closure is empty do
         D_i.lo := D_i.lo + w
      while 2B-predict of ⟨D_1, ..., [D_i.hi - w, D_i.hi], ..., D_n⟩ is false and its 2B-closure is empty do
         D_i.hi := D_i.hi - w
   endfor
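As an illustration, a possible 2B-predict could be sketched in Python as follows, taking the last three boxes produced by the inner 2B-filtering (oldest first) and extrapolating each bound with Aitken's process; this helper is an assumption of ours, not the implementation evaluated in the paper.

def predict_2b_satisfiable(boxes, eps=1e-30):
    """2B-predict sketch: boxes holds the last three boxes produced by the
    inner 2B-filtering (oldest first); each box is a list of (lo, hi) pairs.
    Every bound is extrapolated with Aitken's delta-squared process and the
    prediction is 'satisfiable' when the extrapolated box is still non-empty."""
    d0, d1, d2 = boxes[-3:]
    def aitken(s0, s1, s2):
        den = s2 - 2.0 * s1 + s0
        return s2 if abs(den) < eps else s0 - (s1 - s0) ** 2 / den
    guess = [(aitken(a0, a1, a2), aitken(b0, b1, b2))
             for (a0, b0), (a1, b1), (a2, b2) in zip(d0, d1, d2)]
    return all(lo <= hi for lo, hi in guess)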
The following proposition means that this algorithm schema allows acceleration
methods to be applied while keeping the completeness property of filtering algorithms.
We thus have a reliable transformation.
Proposition 2 (Completeness). The Fast-kB(w)-algorithm does not lose any solutions.
The proof is built on the fact that a domain is reduced only when we have a proof -by (k-1)B(w)-satisfiability and without extrapolation- that no solution exists for the removed part.
Table 2. Fast-3B(w)-filtering versus 3B(w)-filtering results over some benchmarks
Problem      nbr-π(Fast-kB)/nbr-π(kB)   time(Fast-kB)/time(kB)
brown
caprasse     0.46                       0.60
chemistry    0.61                       0.69
neuro-100    0.53                       0.66
The counterpart of this result is that the improvement in efficiency of Fast-kB(w)-filtering compared with kB(w)-filtering may be less satisfactory than the improvement provided
by direct use of extrapolation. Another counterpart is that the greatest fixed point of
Op Fast-kB(w) is generally greater than the greatest fixed point of Op kB(w).
In practice the overhead in time has always been negligible and the improvement in
efficiency may vary from 1 to 10. Table 2 compares Fast-3B-filtering with 3B-filtering
over some problems taken from [23,46]. It gives the ratios in time (time(Fast-kB)/time(kB)) and in number of projection function calls (nbr-π(Fast-kB)/nbr-π(kB)) for the two algorithms.
4. Related works
Two methods commonly used for solving numeric CSPs can be seen as reliable
transformations: preconditioning and adding redundant constraints.
4.1. Preconditioning in the interval Newton operator
Numeric CSPs allow general numeric problems to be expressed, without any limitation
on the form of the constraints. In numerical analysis, many specific cases of numeric CSPs
have been studied. The preconditioning of squared linear systems of equations is among
the most interesting results for its practical importance.
We say that a linear system of equations A . x = b is well conditioned when the condition number of A is near 1.
In practice, a well conditioned system is better solved than an ill conditioned one.
Preconditioning methods transform the system into a new system A' . x = b' that has the same solution but is better conditioned than the first system. Solving A' . x = b' gives better precision and more reliable computations than solving the original system. A classic preconditioning method consists of multiplying the two sides of the system by an approximate inverse M of A. Thus we have A' = MA and b' = Mb.
In interval analysis, the interest of preconditioning is not reliability, which already exists
in interval methods, but precision and convergence. As already presented in Section 2.3,
preconditioning is a key component of the interval Newton method. Experimental results
(for example see [28,45]) show the effectiveness of preconditioning for solving squared
nonlinear systems of equations. Many theoretical results can be found in [2,28,38,39].
4.2. Redundant constraints
A classic reliable transformation is the adding of some redundant constraints to the
original constraint system. This approach is very often used for discrete CSPs to accelerate
the algorithms. It is not the case for interval analysis methods over numeric CSPs, since
they exploit the fact that the system is square. For artificial intelligence methods over
numeric CSPs, Benhamou and Granvilliers [4] propose to add some redundant polynomial
constraints that are automatically generated by a depth-bounded Groebner bases algorithm.
5. Conclusion and perspectives
Our aim in this paper was to accelerate existing filtering algorithms. That led us
to the concept of reliable transformation over the filtering algorithms, which preserves
completeness of the filtering algorithms. Two kinds of reliable transformation have been
proposed. They exploit some regularities in the behavior of the filtering algorithms. The
first one is based on cyclic phenomena in the propagation queue. The second one is an
extrapolation method: it tries to find a numeric equation satisfied by the propagation queue
and then solves it.
A first perspective is to detect other kinds of regularities and to exploit them.
A reliable transformation always has some intrinsic limitations; for example, logarithmic
sequences cannot be accelerated by extrapolation methods. However, in that case, the
cyclic phenomena simplification may improve the running time. Thus, combining different
reliable transformations to try to accumulate the advantages of each transformation may
be of high interest. Finally, a direction of research that could be fruitful comes from the
remark that algorithms are designed with efficiency and simplicity in mind only. Regularity
is never considered as an issue. Perhaps it is time to consider it as an issue, and to try to
make the existing algorithms more regular in order to exploit their new regularities.
Acknowledgements
We would like to thank Christian Bliek, Michel Rueher and Patrick Taillibert for their
constructive comments on an early draft of the paper, and Kathleen Callaway for a lot
of English corrections. This work has been partly supported by the Ecole des Mines de
Nantes.
--R
On Bernoulli's numerical solution of algebraic equations
Introduction to Interval Computations
Automatic generation of numerical redundancies for non-linear constraint solving
Applying interval arithmetic to real
CLP(intervals) revisited
Algorithmes d'Accélération de la Convergence: Étude Numérique
Derivation of extrapolation algorithms based on error estimates
Extrapolation Methods
A general extrapolation procedure revisited
Improved bounds on the complexity of kB-consistency
Constraint logic programming on numeric intervals
A note on partial consistencies over continuous domains solving techniques
Interlog 1.0: Guide d'utilisation
Constraint propagation with interval labels
Arc consistency for continuous variables
Local consistency for ternary numeric constraints
A sufficient condition for backtrack-bounded search
Consistances locales et transformations symboliques de contraintes d'intervalles
Global Optimization Using Interval Analysis
Consistency techniques for continuous constraints
Constraint reasoning based on interval arithmetic: The tolerance propagation approach
ILOG Solver 4.0
Continuous Problems
Computational Complexity and Feasibility of Data Processing and Interval Computations
Acceleration methods for numeric CSPs
Consistency techniques for numeric CSPs
Dynamic optimization of interval narrowing algorithms
Boosting the interval narrowing algorithm
Consistency in networks of relations
Interval Analysis
Interval Methods for Systems of Equations
A simple derivation of the Hansen-Bliek-Rohn-Ning-Kearfott enclosure for linear interval equations
Extending prolog with constraint arithmetic on real intervals
A constraints satisfaction approach to a circuit design problem
Computer Methods for the Range of Functions
Experiments using interval analysis for solving a circuit design problem
Hierarchical arc consistency applied to numeric constraint processing in logic programming
Solving polynomial systems using branch and prune approach
Modeling Language for Global Optimization
--TR
A sufficient condition for backtrack-bounded search
Constraint propagation with interval labels
Constraint reasoning based on interval arithmetic
Arc-consistency for continuous variables
CLP(intervals) revisited
A derivation of extrapolation algorithms based on error estimates
Solving Polynomial Systems Using a Branch and Prune Approach
Acceleration methods of numeric CSPc
Synthesizing constraint expressions
A Constraint Satisfaction Approach to a Circuit Design Problem
A Note on Partial Consistencies over Continuous Domains
--CTR
Yahia Lebbah , Claude Michel , Michel Rueher, Using constraint techniques for a safe and fast implementation of optimality-based reduction, Proceedings of the 2007 ACM symposium on Applied computing, March 11-15, 2007, Seoul, Korea
Yahia Lebbah , Claude Michel , Michel Rueher, A Rigorous Global Filtering Algorithm for Quadratic Constraints, Constraints, v.10 n.1, p.47-65, January 2005 | extrapolation methods;interval analysis;propagation;acceleration methods;filtering techniques;numeric constraint satisfaction problem;strong consistency;interval arithmetic;nonlinear equations;pruning |
604379 | A Simulation Study of Decoupled Vector Architectures. | Decoupling techniques can be applied to a vector processor, resulting in a large increase in performance of vectorizable programs. We simulate a selection of the Perfect Club and Specfp92 benchmark suites and compare their execution time on a conventional single port vector architecture with that of a decoupled vector architecture. Decoupling increases the performance by a factor greater than 1.4 for realistic memory latencies, and for an ideal memory system with zero latency, there is still a speedup of as much as 1.3. A significant portion of this paper is devoted to studying the tradeoffs involved in choosing a suitable size for the queues of the decoupled architecture. The hardware cost of the queues need not be large to achieve most of the performance advantages of decoupling. | Introduction
Recent years have witnessed an increasing gap between processor speed and memory
speed, which is due to two main reasons. First, technological improvements in cpu
speed have not been matched by similar improvements in memory chips. Second,
the instruction level parallelism available in recent processors has increased. Since
several instructions are being issued at the same processor cycle, the total amount
of data requested per cycle to the memory system is much higher. These two factors
have led to a situation where memory chips are on the order of 10 to a 100 times
slower than cpus and where the total execution time of a program can be greatly
dominated by average memory access time.
Current superscalar processors have been attacking the memory latency problem
through basically three main types of techniques: caching, multithreading and
decoupling (which, sometimes, may appear together). Cache-based superscalar processors
reduce the average memory access time by placing the working set of a program
in a faster level in the memory hierarchy. Software and hardware techniques
such as [5, 23] have been devised to prefetch data from high levels in the memory
hierarchy to lower levels (closer to the cpu) before the data is actually needed. On
top of that, program transformations such as loop blocking [16] have proven very
useful to fit the working set of a program into the cache. Recently, address and
data prediction have received much attention as a potential solution for indirectly masking
memory latency [21].
Multithreaded processors [1, 30] attack the memory latency problem by switching
between threads of computation so that the amount of exploitable parallelism increases, the probability of halting the cpu due to a hazard decreases, the occupation
of the functional units increases and the total throughput of the system is improved.
While each single thread still pays latency delays, the cpu is (presumably) never
idle thanks to this mixing of different threads of computation.
Decoupled scalar processors [27, 25, 18] have focused on numerical computation
and attack the memory latency problem by making the observation that the execution
of a program can be split into two different tasks: moving data in and out of the
processor and executing all arithmetic instructions that perform the program com-
putations. A decoupled processor typically has two independent processors (the
address processor and the computation processor) that perform these two tasks
asynchronously and that communicate through architectural queues. Latency is
hidden by the fact that usually the address processor is able to slip ahead of the
computation processor and start loading data that will be needed soon by the computation
processor. This excess data produced by the address processor is stored
in the queues, and stays there until it is retrieved by the computation processor.
Vector machines have traditionally tackled the latency problem by the use of long
vectors. Once a (memory) vector operation is started, it pays for some initial (potentially long) latency, but then it works on a long stream of elements and effectively
amortizes this latency across all the elements. Although vector machines have been
very successful during many years for certain types of numerical calculations, there
is still much room for improvement. Several studies in recent years [24, 8] show how
the performance achieved by vector architectures on real programs is far from the
theoretical peak performance of the machine. In [8] it is shown how the memory
port of a single-port vector computer was heavily underutilized even for programs
that were memory bound. It also shows how a vector processor could spend up to
50% of all its execution cycles waiting for data to come from memory.
Despite the need to improve the memory response time for vector architectures,
it is not possible to apply some of the hardware and software techniques used by
scalar processors because these techniques are either expensive or exhibit a poor
performance in a vector context. For example, caches and software pipelining are
two techniques that have been studied [17, 19, 28, 22] in the context of vector
processors but that have not been proved useful enough to be in widespread use in
current vector machines.
The conclusion is that in order to obtain full performance of a vector processor,
some additional mechanism has to be used to reduce the memory delays (coming from lack of bandwidth and long latencies) experienced by programs. Many
techniques can be borrowed from the superscalar microprocessor world. In this
paper we focus on decoupling, but we have also explored other alternatives such as
multithreading [12] and out-of-order execution [13].
The purpose of this paper is to show that using decoupling techniques in a vector
processor [11], the performance of vector programs can be greatly improved. We will
show how, even for an ideal memory system with zero latency, decoupling provides
a significant advantage over standard mode of operation. We will also present data
showing that for more realistic latencies, decoupled vector architectures perform
substantially better than non-decoupled vector architectures. Another benefit of
decoupling is that it also makes it possible to tolerate latencies inside the processor, such as
functional unit and register crossbar latencies.
This paper is organized as follows. Section 2 describes both the baseline and
decoupled architectures studied throughout this paper. In section 3 we discuss
our simulation environment and the benchmark programs used in the experiments
presented. Section 4 provides a background analysis of the performance of a traditional
vector machine. In section 5 we detail the performance of our decoupled
vector proposal. Finally, section 6 presents our conclusions and future lines of work.
2. Vector Architectures and Implementations
This study is based on a traditional vector processor and numerical applications,
primarily because of the maturity of compilers and the availability of benchmarks
and simulation tools. We feel that the general conclusions will extend to other vector
applications, however. The decoupled vector architecture we propose is modeled
after a Convex C3400. In this section we describe the base C3400 architecture and
implementation (henceforth, the reference architecture), and the decoupled vector
architecture (generically referred to as DVA).
The main implication of the choice of a C3400 is that this study is restricted
to the class of vector computers having one memory port and two functional units.
It is also important to point out that we used the output of the Convex compilers
to evaluate our decoupled architecture. This means that the proposal studied in
this paper is able to execute in a fully transparent manner an already existing
instruction set.
2.1. The Reference Architecture
The Convex C3400 [7] consists of a scalar unit and an independent vector unit (see
fig. 1). The scalar unit executes all instructions that involve scalar registers (A and S registers), and issues a maximum of one instruction per cycle. The vector unit
consists of two computation units (FU1 and FU2) and one memory accessing unit
(MEM). The FU2 unit is a general purpose arithmetic unit capable of executing all
vector instructions. The FU1 unit is a restricted functional unit that executes all
vector instructions except multiplication, division and square root. Both functional
units are fully pipelined. The vector unit has 8 vector registers which hold up
to 128 elements of 64 bits each. The eight vector registers are connected to the
functional units through a restricted crossbar. Pairs of vector registers are grouped
in a register bank and share two read ports and one write port that links them to
the functional units. The compiler is responsible for scheduling vector instructions
and allocating vector registers so that no port conflicts arise.
Figure 1. The reference vector architecture modeled after a Convex C3400.
2.2. The Decoupled Vector Architecture
The decoupled vector architecture we propose uses a fetch processor to split the
incoming, non-decoupled, instruction stream into three different decoupled streams
(see fig. 2). Each of these three streams goes to a different processor: the address
processor (AP), that performs all memory accesses on behalf of the other two
processors, the scalar processor (SP), that performs all scalar computations and the
vector processor (VP), that performs all vector computations. The three processors
communicate through a set of implementational queues and proceed independently.
This set of queues is akin to the implementational queues that can be found in
the floating point part of the R8000 microprocessor[15]. The main difference of this
decoupled architecture with previous scalar decoupled architectures such as the ZS-
1 [26], the MAP-200 [6], PIPE [14] or FOM [4], is that it has two computational
processors instead of just one. These two computation processors, the SP and the
VP, have been split due to the very different nature of the operands on which they
work (scalars and vectors, respectively).
The fetch processor fetches instructions from a sequential, non-decoupled instruction
stream and translates them into a decoupled version. The translation is such
that each processor can proceed independently and, yet, synchronizes through the
communication queues when needed. For example, when a memory instruction that
loads register v5 is fetched by the FP, it is translated into two pseudo-instructions:
a load instruction, which is sent to the AP, that will load data into the vector load
data queue (VLDQ, queue no. 1 in fig. 2), and a qmov instruction, sent to the
VP, that dictates a move operation between the VLDQ and the final destination
register v5.
Figure 2. The decoupled vector architecture studied in this paper. Queue names: (1) vector load data queue -VLDQ, (2) vector store data queue -VSDQ, (3) address load queue -ALQ, (4) address store queue -ASQ, (5) scalar load data queue -SLDQ, (6) scalar store data queue -SSDQ, (7/8) Scalar-Address Control Queues, (9) Vector-Address Control Queue, (10/11) Scalar-Vector Control Queues.
It is important to note that the qmov's generated by the FP are not "instructions" in the real sense, i.e., they do not belong to the programmer visible
instruction set. These qmov opcodes are hidden inside the implementation.
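A toy Python sketch of this splitting step is given below. The instruction kinds, queue names and pseudo-opcodes are illustrative assumptions; they only mirror the load/qmov example above and are not the actual Convex instruction encoding.

def dispatch(instr, ap_iq, sp_iq, vp_iq):
    """Split one architectural instruction into decoupled pseudo-instructions
    and push them onto the instruction queues of the AP, SP and VP."""
    kind = instr["kind"]
    if kind == "vector_load":                      # e.g. load a vector register
        ap_iq.append(("load_to_vldq", instr["addr"]))
        vp_iq.append(("qmov_vldq_to_vreg", instr["dest"]))
    elif kind == "vector_store":
        vp_iq.append(("qmov_vreg_to_vsdq", instr["src"]))
        ap_iq.append(("store_from_vsdq", instr["addr"]))
    elif kind == "vector_arith":
        vp_iq.append(("vector_op", instr))
    elif kind in ("scalar_load", "scalar_store"):  # goes through the scalar cache
        ap_iq.append(("scalar_access", instr))
        sp_iq.append(("qmov_scalar", instr))
    else:                                          # remaining scalar computation
        sp_iq.append(("scalar_op", instr))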
Note that the total hardware added to the original reference architecture shown
in figure 1 consists only of the communication queues and a private decode unit
for each one of the three processors. The resources inside each processor are the
same in the decoupled vector architecture and in the reference architecture. It
is worth noting, though, that while most queues added are scalar queues and,
therefore, require a small amount of extra area, the VLDQ and VSDQ hold full
vector registers (queues 2 and 3 in Fig. 2). That is, each slot in these queues
is equivalent to a normal vector register of 128 elements, thus requiring 1Kb of
storage space. One of the key points in this architecture will be to achieve good
performance with relatively few slots in these two queues.
The address processor performs all memory accesses, both scalar and vector, as
well as all address computations. Scalar memory accesses go first through a scalar
cache that holds only scalar data. Vector accesses do not go through the cache and
access main memory directly. There is only one pipelined port to access memory
that has to be shared by all memory accesses. The address processor inserts load
instructions into the Address Load Queue (ALQ) and store instructions into the
Address Store Queue (ASQ). Stores stay in the queue until their associated data
shows up either in the output queue of the VP (the vector store data queue -
VSDQ), or in the output queue of the SP (the scalar store data queue -SSDQ).
When either a load or a store becomes ready, i.e, it has no dependencies and its
associated data, if necessary, is present, it is sent over the address bus as soon as
it becomes available. In the case of having both a load and a store ready, the AP
always gives priority to loads.
To preserve the sequential semantics of a program, the address processor needs to
ensure a safe ordering between the memory instructions held in the ALQ and ASQ.
All memory accesses are processed in two steps: first, their associated "memory
region" is computed. Second, this region is used to disambiguate the memory
instruction against all previous memory instructions still being held in the address
queues of the AP. Using this disambiguation information, a dependency scoreboard
is maintained. This scoreboard ensures that (1) all loads are executed in-order, (2)
all stores are executed in-order and (3) loads can execute before older stores if their
associated memory regions do not overlap at all. When dependences are found,
the scoreboard guarantees that loads and stores will be performed in the original
program order so that correctness is guaranteed.
A "memory region" is defined by a 5-tuple: h@
are the start and end addresses, resp., of a consecutive region of bytes in
memory, and vl, vs, and sz are the vector length, vector stride and access granularity
needed by vector memory operations. The end address, @ 2
, is computed as @ 1
sz. For scalar memory accesses, vl is set to 1 and vs to 0. For the
special case of gathers and scatters, which can not be properly characterized by a
memory region, @ 1
is set to 0 and @ 2
is set to 2 so that the scoreboard will
find a dependence between a gather/scatter and all previous and future memory
instructions.
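The following Python sketch illustrates this region-based disambiguation test. The exact end-address computation (including the final "+ sz - 1" last-byte term) and the handling of negative strides are our assumptions; the paper only specifies the 5-tuple described above.

def region(start, vl, vs, sz):
    """Byte range [lo, hi] touched by a strided access whose first element is
    at address `start`; vs may be negative."""
    first_elem = start
    last_elem = start + (vl - 1) * vs * sz
    lo = min(first_elem, last_elem)
    hi = max(first_elem, last_elem) + sz - 1      # last byte touched (assumption)
    return (lo, hi)

def may_conflict(r1, r2):
    """Two accesses may depend on each other when their byte ranges overlap."""
    (a_lo, a_hi), (b_lo, b_hi) = r1, r2
    return not (a_hi < b_lo or b_hi < a_lo)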
The vector processor performs all vector computations. The main difference between
the VP and the reference architecture is that the VP has two functional units
dedicated to moving data in and out of the processor. These two units, the QMOV units, are able to move data from the VLDQ (filled by the AP) into the vector registers and to move data from the registers into the VSDQ (which will be drained by the AP, which sends its contents to memory). We have included two QMOV
units instead of one because otherwise the VP would be paying a high overhead in
some very common sequences of code, when compared to the reference architecture.
The set of control queues connecting the three processors, queues 7-11 in Fig. 2, is needed for those instructions that have mixed operands. The most common case
is in vector instructions, which can have a scalar register as a source operand (e.g., mul v0, s3 -> v5). Other cases include mixed A- and S-register instructions,
vector gathers that require an address vector to be sent to the AP, and vector
reductions that produce a scalar register as a result.
3. Methodology
3.1. Simulation Environment
To assess the performance benefits of decoupled vector architectures we have taken
a trace driven approach. The Perfect Club and Specfp92 programs have been
chosen as our benchmarks [3]. The tracing procedure is as follows: the Perfect
Club programs are compiled on a Convex C3480 [7] machine using the Fortran
compiler (version 8.0) at optimization level -O2 (which implies scalar optimizations
plus vectorization). Then the executables are processed using Dixie [9], a tool that
decomposes executables into basic blocks and then instruments the basic blocks
to produce four types of traces: a basic block trace, a trace of all values set into
the vector length register, a trace of all values set into the vector stride register
and a trace of all memory references (actually, a trace of the base address of all
memory references). Dixie instruments all basic blocks in the program, including
all library code. This is especially important since a number of Fortran intrinsic
routines (SIN, COS, EXP, etc.) are translated by the compiler into library calls.
These library routines are highly vectorized and tuned to the underlying architecture
and can represent a high fraction of all vector operations executed by the program.
Thus it is essential to capture their behavior in order to accurately model the
execution time of the programs.
Once the executables have been processed by Dixie, the modified executables are
run on the Convex machine. These runs produce the desired set of traces that accurately represent the execution of the programs. The traces are then fed to two different
simulators that we have developed: the first simulator is a model of the Convex
C34 architecture and is representative of single memory port vector computers. The
second simulator is an extension of the first, where we introduce decoupling. Using
these two cycle-by-cycle simulators, we gather all the data necessary to discuss the
performance benefits of decoupling.
3.2. The benchmark programs
Because we are interested in the benefits of decoupling for vector architectures,
we selected benchmark programs that are highly vectorizable (- 70%). From all
programs in the Perfect and Specfp92 benchmarks we chose the 10 programs that
achieve at least 70% vectorization. Table 1 presents some statistics for the selected
Perfect Club and Specfp92 programs. Column number 2 indicates to what suite
each program belongs. Column 3 presents the total number of memory accesses,
including vector and scalar and load and store accesses. Next column is the total
number of operations performed in vector mode. Column 5 is the number of scalar
instructions executed. The sixth column is the percentage of vectorization of each
program. We define the percentage of vectorization as the ratio between the number
of vector operations and the total number of operations performed by the program.
Finally column seven presents the average vector length used by vector instructions,
and is the ratio of vector operations over vector instructions.
Table 1. Basic operation counts for the Perfect Club and Specfp92 programs (Columns 3-5 are in millions).
Program    Suite   Mem Ops   Vect Ops   Scal Ins   % Vect   avg. VL
hydro2d    Spec    1785      2203       23         99.0     101
arc2d      Perf.   1959      2157
flo52      Perf.   706       551
su2cor     Spec    1561      1862       66         95.7     125
bdna       Perf.   795       889        128        86.9     81
trfd       Perf.   826       438        156        75.7     22
dyfesm     Perf.   502       298        108        74.7     21
The most important thing to remark from table 1 is that all our programs are
memory bound when run on the reference machine. If we take the column labeled "Vect Ops" and divide it by 2, we get the minimum number of cycles required to execute all vector computations on the two vector functional units available. Comparing now the column "Mem Ops" against the result of this division, we see that the bottleneck for all these programs is always the memory port. That is, the absolute
minimum execution time for each of these programs is determined by the total
amount of memory accesses it performs.
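For example, hydro2d executes 2203 million vector operations, which need at least 2203/2 ≈ 1102 million cycles on the two vector functional units, but its 1785 million memory accesses need at least 1785 million cycles on the single memory port, so the port is the limiting resource.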
This remark is worth keeping in mind, since, as following sections will show, even
if the memory port is the bottleneck for all programs, its usage is not always as
good as one would intuitively expect.
4. Bottlenecks in the Reference Architecture
First we present an analysis of the execution of the ten benchmark programs when
run through the reference architecture simulator.
Consider only the three vector functional units of our reference architecture (FU2,
FU1 and MEM). The machine state can be represented with a 3-tuple that represents
the individual state of each one of the three units at a given point in time. For
example, the 3-tuple ⟨FU2, FU1, MEM⟩ represents a state where all units are working, while ⟨ , , ⟩ represents a state where all vector units are idle. The execution time of a program can thus be split into eight possible states.
Figure 3 presents the splitting of the execution time into states for the ten
benchmark programs. We have plotted the time spent in each state for memory
latencies of 1, 20, 70, and 100 cycles. From this figure we can see that the
number of cycles where the programs proceed at peak floating point speed (the states in which both FU1 and FU2 are busy) is low. The number of cycles in these
states changes relatively little as the memory latency increases, so the fraction of fully used cycles decreases.
Figure 3. Functional unit usage for the reference architecture. Each bar represents the total execution time of a program for a given latency. Values on the x-axis represent memory latencies in cycles.
Memory latency has a high impact on total execution
time for the programs dyfesm, trfd and flo52, which have relatively small vector lengths. The effect of memory latency can be seen by noting the increase in cycles spent in the state ⟨ , , ⟩.
The sum of cycles corresponding to states where the MEM unit is idle is quite high
in all programs. These four states
correspond to cycles where the memory port could potentially be used to fetch
data from memory for future computations. Figure 4 presents the percentage of
these cycles over total execution time. At latency 70, the port idle time ranges
between 30% and 65% of total execution time. All 10 benchmark programs are
memory bound when run on a single port vector machine with two functional units.
Therefore, these unused memory cycles are not the result of a lack of load/store
work to be done.
Figure 4. Percentage of cycles where the memory port was idle, for 4 different memory latencies.
5. Performance of the DVA
In this section we present the performance of the decoupled vector architecture versus the reference architecture (REF). We first start by ignoring all latencies of the functional units inside the processor and concentrate on the study of the effects of main memory latency (sections 5.1-5.5). This study will determine the most
cost-effective parameters that achieve the highest performance. Then we proceed
to consider the effect of arithmetic functional unit and register crossbar latencies on execution time (section 5.6). We will first show that decoupling tolerates very
well memory latencies and is also useful for tolerating the smaller latencies inside
the processor.
We will start by defining a DVA architecture with infinite queues and no latency
delays -the Unbounded DVA, or UDVA for short- that we will compare to the
reference architecture. Then we will introduce limitations into the UDVA, such as
branch misprediction penalties, limited queue sizes and real functional unit latencies, step by step, to see the individual effect of each restriction. After all these steps
we will reach a realistic version of the DVA - the RDVA- that will be compared
against the REF and UDVA machines.
5.1. UDVA versus REF
The Unbounded DVA architecture (UDVA) is a version of the decoupled architecture
that has all of its queues set to a very large value (128 slots) and no latency
delays. Moreover, a perfect branch prediction model is assumed. The I-cache is
not modeled in any of the following experiments, since our previous data indicates
a very low pressure on the I-cache [10]. All arithmetic functional units, both scalar
and vector, have a 1 cycle latency. The vector register file read and write crossbars
have no latency and there is no startup penalty for vector instructions.
The benefits of decoupling can be seen in fig. 5. For each program we plot the
total execution time of the UDVA and the REF architectures when memory latency
is varied between 1 and 100 cycles.
Figure 5. UDVA versus Reference architecture for the benchmark programs.
In each graph we also show the minimum absolute execution time that can theoretically be achieved (curve "IDEAL", along the bottom of each graph). To compute the IDEAL execution time for a program we use the total number of cycles consumed by the most heavily used vector unit (FU1, FU2, or MEM). Thus, in IDEAL
we essentially eliminate all data and memory dependences from the program, and
consider performance limited only by the most saturated resource across the entire
execution.
The overall results suggest two important points. First, the DVA architecture
shows a clear speedup over the REF architecture even when memory latency is just
1 cycle. Even if there is no latency in the memory system, the decoupling produces
a similar effect as a prefetching technique, with the advantage that the AP knows
which data has to be loaded (no incorrect prefetches). The second important point
is that the slopes of the execution time curves for the reference and the decoupled
architectures are substantially different. This implies that decoupling tolerates long
memory delay much better than current vector architectures.
Figure 6. Speedup of the DVA over the Reference architecture for the benchmark programs.
Overall, decoupling is helping to minimize the number of cycles where the machine
is halted waiting for memory. Recall from section 4 that the execution time of the
program could be partitioned into eight different states. Decoupling greatly reduces
the cycles spent in the state ⟨ , , ⟩.
To summarize the speedups obtained, fig. 6 presents the speedup of the DVA over
the REF architecture for each particular value of memory latency. Speedups (at latency 100) range from 1.32 for TOMCATV to 1.70 for DYFESM.
5.2. Reducing IQ length
The first limitation we introduce into the UDVA is the reduction of the instruction
queues that feed the three computational processors (AP, SP, VP). In this section
we look at the slowdown experienced by the UDVA when the size of the APIQ, SPIQ
and VPIQ queues is reduced from 128 instructions to 32, 16, 8 and 4 instructions
only. In order to reduce the amount of simulation required, we have chosen to fix
the value of the memory latency parameter at 50 cycles. As we have seen in the
previous section, the UDVA tolerates very well a wide range of memory latencies.
Thus we expect this value of 50 cycles to be quite representative of the full 1-100
latency range.
The size of the instruction queues is very important since it gives an upper bound
on the occupation of all the queues in the system. For example, it determines the
maximum number of entries that we can have waiting in the load address queues.
Figure
7 presents the slowdown with respect to the UDVA for our ten benchmarks
when the three instruction queues are reduced to 32, 16, 8 and 4 slots. From fig. 7
we can see that the performance for 128-, 32- and 16-entry instruction queues is
virtually the same for all benchmarks. From these numbers, we decided to set the
IQ length to 16 entries for the rest of experiments presented in this paper. This size
is in line with the typical instruction queues found in current microprocessors [31].
Figure 7. Slowdown experienced by UDVA when reducing the IQ size.
Figure 8. Slowdown due to branch mispredictions for three models of speculation.
5.3. Effects of branch prediction
In this section we look at the negative effects introduced by branch mispredictions.
The branch prediction mechanism evaluated is a direct-mapped BTB holding for
each entry the branch target address and a 2-bit predictor (the predictor found
in [20]). We augmented the basic BTB mechanism with an 8-deep return stack
(akin to the one found in [2]).
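To make the prediction scheme above concrete, the following is a minimal C sketch of a direct-mapped BTB with a 2-bit saturating counter per entry and allocate-on-first-sight; the field names, the simple modulo indexing and the allocation policy are illustrative assumptions of ours, not details taken from the simulator or from [20].

#include <stdint.h>
#include <stdbool.h>

#define BTB_ENTRIES 64                 /* direct-mapped, as in the evaluation */

typedef struct {
    uint64_t tag;                      /* branch PC stored in the entry       */
    uint64_t target;                   /* predicted target address            */
    uint8_t  counter;                  /* 2-bit saturating counter, 0..3      */
    bool     valid;
} btb_entry;

static btb_entry btb[BTB_ENTRIES];

/* Look up a branch: returns true and fills *target if predicted taken. */
bool btb_predict(uint64_t pc, uint64_t *target)
{
    btb_entry *e = &btb[pc % BTB_ENTRIES];
    if (e->valid && e->tag == pc && e->counter >= 2) {   /* weakly/strongly taken */
        *target = e->target;
        return true;
    }
    return false;                      /* otherwise predict fall-through      */
}

/* Update on branch resolution. */
void btb_update(uint64_t pc, uint64_t actual_target, bool taken)
{
    btb_entry *e = &btb[pc % BTB_ENTRIES];
    if (!e->valid || e->tag != pc) {   /* allocate on first sight             */
        e->valid   = true;
        e->tag     = pc;
        e->counter = taken ? 2 : 1;    /* start in a weak state               */
    } else if (taken && e->counter < 3) {
        e->counter++;                  /* saturate at strongly taken          */
    } else if (!taken && e->counter > 0) {
        e->counter--;                  /* saturate at strongly not taken      */
    }
    if (taken)
        e->target = actual_target;
}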
We evaluated the accuracy of the branch predictor for a 64-entry BTB. The
accuracy varies widely across the set of benchmarks. Programs FLO52 and NASA7
have the worst misprediction rates (around 30%) while TOMCATV has less than
0.4% mispredicted branches. Nonetheless, the misprediction rate is rather high for
a set of programs that are considered to have an "easy" jumping pattern (numerical
codes tend to be dominated by DO-loops). This is due to the combination of two
facts. First, vectorization has reduced the absolute number of branches performed
by the programs in an unbalanced way: the number of easily-predictable loop
branches has been diminished by a factor that is proportional to the vector length
(as high as 128), while the difficult branches found in the remaining scalar portion
of the code remain essentially the same. Second, we are using a very small BTB
compared to what can be found in current superscalar microprocessors, where a
typical BTB can have up to 4096 entries [29].
Although the prediction accuracy is not very good, the impact of mispredicted
branches on total execution time is very small. Figure 8 presents the slowdown
14 ROGER ESPASA AND MATEO VALERO
due to mispredicted branches relative to the performance of the architecture from
section 5.2. Since the prediction accuracy was not very high, we tested the benefit
that could be obtained by being able to speculate across several branches. In
fig. 8 the bars labeled "u=1" correspond to an architecture that only allows one
unresolved branch. Bars labeled "u=2" and "u=3" correspond to being able to
speculate across 2 and 3 branches respectively.
A first observation is that the impact of mispredicted branches is rather low:
although FLO52 has a 30% misprediction rate, the total impact of those mispredicted
branches is below 0.5%. A second observation is that while speculating
across multiple branches provides some benefit, especially for DYFESM, its cost is
certainly not justified. The simplicity of having only one outstanding branch to be
resolved is a plus for vector architectures.
All the simulations in the following sections have been performed using a 64-entry
BTB and allowing only 1 unresolved branch.
5.4. Reducing the vector queues length
5.4.1. Vector Load Data Queue This section will look at the usage of the vector
load data queue. The goal is to determine a queue size that achieves almost the same
performance as the 128-slot queue used in the previous sections and yet minimizes
hardware costs as much as possible.
Figure 9 presents the distribution of busy slots in the VLDQ for the benchmark
programs. For each program we plot three distributions corresponding to three
different memory latency values. Each bar in the graphs represents the total number
of cycles during which the VLDQ had a certain number of busy slots. For example,
for trfd at latency 1, the VLDQ was completely empty (zero busy slots) for around
500 million cycles.
From fig. 9 we can see that it is not very common to use more than 6 slots. Except
for swm256 and tomcatv, 6 slots are enough to cover around 85-90% of all cycles.
When latency is increased from 1 cycle to 50 and 100 cycles, the graphs show a
shift of the occupation towards higher number of slots. As an example, consider
programs arc2d, nasa7 and su2cor. For 1 cycle memory latency, these programs
have typically 2-3 busy slots. When latency increases, these three programs show
an increase in the total usage of the VLDQ, and they typically use around 4-5 slots.
As expected, the longer the memory latency, the higher the number of busy slots,
since the memory system has more outstanding requests and, therefore, needs more
slots in the queue.
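The kind of occupancy bookkeeping discussed here can be illustrated with a small C sketch of a fixed-capacity load data queue that records, once per cycle, how many slots are busy; this is a schematic model under our own assumptions, not code from the trace-driven simulator.

#define VLDQ_SLOTS 128   /* capacity used in the unlimited-resource UDVA runs */

typedef struct {
    int count;                                   /* currently busy slots      */
    unsigned long busy_cycles[VLDQ_SLOTS + 1];   /* cycles seen per occupancy */
} vldq_model;

/* Record the occupancy once per simulated cycle (the distributions in
 * fig. 9 count exactly this kind of sample). */
void vldq_sample(vldq_model *q) { q->busy_cycles[q->count]++; }

/* Memory delivers a vector element: returns 0 if the queue is full and
 * the memory pipeline must stall. */
int vldq_push(vldq_model *q)
{
    if (q->count == VLDQ_SLOTS) return 0;
    q->count++;
    return 1;
}

/* The VP consumes the oldest entry: returns 0 if the queue is empty and
 * the VP must stall waiting for data. */
int vldq_pop(vldq_model *q)
{
    if (q->count == 0) return 0;
    q->count--;
    return 1;
}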
The execution impact of reducing the VLDQ size can be seen in fig. 10. As
expected from the data in fig. 9, reducing the queue size to 16 or 8 slots is not
noticeable for most programs. Going down to 4 slots affects mostly NASA7 and
BDNA, but the impact is less than 1%. Further reducing the VLDQ to 2 slots
would start to hurt performance, although not very much; the worst case would
again be NASA7, with around a 4% impact. As we have already discussed, 2 slots
is clearly a lower bound on the size of the VLDQ to accommodate most memory
bound loops. Reducing that queue to 1 slot would stop most of the decoupling
effect present in the architecture.
Figure 9. Busy slots in the VLDQ for the benchmark programs for three different memory latency values.
Looking at all the data presented in this section, we decided to pick a 4-slot
VLDQ. All following sections use this size for the VLDQ.
5.4.2. Vector Store Data Queue The usage of the vector store data queue
presents a very different pattern from that of the VLDQ. Recall that the AP
always tries to give priority to load operations over stores. This has the effect
of putting much more pressure on the VSDQ, which can, at some points, become
full (even with 128 slots!). This situation is not as unusual as it may seem. As
long as the AP encounters no dependences between a load and a store, and as long
as there are loads to dispatch, no stores will be retrieved from the VSDQ and sent
to memory. Thus the occupation of the VSDQ is much higher than that of the
VLDQ.
Figure 10. Slowdown due to reducing the VLDQ size (relative to section 5.3).
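The loads-over-stores policy described above can be summarized by a small decision function; the function name and its inputs are placeholders of our own, since the paper does not specify the AP's dispatch logic at this level of detail.

typedef enum { DO_LOAD, DO_STORE, DO_NOTHING } ap_action;

ap_action ap_dispatch(int loads_pending, int stores_pending,
                      int load_overlaps_pending_store)
{
    /* A pending load that aliases an older, not-yet-performed store forces
     * the AP to drain the store queue first so memory is up to date
     * (the spill-code case discussed in the text).                        */
    if (loads_pending && load_overlaps_pending_store && stores_pending)
        return DO_STORE;

    /* Otherwise loads always win, to keep the VP supplied with data.      */
    if (loads_pending)
        return DO_LOAD;
    if (stores_pending)
        return DO_STORE;
    return DO_NOTHING;
}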
Figure 11 presents the distribution of busy slots in the VSDQ for the benchmark
programs. As we did for the load queue, we plot three distributions corresponding
to three different memory latency values. Each bar in the graphs represents the
total number of cycles during which the VSDQ had a certain number of busy slots.
To make the plots clearer, we have pruned some of the graphs: next to the name of
each program we indicate the quantile of the data being shown. For example, the full
set of data is shown for nasa7 (q=100%) while the bars in the hydro2d graph only
present 95% of all available data (the rest of the data set was too small to be seen
on the plot). To compensate for this loss of information, each graph also includes
the maximum value that the X axis took for that particular program. Again, for
hydro2d, the graph shows that the maximum occupation of the VSDQ reached 118
slots, although the X-axis of the plot only goes up to 50.
Note that for 6 of the programs the queue was completely filled at some point
(128 full slots), although the most common occupation is considerably lower.
For the other 4 programs, the occupation of the queue is bounded. For bdna,
with a maximum occupation of 34 slots in the queue, and su2cor (maximum 23),
the bounding is mostly due to their high percentage of spill code. Each time a
vector load tries to recover a vector from the stack previously spilled by a store,
the AP detects a dependence and needs to update the contents of memory by
draining the queue. This heavily limits the number of old stores that are kept in
the VSDQ. Programs trfd and dyfesm are qualitatively different: these two programs
simply do not decouple very well. Program dyfesm has a recurrence that forces the
three main processors, the AP, the SP and the VP, to work in lock step, typically
allowing only a maximum of 1 full slot. Program trfd has at its core a triangular
matrix decomposition. The order in which the matrix is accessed makes each iteration
of the main loop dependent on some of the previous iterations, which causes
a lot of load-store dependences in the queues. These dependences are resolved, as
in the case of spill code, by draining the queue and updating memory. Thus, the
queue never reaches a large occupation.
Figure 11. Busy slots in the VSDQ for the benchmarks for three different memory latency values.
The execution impact of reducing the VSDQ size can be seen in fig. 12. The bars
show that the amount of storage in the VSDQ is not very important to performance.
This is mostly due to the fact that we are in a single memory port environment: no
matter how we reorder loads and stores among themselves, every single store has
to be performed anyway. Thus, sending a store to memory at the point where its
data is ready, or later, does not change the overall computation rate much.
From all the data presented in this section we selected the 4-slot VSDQ for all
following experiments.
Figure 12. Slowdown due to reducing the VSDQ size (relative to section 5.4.1).
Figure 13. Slowdown due to reducing the scalar queues size (relative to section 5.4.2).
5.5. Reducing the scalar queues length
In this section we will look at the impact of reducing the size of the various scalar
queues in the system. Looking back at fig. 2, we will reduce queues numbered
3-8 and 10-11 from 128 slots down to 16 slots. Queue number 9, the VACQ
(VP-to-AP control queue), will be reduced from 128 slots to just 1 slot; note that
this queue holds one full vector register used in gather/scatter operations. The
16-slot size is chosen because it is reasonably close to what modern out-of-order
superscalar processors have in their queues [31].
The impact of these reductions can be seen in fig. 13. Overall, using an 8-entry
queue for all the scalar queues is enough to sustain the same performance as the
128-entry queues. Even for small 2-entry queues, the slowdown is around 1.01 for
only 3 programs: dyfesm, bdna and nasa7. Nonetheless, it has to be borne in mind
that our programs are heavily vectorized; a small degradation in performance on
the scalar side is tempered by the small percentage of scalar code present in our
benchmarks. In order to make a safe decision we chose a 16-entry queue for all the
scalar queues present in the architecture.
Table 2. Latency parameters for the vector and scalar functional units.

    Parameter         Scalar (int/fp)   Vector
    vector startup           -             1
    read x-bar               -             2
    add                     1/2            6
    mul                     5/2            7
    logic/shift             1/2            4
    div                    34/9           20
    sqrt                   34/9           20
5.6. Effects of functional unit latencies
In this section we will look at the effects of latencies inside the computation processors
of our architecture. So far, all simulated models had all of their functional
units using a 1-cycle latency, and the vector register read/write crossbars were
modeled as if traversing them were free. This section proceeds in several steps.
First we add the latencies of the vector functional units; table 2 shows the values
chosen. In a second step, we add a penalty of 1 cycle of vector startup for each
vector operation. In a third step we add 2 cycles of vector read crossbar latency
and then 2 cycles of vector write crossbar latency. In the last step, we set the
latencies of the scalar units to those also shown in table 2.
Figure 14 shows, as a set of stacked bars, the degradation in performance as each
of the aforementioned effects is added. The bar at the bottom, labeled "vect.
lat", represents the slowdown relative to section 5.5. The following bar, labeled
"startup", is the slowdown with respect to the performance of the "vect. lat"
configuration, and similarly for each of the following bars. Thus, the total height
of each bar is the combined slowdown of all these effects.
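Reading the stacked bars this way, if T_0 is the execution time of the configuration from section 5.5 and T_i the time after adding the i-th effect, each segment corresponds to an incremental slowdown and the increments compose multiplicatively; this is our interpretation of the plot description, and for slowdowns this small the product is nearly the sum of the individual increments:

\[
s_i = \frac{T_i}{T_{i-1}}, \qquad \frac{T_{\mathrm{total}}}{T_0} = \prod_i s_i .
\]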
Figure 14 shows two different behaviors. For seven out of the ten programs, latencies
have a very small impact (below 5%). This is due to the fact that decoupling is
not only good for tolerating memory latency but, in general, helps cover up
the latencies inside the processor. On the other hand, two programs, trfd and
dyfesm, show slowdowns as bad as 1.11 and 1.15, respectively.
The behavior of these two programs is not surprising given what we already saw
in section 5.4.1. Both trfd and dyfesm have difficulties in decoupling because of
the inter-iteration dependences of trfd and the recurrences in dyfesm. As we saw,
both of them only achieve a very small occupation of the vector load data queue,
which indicates a poor degree of decoupling. If we couple this fact with the
relatively low vector lengths of trfd and dyfesm, we see that any cycle added to
the vector dependency graph typically enlarges the critical path of the program.
Figure 14. Slowdown due to modeling arithmetic unit latencies and vector pipeline crossbars (relative to section 5.5).
Going into the detailed breakdown, fig. 14 shows that the vector functional unit
latencies have the highest impact of all latencies added in this section. It is worth
noting, though, that the order in which the latencies are added might have some
impact on the relative importance of each individual category. Nonetheless, since
the vector unit latencies are the largest of the latencies added, and since the programs
are highly vectorized, they are the group that most likely will impact performance,
as fig. 14 confirms. The startup penalty is only visible in programs trfd and dyfesm,
where its impact is less than 1%. The vector register file read/write crossbar
latencies have an impact on all programs except, again, swm256 and tomcatv.
Typically both latencies have the same amount of impact, between 0.5 and 1
percentage points for the most vectorized codes and around 2-3 percentage points
in the less vectorized trfd and dyfesm. The scalar latencies have a low impact on
all programs, partly because they are shorter than the vector ones, partly because
of the small fraction of scalar code, and partly because scalar latencies are masked
under other vector latencies.
We decided to compare the impact of functional unit latencies in the reference
machine and in the DVA machine. To do so, we simulate a reference machine with
no latencies at all and a reference machine with the standard latencies, and compute
the resulting slowdowns. Then we compare these slowdowns to the slowdowns of
fig. 14, which we just presented. The result of the comparison can be seen in
fig. 15. The results show that in all cases the effect of functional unit latencies is
much worse in the in-order reference machine than in the decoupled machine. Since
decoupling introduces a form of dynamic scheduling, it can hide latencies that
were previously on the critical path by performing memory loads in advance.
Figure 15. Comparison of functional unit latency impact between the UDVA and REF machines.
5.7. RDVA versus REF
With the data presented in the last section, we have reached a realistic implementation
of the originally proposed UDVA. This realistic version will be referred to
as RDVA, and its main parameters are as follows: all instruction queues and scalar
queues are 16 entries long; the address queues in the AP are also 16 entries long;
the latencies used in the functional units and in the read/write crossbars of the
register file are those shown in table 2; the VLDQ and VSDQ each have 4 slots,
and the control queue connecting the VP and the AP has a single slot; the branch
prediction mechanism is a 64-entry BTB with at most 1 unresolved branch being
supported.
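For reference, the RDVA parameter set just listed can be collected in one place; this is simply a restatement of the values given in the text (including the 16-entry address queues assumed above) as a C structure, not code from the simulator.

typedef struct {
    int iq_entries;               /* APIQ/SPIQ/VPIQ: 16            */
    int scalar_q_entries;         /* scalar queues: 16             */
    int addr_q_entries;           /* AP address queues: 16         */
    int vldq_slots;               /* vector load data queue: 4     */
    int vsdq_slots;               /* vector store data queue: 4    */
    int vacq_slots;               /* VP-to-AP control queue: 1     */
    int btb_entries;              /* direct-mapped BTB: 64 entries */
    int max_unresolved_branches;  /* speculation depth: 1          */
} rdva_config;

static const rdva_config RDVA = { 16, 16, 16, 4, 4, 1, 64, 1 };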
In this section we re-plot a full-scale comparison of UDVA, RDVA and REF
for several latencies. Figure 16 presents the data for the three architectures when
memory latency is varied from 1 cycle to 100 cycles. For almost all programs,
the difference between the UDVA and RDVA is rather small, and their slopes are
relatively parallel. For swm256, the difference is almost zero. For programs hydro2d,
tomcatv, arc2d and su2cor, the slowdown of RDVA over UDVA is small
(1.029, 1.031, 1.037 and 1.044, respectively). Programs flo52, bdna and
nasa7 have a higher slowdown and, moreover, the slope of the RDVA performance
curve starts diverging from the UDVA at high values of latency. Finally,
dyfesm and trfd, as seen in previous sections, take a significant performance hit
when going from the UDVA to the RDVA.
6. Summary and Future Work
In this paper we have described a basic decoupled vector architecture (DVA) that
uses the principles of decoupling to hide most of the memory latency seen by vector
processors.
Figure 16. Comparison of REF, UDVA and RDVA execution times for several latencies (the IDEAL model is also shown for reference).
The DVA architecture shows a clear speedup over the REF architecture even when
memory latency is just 1 cycle. This speedup is due to the fact that the AP slips
ahead of the VP and loads data in advance, so that when the VP needs its input
operands they are (almost) always ready in the queues. Even if there is no latency
in the memory system, this "slipping" produces an effect similar to a prefetching
technique, with the advantage that the AP knows which data has to be loaded (no
incorrect prefetches). Thus, the partitioning of the program into separate tasks
helps exploit more parallelism between the AP and VP and translates into
an increase in performance, even in the absence of memory latency. Moreover,
as we increase latency, we see that the slopes of the execution time curves of
the benchmarks remain fairly stable, whereas the REF architecture is much
more sensitive to the increase in memory latency. When memory latency is set to
50 cycles, for example, speedups of the RDVA over the REF machine are in the
range 1.18-1.40, and when latency is increased to 100 cycles, speedups go as high
as 1.5.
We have seen that these speed improvements can be implemented with a reasonable
cost/performance tradeoff. Section 5.4 has shown that the lengths of the queues do
not need to be very large to allow the decoupling to take place. A vector load
queue of four slots is enough to achieve a high fraction of the maximum performance
obtainable with an infinite queue. Likewise, the vector store queue does not
need to be very large: our experiments varying the store queue length indicate
that a store queue of two elements achieves almost the same performance as one
with sixteen slots.
The ability to tolerate very large memory latencies will be critical in future high
performance computers. In order to reduce the cost of high performance SRAM
vector memory systems, they should be turned into SDRAM-based memory systems.
This change, unfortunately, can significantly increase memory latency, and this is
where decoupling can come to the rescue. As we have shown, latencies of up to 100
cycles can be gracefully tolerated, with a performance increase with respect
to a traditional, in-order machine. Moreover, although in this paper we only look
at the single processor case, the decoupling technique would also be very effective
in vector multiprocessors to help reduce the negative effect of conflicts in the
interconnection network and in the memory modules.
The simulation results presented in this paper indicate that vector architectures
can benefit from many of the techniques currently found in superscalar processors.
Here we have applied decoupling, but other alternatives are applying multithreaded
techniques to improve the memory port usage [12] and out-of-order execution together
with register renaming [13]. Currently we are pursuing the latter approach.
--R
Performance Tradeoffs in Multithreaded Processors.
The Perfect Club benchmarks: Effective performance evaluation of supercomput- ers
Organization and architecture tradeoffs in FOM.
A performance study of software and hardware data prefetching strategies.
Functionally parallel architectures for array processors.
CONVEX Architecture Reference Manual (C Series)
Quantitative analysis of vector code.
Dixie: a trace generation system for the C3480.
Instruction level characterization of the Perfect Club programs on a vector computer.
Decoupled vector architectures.
Multithreaded vector architectures.
PIPE: A VLSI Decoupled Architecture.
Optimizing for parallelism and data locality.
Cache performance in vector supercomputers.
Memory Latency Effects in Decoupled Architectures.
Software pipelining: An effective scheduling technique for VLIW machines.
Branch prediction strategies and branch target buffer design.
Value locality and load value prediction.
Vector register design for polycyclic vector scheduling.
Design and evaluation of a compiler algorithm for prefetching.
Explaining the gap between theoretical peak performance and real performance for supercomputer architectures.
Decoupled Access/Execute Computer Architectures.
A Simulation Study of Decoupled Architecture Computers.
Polycyclic vector scheduling vs. chaining on 1-port vector supercomputers
The design of the microarchitecture of UltraSPARC-I
Exploiting choice: Instruction fetch and issue on an implementable simultaneous multithreading processor.
The Mips R10000 Superscalar Microprocessor.
--TR
A simulation study of decoupled architecture computers
The ZS-1 central processor
Software pipelining: an effective scheduling technique for VLIW machines
Polycyclic Vector scheduling vs. Chaining on 1-Port Vector supercomputers
Optimizing for parallelism and data locality
Design and evaluation of a compiler algorithm for prefetching
Designing the TFP Microprocessor
A performance study of software and hardware data prefetching schemes
Cache performance in vector supercomputers
Explaining the gap between theoretical peak performance and real performance for supercomputer architectures
Out-of-order vector architectures
Vector register design for polycyclic vector scheduling
Decoupled access/execute computer architectures
The MIPS R10000 Superscalar Microprocessor
Memory Latency Effects in Decoupled Architectures
Performance Tradeoffs in Multithreaded Processors
Decoupled vector architectures
Multithreaded Vector Architectures
Quantitative analysis of vector code
--CTR
Mostafa I. Soliman , Stanislav G. Sedukhin, Matrix bidiagonalization: implementation and evaluation on the Trident processor, Neural, Parallel & Scientific Computations, v.11 n.4, p.395-422, December
Mostafa I. Soliman , Stanislav G. Sedukhin, Trident: a scalable architecture for scalar, vector, and matrix operations, Australian Computer Science Communications, v.24 n.3, p.91-99, January-February 2002 | vector architectures;decoupling;instruction-level parallelism;memory latency |
604394 | Application-Level Fault Tolerance as a Complement to System-Level Fault Tolerance. | As multiprocessor systems become more complex, their reliability will need to increase as well. In this paper we propose a novel technique which is applicable to a wide variety of distributed real-time systems, especially those exhibiting data parallelism. System-level fault tolerance involves reliability techniques incorporated within the system hardware and software whereas application-level fault tolerance involves reliability techniques incorporated within the application software. We assert that, for high reliability, a combination of system-level fault tolerance and application-level fault tolerance works best. In many systems, application-level fault tolerance can be used to bridge the gap when system-level fault tolerance alone does not provide the required reliability. We exemplify this with the RTHT target tracking benchmark and the ABF beamforming benchmark. | Introduction
In a large distributed real-time system, there is a high likelihood that at any given
time, some part of the system will exhibit faulty behavior. The ability to tolerate
this behavior must be an integral part of a real-time system. Associated with every
real-time application task is a deadline by which all calculations must be completed.
In order to ensure that deadlines are met, even in the presence of failures, fault
tolerance must be employed. In this paper we consider fault tolerance at two
separate levels, system-level and application-level.
System-Level Fault Tolerance encompasses redundancy and recovery actions within
the system hardware and software. While system hardware includes the computing
elements and the I/O (network) sub-system, the system software includes the operating
system and components such as the scheduling and allocation algorithms, check-
pointing, fault detection and recovery algorithms. For example, in the event of a
failed processing unit, the component of the system responsible for fault tolerance
would take care of rescheduling the task(s) which had been executing on the faulty
node, and restarting them on a good node from the previous checkpoint.
Application-Level Fault Tolerance encompasses redundancy and recovery actions
within the application software. Here various tasks of the application may communicate
in order to learn of faults and then provide recovery services, making use
of some data redundancy. In certain situations, we find that fault tolerance at the
application level can greatly augment the overall fault tolerance of the system. For
example, if a task's checkpoint is very large, application-level fault tolerance can
help mask a fault while the system is moving the large checkpoint and restarting
the task on another node.
N-Modular Redundancy is a well-known fault tolerance technique. A number of
identical copies of the software are run on separate machines, the output from all of
them is compared, and the majority decision is used [1]. This technique however,
involves a large amount of redundancy and is thus costly.
The recovery block approach combines elements of checkpointing and backup
alternatives to provide recovery from hard failures [2]. All tasks are replicated but
only a single copy of each task is active at any time. If a computer hosting an
active copy of a task fails, the backup is executed. The task may be completely
restarted (which increases the chances of a deadline miss) or else executed from
its most recent checkpoint [4]. The later option requires that the active copy of
the task periodically copy (checkpoint) its state to its backups. This can entail a
large amount of overhead, especially when the state information to be transferred
is large. Such is the case with the applications that we are dealing with.
Another common technique is the use of less precise (i.e., approximate) results
[3], obtained by operating on a much smaller data set, using the same algorithm. A
data set can be chosen such that a sufficiently accurate result can be obtained with a
greatly reduced execution time. A smaller data set is chosen either by prioritizing
the data set or by reducing the granularity. Examples of such applications are
target tracking and image processing, where it is better to have less precise results
on time, rather than precise results too late or not at all. Our recovery technique
caters to applications that exhibit data parallelism, involve a large data set, and
can make do with a less precise result for a short period of time.
Our approach makes use of facets of the recovery block technique and employs
reduced precision state information and results in order to tolerate faults. We
employ a certain degree of redundancy within each of the parallel processes. The
application as a whole is able to make use of that redundancy in the event of a
fault to ensure that the required level of reliability is achieved. We consider only
failures that render a process' results erroneous or inaccessible. In the case of such
a fault, the redundant element's less precise results are used instead of those from
the failed process. In this way, our technique can provide a high degree of reliability
with only a small computational overhead in certain applications.
Section 2 introduces the RTHT and ABF benchmarks that will be used to demonstrate
our technique. In Section 3 we describe in detail our application-level fault
tolerance technique. Section 4 analyzes the effectiveness of this technique when used
in conjunction with each of the benchmarks, and Section 5 concludes the paper.
2. The Benchmarks
Each of the benchmarks has the form shown in Figure 1. There are multiple, parallel
application processes, which are fed with input data from a source - in this case, a
source process which simulates a radar system or an array of sonar sensors. When
the parallel computations are complete, the results are output to a sink process,
simulating system display or actuators. Our technique is concerned with the ability
to withstand faults at the parallel processes.
Figure 1. Software architecture of both the RTHT and ABF benchmarks: a source process generates input data consisting of real points and random noise, the parallel application processes perform computations to track the targets or form beams, and a sink process collects the results.
2.1. The RTHT Target Tracking Benchmark
The Honeywell Real-Time Multi-Hypothesis Tracking (RTHT) Benchmark [6, 7],
is a general-purpose, parallel, target-tracking benchmark. The purpose of this
benchmark is to track a number of objects moving about in a two-dimensional
coordinate plane, using data from a radar system. The data is noisy, consisting of
false targets and clutter, along with the real targets. The original, non-fault-tolerant
application consists of two or more processes running in parallel, each working on
a distinct subset of the data from the radar. Periodically, frames of data arrive
from the radar, or source process in this case, and are split among the processes for
computation of hypotheses. Each possible track has an associated hypothesis which
includes a figure of likelihood, representing how likely it is to be a real track. A
history of the data points and a covariance matrix are used in generating up-to-date
likelihood values.
For every frame of radar data, each parallel process performs the following steps:
1) creation of new hypotheses for each new data point it receives; 2) extension of
existing hypotheses, making use of the new radar data and the existing covariance
matrices; 3) participation in a system-wide compilation, or ranking, of hypotheses,
led by a Root application process; and 4) merging of its own list of hypotheses with
the system-wide list that resulted from the compilation step. The deadline for one
frame's calculations is the arrival of the next frame.
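The per-frame structure just described can be sketched as follows; the types and helper routines are placeholders of our own, declared only to show the shape of one frame of processing, and are not the benchmark's real interfaces.

typedef struct hypothesis hypothesis;
typedef struct { int npoints; const double (*points)[2]; } radar_frame;
typedef struct { int nhyps; hypothesis *hyps; } process_state;

void create_hypothesis(process_state *st, const double point[2]);
void extend_hypothesis(process_state *st, int h, const radar_frame *f);
void compile_hypotheses(process_state *st);      /* collective ranking step */
void merge_with_compiled(process_state *st);

void rtht_frame(process_state *st, const radar_frame *frame)
{
    /* 1) Create a hypothesis for every new data point in our subset.       */
    for (int i = 0; i < frame->npoints; i++)
        create_hypothesis(st, frame->points[i]);

    /* 2) Extend existing hypotheses with the new data, updating the
     *    covariance matrix and likelihood of each.                         */
    for (int h = 0; h < st->nhyps; h++)
        extend_hypothesis(st, h, frame);

    /* 3) Participate in the system-wide compilation led by the Root
     *    process (hypotheses are sorted so the most likely go first).      */
    compile_hypotheses(st);

    /* 4) Merge the local list with the compiled system-wide list.          */
    merge_with_compiled(st);
}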
By evaluating the performance of the original, non-fault-tolerant, benchmark
when run in conjunction with our RAPIDS real-time system simulator [9], it became
apparent that despite the inherent system-level fault tolerance in the simulated
system, the benchmark still saw a drastic degradation of tracking accuracy as
the result of even a single faulty node. Even if the benchmark task was successfully
reassigned to a good node after the fault, the chances that it had already missed
a deadline were high. This was in part due to the overheads associated both with
moving the large process checkpoint over the network and with restarting such a
large process. Once the process had missed the deadline, it was unable to take part
in the compilation phase and had to start all over again and begin building its hypotheses
anew. This took time, and caused a temporary loss of tracking reliability
of up to five frames. Although better than a non-fault-tolerant system, in which
that process would simply have been lost, it was still not as reliable as desired.
We decided to address two points, in order to improve the performance of the
benchmark in the presence of faults: 1) The overhead involved with moving such a
large checkpoint and 2) A source of hypotheses for the process to start with after
restart.
Our measure of reliability is the number of real targets successfully tracked by
the application (within a sufficient degree of accuracy) as a fraction of the exact
number of real targets that should have been tracked. To simplify this calculation,
the number of targets is kept constant and no targets enter or leave the system
during the simulation.
2.2. The ABF Beam Forming Benchmark
The Adaptive Beam Forming (ABF) Benchmark [8] is a simulation of the real-time
process by which a submarine sonar system interprets the periodic data received
from a linear array of sensors. In particular, the goal is to distinguish signals from
noise and to precisely identify the direction from which a signal is arriving, across
a specied range of frequencies. In this implementation, the application receives
periodic samples of data as if from the linear sensor array. The data is generated
so that it contains four reference beams, or signals, arriving from distinct locations
in a 180-degree eld of view, along with random noise.
The application itself consists of several application processes, each attempting to
locate beams at a distinct subset of the specied frequency range. Frames of data
for each frequency are \scattered" periodically from the source process. Output,
in the form of one beam pattern per frequency, is \gathered" by the sink process.
Figure
2 depicts a typical beam pattern output, shown here at frame 18, frequency
250Hz, with reference beams at -20, -60, 20 and 60 degrees.
Each application process performs calculations according to the following loop of
pseudo-code, for each frame of input:

for_each ( frequency ) {
    Update dynamic weights.
    for_each ( direction of arrival ) {
        Search for signal, blocking out interference
        from other directions and frequencies.
    }
}
Figure 2. Typical beam pattern output (magnitude versus direction of arrival, in degrees).
For each frequency, the process first updates a set of weights that are dynamically
modified from frame to frame. Applying these weights to the input samples has the
effect of forming a beam which emphasizes the sound arriving from each direction.
The process searches in each possible direction (-90 to 90 degrees) for incoming
signals; the granularity of this direction is directly related to the number of sensors.
In addition, at the start of a run there is an initialization period in which the
weights are set to some initial values, and then 15 to 20 frames are necessary to
"learn" precisely where the beams are.
It is evident that this sort of application faces reliability problems similar to
those of the RTHT benchmark. If a processing element fails, all output for those
frequencies is lost during the down time, and when the lost task is finally replaced
by the system, it has to go through the startup period all over again. Here, too,
the data sets of these processes are very large, creating a considerable overhead if
checkpointing is employed. To avoid the delay associated with this overhead, to be
able to maintain full output during the fault, and to provide quick restart after the
fault, application-level fault tolerance must be employed.
We evaluate the quality of the ABF output with two tests applied to the resulting
beam pattern. In the Placement Test we check whether the direction of arrival of
the beam has been detected within a certain tolerance. In the Width Test the aim
is to determine how accurately the beam has been detected by measuring the width
of the beam, in degrees, at 3 dB down from the peak. A beam that passes both tests
is considered to be correctly detected.
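A minimal version of the width measurement could look like the following C sketch, which finds the peak of a beam pattern (in dB) and counts the contiguous grid points within 3 dB of it; the pass/fail tolerances of the actual benchmark are assumptions left out here.

/* Return the width of the main beam, in grid points of the angular
 * sampling, measured at 3 dB below the peak.  Convert to degrees by
 * multiplying by the angular grid spacing. */
int beam_width_3db(const double *pattern_db, int n)
{
    int peak = 0;
    for (int i = 1; i < n; i++)
        if (pattern_db[i] > pattern_db[peak])
            peak = i;

    double threshold = pattern_db[peak] - 3.0;   /* 3 dB down from the peak */

    int lo = peak, hi = peak;
    while (lo > 0     && pattern_db[lo - 1] >= threshold) lo--;
    while (hi < n - 1 && pattern_db[hi + 1] >= threshold) hi++;

    return hi - lo + 1;
}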
3. Implementation of Application-Level Fault Tolerance
Our technique uses redundancy in the form of extra work done by each process of
the application. Each process takes, in addition to its own distinct workload, some
portion of its neighbor's workload, as shown in Figure 3. The process then tracks
beams or targets for its own work and also covers part of its neighbor's, but
makes use of the redundant information only in case this neighbor becomes faulty.
We now explain briefly how the data set is divided, how the application might learn
of faults, and how it would recover from them.
Figure 3. Architecture of both benchmarks with application-level fault tolerance (each frame of data arrives at every node).
3.1. Division of Load
The extent of duplication between two neighboring nodes will greatly affect the level
of reliability which can be achieved. Duplication arises from the way we divide the
data set among the parallel processing nodes. First, each frame of data is divided
as evenly as possible among the nodes; the section of the process that takes on
this set of data is the primary task section, P_i. Then we assign each node n_i
some additional work: part of its neighbor n_{i-1}'s primary task. The section of
the process that takes on this set of data is the secondary task section, S_i. In other
words:
- The primary task section, P_i, refers to the calculations which node n_i carries
out as part of the original application.
- The secondary task section, S_i, refers to the calculations which node n_i carries
out as a backup for its neighbor n_{i-1}; the lowest-numbered node hosts the secondary
corresponding to the primary running on the highest-numbered node. The secondary
section S_i is kept in synchronization with the primary P_{i-1}.
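As an illustration only, the following sketch assigns each node an even primary slice of the data plus a secondary slice taken from the tail of its neighbor's slice, with wrap-around; in the real benchmarks the duplicated portion is chosen by hypothesis priority (RTHT) or by field-of-view windows and granularity (ABF) rather than by position, so the index arithmetic here is purely a placeholder.

typedef struct { int p_begin, p_end, s_begin, s_end; } work_assignment;

/* 'overlap' is the fraction of the neighbour's primary slice duplicated
 * in this node's secondary section.  The remainder of ndata/nnodes is
 * ignored here for brevity. */
work_assignment assign_work(int node, int nnodes, int ndata, double overlap)
{
    work_assignment w;
    int chunk = ndata / nnodes;
    int left  = (node + nnodes - 1) % nnodes;   /* neighbour n_{i-1}, wrapping */

    w.p_begin = node * chunk;                   /* primary slice P_i           */
    w.p_end   = w.p_begin + chunk;

    w.s_end   = left * chunk + chunk;           /* secondary S_i: a fraction   */
    w.s_begin = w.s_end - (int)(overlap * chunk);  /* of P_{i-1}               */
    return w;
}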
3.2. Detection of Faults
There are two ways in which fault detection information can reach the application
processes: either the system informs the application of a faulty node, or the
application detects the fault itself through timeouts at the phases where
communication is expected. The former typically incurs the cost of periodic
polling, while the latter could result in late detection of the fault. Although
the exact integration of application-level fault tolerance would vary depending on
the fault detection technique chosen, the effectiveness of our technique should not.
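A timeout-based check of the second kind might be written with standard MPI primitives roughly as follows; the tag, the busy-wait polling and the per-frame deadline value are our own assumptions, not part of the benchmark.

#include <mpi.h>

/* Wait for a contribution from a peer until the frame deadline.  Returns
 * 1 if a message is available (receive it normally afterwards), or 0 if
 * the deadline passed and the peer is presumed faulty. */
int wait_for_peer(int peer_rank, int tag, double deadline_seconds)
{
    double start = MPI_Wtime();
    int flag = 0;
    MPI_Status status;

    while (MPI_Wtime() - start < deadline_seconds) {
        MPI_Iprobe(peer_rank, tag, MPI_COMM_WORLD, &flag, &status);
        if (flag)
            return 1;
    }
    return 0;                   /* timeout: treat the peer as faulty */
}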
3.3. Fault Recovery
If, at a deadline prior to that of the frame, node n_i is discovered to be faulty and
unable to output any results, then node n_{i+1}, which is serving as its backup, will
send as output S_{i+1}'s data in place of the data that n_i is unable to supply. In
the meantime, the system will be working on replacing or restarting the process
that was interrupted by the fault. In fact, the system's job here is made easier by
the fact that if the process has to be restarted on another node, the process data
segment no longer needs to be moved. When the process is rescheduled, it will
make use of the information maintained by its secondary on its behalf in order to
pick up where it left o before the fault. This way, the application fault tolerance
is able to work in conjunction with the system fault tolerance. This will help even
in the case of transient faults, in that the application-level fault tolerance allows
more leeway to postpone the restarting of the process on another node, in the hope
that the fault will soon disappear.
3.4. Extension to a higher level of redundancy
Our technique guarantees the required reliability in the presence of one fault but
could also withstand two or more simultaneous failures depending on which nodes
are hit by the faults. For example, in a six-node system if the nodes running
processes 1, 3, and 5 fail, the technique would still be able to achieve the required
reliability. Of course, this is contingent on the assumption that the processes on
the faulty nodes are transferred to a safe node and restarted by the beginning of
the next frame.
3.5. Benchmark Integration Specifics
We next discuss specific details regarding the application of our technique to each
of the benchmarks.
3.5.1. RTHT benchmark In the RTHT Benchmark, the "unit of redundancy" is
the hypothesis. That is, each secondary task section creates and extends some
fraction of the total number of hypotheses created and extended by the process
for which it is secondary. The amount of secondary redundancy is expressed as a
percentage of the number of hypotheses extended by the primary.
Redundancy is implemented in the following way. At the beginning of each frame,
the source process broadcasts the input radar data, and hypotheses are created and
extended as before, with the exception that the secondary additionally extends a
percentage of those extended by the corresponding primary. The secondary section
is kept in synchronization with primary P_{i-1} via the compilation process, which
in this case is again a process-level broadcast communication, so that no extra
communication is necessary. If node n_i is discovered to be faulty and is unable to
participate in the compilation for that frame, then node n_{i+1}, which is serving as
its backup, will make use of S_{i+1}'s data in the compilation process in place of the
data that n_i is unable to supply.
When the process is rescheduled, it will make use of the hypotheses extended by
the secondary on its behalf so as to pick up where it left off. This information is
obtained from the secondary process by way of compilation: the newly rescheduled
process merely listens in on the compilation and copies those hypotheses
which have been extended by its secondary.
3.5.2. ABF benchmark There are two ways in which we have integrated application-level
fault tolerance with the ABF Benchmark. They differ in the manner in which
the secondary abbreviates the calculations of the primary so as to obtain a full set
of results. The methods are:
- The Limited Field of View (Limited FOV) Method, in which the secondary
looks for beams at every frequency, as the primary does, but searches only
a subsection of the primary's field of view (divided into one or more segments).
Ideally the secondary will place these "windows" at directions from which beams
are known to be arriving. We impose a minimum width on these windows, because
if an individual window is too narrow, the output could always
(perhaps erroneously) pass the width-based quality test described in Section 2.
The amount of redundancy is expressed as the percentage of the field of view
searched by the secondary.
- The Reduced Directional Granularity Method, in which the secondary looks for
beams at every frequency and in every direction, but with a reduced granularity
of direction. The amount of redundancy is expressed as a percentage of the
original granularity computed by the primary.
Both techniques serve to reduce the computational time of the secondary task set,
while maintaining useful system output. In addition, the two techniques may be
employed concurrently in order to further reduce the computational time required
by the secondary task.
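The two ways of abbreviating the secondary search can be sketched as two loop structures; the angular bounds, the step parameters and the search_direction routine are placeholders, not the benchmark's interfaces.

void search_direction(double angle);   /* placeholder for the per-angle search */

/* Reduced granularity: same field of view, coarser angular step.
 * fraction = 0.33 means the secondary evaluates roughly a third of the
 * directions, i.e. a step about three times the primary's. */
void secondary_reduced_granularity(double step_primary, double fraction)
{
    double step = step_primary / fraction;
    for (double angle = -90.0; angle <= 90.0; angle += step)
        search_direction(angle);
}

/* Limited field of view: full granularity, but only inside windows
 * centred on directions where beams are known to be arriving. */
void secondary_limited_fov(double step_primary,
                           const double *win_lo, const double *win_hi, int nwin)
{
    for (int w = 0; w < nwin; w++)
        for (double angle = win_lo[w]; angle <= win_hi[w]; angle += step_primary)
            search_direction(angle);
}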
To implement either variation of the technique, the input frame of data is scattered
a second time from the source to the application processes. This second scatter is
rotated, so that each process receives the input data of the process for which it is
a secondary. Each process first carries out its primary computational tasks, and
then carries out its secondary task. At the frame's deadline, if a process is detected
to be down, the sink will gather output from the non-faulty processes, including
the backup results from the process that is secondary to the one that is faulty. In
the event of an application process being restarted after a fault, it will receive the
current set of weights from its secondary in order to jump-start its calculations.
Some synchronization between primary and secondary is required in the Limited
FOV Method. It is a small, periodic communication in which either the sink process
or the primary itself tells the secondary at what frequencies and directions it is
detecting beams. Such synchronization is not necessary for the Reduced Granularity
Method.
4. Results
4.1. The RTHT Benchmark
When applied to the RTHT benchmark, we found that only a small amount of
redundancy between the primary and secondary sections is necessary to
provide a considerable amount of fault tolerance. Furthermore, the increase in
system resource requirements, even after including the overheads of the technique's
implementation, is minimal compared to that of other techniques achieving the
same amount of reliability. These points are demonstrated in Figures 4, 5, and
6. Each run contains 30 targets which remain in the system until the end of the
simulation (the 30th frame), as well as some number of false alarms. The case where
only system-level fault tolerance exists corresponds to the secondary extending 0%
of the primary hypotheses.
In Figure 4 we see the number of targets which are successfully tracked when we
have just two application processes and a fault occurs at frame 15 (in this case
the data also contained a number of false alarms in each frame). In this run,
15% redundancy allows us to track all of the real targets, despite the fault.
Figure 4. Tracking accuracy, in number of real targets tracked, for a given percentage of redundancy (the secondary extends 0%, 5%, 10%, or 15% of the primary hypotheses).
We can attribute the fact that a small amount of redundancy can have a great
effect on the tracking stability to the fact that the hypotheses which are being
extended by the secondary are the
each application process sorts its hypotheses, placing the most likely at the head of
the list for compilation. Thus, at the beginning of the next frame, each application
process and its secondary begin extending those hypotheses with the highest chance
of being real targets.
To refine this point, Figure 5 shows the average percentage of redundancy required
for a given number of application processors and a single fault, as before. The
amount required shows a gradual decrease as we add more processors. We can
attribute this to the fact that the chance of a single process containing a high
percentage of the real targets decreases as processors are added.
In addition, a proportionately small load is imposed on the processor by the
computation of the secondary task set, as seen in Figure 6. This can be attributed
to the fact that a hypothesis whose position and velocity are known precisely does
not take as much time to extend as those hypotheses which are less well known.
And since the most likely hypotheses are generally the most well known, and are
the hypotheses which the secondary extends, the amount of processor time taken
to execute the secondary task is proportionally much smaller.
Figure 5. Average minimum percentage of secondary overlap required to miss no targets despite one node being faulty, versus the number of application processors.
4.2. ABF Benchmark Results
When we integrate application-level fault tolerance with the ABF benchmark, we
find that only a small amount of redundancy is necessary to ensure complete masking
of single-frame faults. With either variation (the Reduced Granularity or the Limited
FOV method) we see that a secondary redundancy of 33% is adequate to provide
complete and accurate results in the faulty frame and the following frames (after
the faulty process is restarted). If we combine the two techniques, we see an even
further reduction in the computational effort imposed by the secondary in order
to mask the fault. We have not taken additional network overhead and/or latency
into account in these figures of overhead; they refer solely to computational overhead.
Network overhead will depend greatly on the medium used. In particular, a shared
medium would allow the secondary to "snoop" on the primary's input and output,
eliminating the need for additional communication.
All results were obtained by running simulations with 75 sensors and four reference
input beams for 50 frames. There are two application processors, and a fault
occurs in one of them at frame 30. Results are presented and discussed for three
redundancy methods: the Limited FOV method, the Reduced Granularity method
and a Combined method (a combination of the first two). The quality of the results
is assessed by totalling the number of beams that were tracked successfully. Here,
there are four input beams at each frequency and 32 frequencies, making 128
beams in all.
Figure 6. Ratio of the time taken to compute the secondary hypotheses to the time taken to compute the primary hypotheses, versus the percentage of secondary overlap.
As an example, Figure 7 presents the results for several runs of the
ABF benchmark while utilizing the Limited FOV redundancy method alone, with
a single processor fault occurring at frame 30 and lasting one frame. We see that a
30% overlap is adequate to preserve all beam information within the system despite
the loss of one processor in frame 30. We have tabulated the results for all three
methods in Table 1.
4.2.1. ABF Results: Limited FOV Alone As we see in Table 1, roughly 30% secondary
overlap is adequate to provide full masking of the fault. The computational
overhead imposed by the secondary is about 30%. In addition, Figure 8 shows the
rather linear increase in overhead as we increase the fraction of overlap.
Table 1. Amount of secondary overhead imposed by various redundancy methods, each of which is capable of fully masking a single fault.

    Redundancy Technique                    Secondary Overlap   Computational Overhead
    Reduced Granularity                           33%                   35%
    Limited FOV                                   30%                   30%
    Combined (30% FOV, 50% Granularity)           15%                   17%
Figure 7. The total number of beams correctly tracked in each frame, for the given levels of redundancy (0%, 10%, 20% and 30% secondary), for the Limited Field of View Method. A single process experiences a fault of duration one frame, at frame 30.
Associated with this technique, however, is a potential dependence on the number
of beams detected in the system, as described earlier. In order to ensure that the
width test applied to the output can fail, we impose a minimum window width. This
minimum width dictates that for a given amount of overlap, there is a maximum
number of windows in which the secondary may search for beams. If there are
more beams than the maximum number of windows then some may be missed by
the secondary search, depending on the direction of arrival. However, the system
designer can lessen the likelihood of this occurring by carefully choosing the amount
of overlap allotted, and tuning the criteria with which areas will be searched by the
secondary.
4.2.2. ABF Results: Reduced Granularity Alone Here, too, we see from Table 1
that operating the secondary at 33% of the granularity of the primary
results in complete masking of the fault, and that this imposes a 35% overhead on
the processing node. Figure 8 again shows a linear relationship between the computational
overhead and the overlap, and indicates that the overhead of the method
itself is a bit higher than that of the Limited FOV method. With the Reduced
Granularity method we see no dependence on the number of beams detected,
although beams could be missed if their peaks were within a few degrees of each
other and the granularity were very coarse.
Figure 8. The ratio of secondary to primary execution time for the variations of application-level fault tolerance integrated with the ABF Benchmark (Reduced Granularity, Limited FOV, and Limited FOV at 50% and 33% granularity), versus the percentage of secondary field-of-view overlap.
4.2.3. ABF Results: Combined methods When we combine these two techniques,
we see the greatest reduction in the computational overhead of the secondary task.
As shown in Table 1, a 30% field of view combined with 50% granularity maintains
tracking ability similar to that of either technique alone, yet cuts the computational
overhead nearly in half. This reduction is illustrated in Figure 8 by the lower two
curves, which represent the overhead imposed as we vary the field of view and make
use of 50% and 33% granularity, respectively.
5. Conclusions
A high degree of fault tolerance may be obtained with a minimal investment of
system resources in applications exhibiting data parallelism, such as the ABF and
RTHT Benchmarks. It is achieved through a combination of application-level and
system-level fault tolerance. A prioritized ordering within the data set, as in the
RTHT benchmark, or a reduced granularity, as in the ABF benchmark, is used to
decrease the computational overhead of our technique.
The processes in these benchmarks are very large, so moving a checkpoint
and restarting a task may take a significant amount of time. The application-level
fault tolerance is able to ensure that, despite the temporary loss of the task, the
required reliability is maintained.
Since the primary and secondary task sets are incorporated within a single application
process, the primary is always executed first and the secondary next. Once
the primary has completed, it may alert the scheduler, indicating that the secondary
need not be executed. It is useful, but not necessary, for the secondary to still be
executed, as this allows it to be better synchronized with its primary counterpart.
If a fault is detected, the priority of the secondary could be raised, to ensure that
it will complete without missing its deadline, and provide the necessary data for
compilation.
This technique is a substantial improvement over complete system duplication, in
that it does not require 100% system redundancy, but merely adds a small amount
of load to the existing system in achieving the same amount of fault tolerance. It
differs from the recovery block approach in that the secondary does not have to be
cold-started, but is ready for execution when a failure of the primary is detected. In
addition, the level of reliability may be varied by varying the amount of redundancy.
In order to integrate such application-level fault tolerance, the designer will need
first to determine how to prioritize the data set and/or reduce the granularity in
order to define the secondary's dataset. Second, the designer should choose mechanisms
by which the secondary gets the input data it needs, is able to output results
when necessary, and is able to communicate with the primary for synchronization
purposes. Naturally, some form of fault detection will have to be used as well. The
designer must carefully weigh the overheads imposed by the various methods of
achieving fault tolerance against the quality of results that may be obtained from each.
In conclusion, we believe that steps to integrate this technique into the application
should be taken right from the early stages of the design in order for this approach
to be most effective.
Acknowledgments
This effort was supported in part by the Defense Advanced Research Projects
Agency and the Air Force Research Laboratory, Air Force Materiel Command,
USAF, under agreement number F30602-96-1-0341, order E349. The government is
authorized to reproduce and distribute reprints for Governmental purposes notwithstanding
any copyright annotation thereon.
The views and conclusions contained herein are those of the authors and should
not be interpreted as necessarily representing the official policies or endorsements,
either expressed or implied, of the Defense Advanced Research Projects Agency,
Air Force Research Laboratory, or the U. S. Government.
--R
System Structure for Software Fault Tolerance.
Imprecise Computations.
Using Passive replicates in Delta-4 to Provide Dependable Distributed Computing
A Fault-Tolerant Scheduling Problem
Implementation and Results of Hypothesis Testing from the C 3 I Parallel Benchmark Suite.
Honeywell Technology Center.
RAPIDS: A Simulator Testbed for Distributed Real-Time Systems
--TR
A fault-tolerant scheduling problem
Reliable computer systems (2nd ed.)
Implementation and Results of Hypothesis Testing from the C3I Parallel Benchmark Suite
--CTR
Osman S. Unsal , Israel Koren , C. Mani Krishna, Towards energy-aware software-based fault tolerance in real-time systems, Proceedings of the 2002 international symposium on Low power electronics and design, August 12-14, 2002, Monterey, California, USA | checkpointing;distributed real-time systems;target tracking;imprecise computation;fault tolerance;beam forming |
604398 | High Performance Computations for Large Scale Simulations of Subsurface Multiphase Fluid and Heat Flow. | TOUGH2 is a widely used reservoir simulator for solving subsurface flow related problems such as nuclear waste geologic isolation, environmental remediation of soil and groundwater contamination, and geothermal reservoir engineering. It solves a set of coupled mass and energy balance equations using a finite volume method. This contribution presents the design and analysis of a parallel version of TOUGH2. The parallel implementation first partitions the unstructured computational domain. For each time step, a set of coupled non-linear equations is solved with Newton iteration. In each Newton step, a Jacobian matrix is calculated and an ill-conditioned non-symmetric linear system is solved using a preconditioned iterative solver. Communication is required for convergence tests and data exchange across partitioning borders. Parallel performance results on Cray T3E-900 are presented for two real application problems arising in the Yucca Mountain nuclear waste site study. The execution time is reduced from 7504 seconds on two processors to 126 seconds on 128 processors for a 2D problem involving 52,752 equations. For a larger 3D problem with 293,928 equations the time decreases from 10,055 seconds on 16 processors to 329 seconds on 512 processors. | Introduction
Subsurface flow related problems touch many important areas in today's society, such as natural resource development, nuclear waste underground storage, environmental remediation of groundwater contamination, and geothermal reservoir engineering. Because of the complexity of the model domains and physical processes involved, numerical simulations play vital roles in the solutions of these problems.
This contribution presents the design and analysis of a parallel implementation
of the widely used TOUGH2 software package [9, 10] for
numerical simulation of flow and transport in porous and fractured
media. The contribution includes descriptions of algorithms and methods
used in the parallel implementation and performance evaluation
(Present address: Department of Computing Science and High Performance Computing Center North, Umeå University, SE-901 87 Umeå, Sweden.)
for parallel simulations with up to 512 processors on a Cray T3E-900
on two real application problems. Although the implementation and
analysis is made on Cray T3E, the use of the standard Fortran 77
programming language and the MPI message passing interface makes
the software portable to any platform where Fortran 77 and MPI are
available.
The serial version of TOUGH2 (Transport Of Unsaturated Groundwater and Heat version 2) is now being used by over 150 organizations
in more than 20 countries (see [11] for some examples). The major
application areas include geothermal reservoir simulation, environmental
remediation, and nuclear waste isolation. TOUGH2 is one of the
official codes used in the US Department of Energy's civilian nuclear
waste management for the evaluation of the Yucca Mountain site as a
repository for nuclear wastes. In this context arise the largest and most demanding applications for TOUGH2 so far. Scientists at Lawrence
Berkeley National Laboratory are currently developing a 3D flow model
of the Yucca Mountain site, involving computational grids of 10^5 or more blocks, and related coupled equations of water and gas flow,
heat transfer and radionuclide migration in subsurface [3]. Considerably
larger and more difficult applications are anticipated in the near future,
with the analysis of solute transport, with ever increasing demands on
spatial resolution and a comprehensive description of complex geological, physical and chemical processes. High performance capability of
the TOUGH2 code is essential for these applications.
Some early results from this project were presented in [5].
2. The TOUGH2 Simulation
The TOUGH2 simulation package solves mass and energy balance equations
that describe fluid and heat flow in general multiphase, multicomponent
systems. The fundamental balance equations have the following form:
d/dt \int_V M^(k) dV = \int_S F^(k) \cdot n dS + \int_V q^(k) dV,
where the integration is over an arbitrary volume V , which is bounded
by the surface S. Here M (k) denotes mass for the k-th component,
(water, gas, heat, etc), F (k) is the flux of fluids and heat through the
surface, and q (k) is source or sink inside V . This is a general form. All
flow and mass parameters can be arbitrary non-linear functions of the
primary thermodynamic variables, such as density, pressure, saturation,
etc.
Given a computational geometry, space is discretized into many
small volume blocks. The integral on each block becomes a variable; this
leads naturally to the finite volume method, resulting in the following
ordinary differential equations:
d M_n^(k) / dt = (1 / V_n) \sum_m A_nm F_nm^(k) + q_n^(k),
where V_n is the volume of the block n, A_nm is the interface area bordering between blocks n and m, and F_nm is the flow between them.
Note that flow terms usually contain spatial derivatives, which are
replaced by a simple difference between the variables defined on blocks n and m, divided by the distance between the block centers. See Figure 1
for an illustration. On the left-hand side, a 3-dimensional grid block is
illustrated with arrows indicating flow through interface areas between neighboring grid blocks. On the right-hand side, two neighboring blocks
m and n are illustrated by a 2-dimensional picture. Here, each block
center is marked by a cross. Also included are the variables V_m and V_n for the volumes and D_m and D_n for the distances between the grid block centers
and the interface area.
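As an illustrative sketch of this replacement (the exact interface weighting used in the code may differ), the spatial derivative of a quantity \psi across the interface between blocks n and m is approximated by
(\psi_m - \psi_n) / (D_n + D_m),
so that each flux F_nm becomes an algebraic function of the variables defined at the two block centers.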
Figure 1. Space discretization and geometry data.
Time is implicitly discretized as a first order difference equation:
M_n^(k)(x^(t+\Deltat)) - M_n^(k)(x^(t)) = (\Deltat / V_n) ( \sum_m A_nm F_nm^(k)(x^(t+\Deltat)) + V_n q_n^(k)(x^(t+\Deltat)) ),
where the vector x (t) consists of prime variables at time t. Flow and
source/sink terms on the right hand side are evaluated at t + \Deltat for
Initialization and setup
do Time step advance
  do Newton iteration
    Calculate the Jacobian matrix
    Solve linear system
  end do
end do
Output
Figure 2. Sketch of main loops for the TOUGH2 simulation.
numerical stability for the multi-phase problems. This leads to coupled
nonlinear algebraic equations, which are solved using Newton's method.
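As a brief sketch in this notation (the convergence control itself is described in Section 3), writing the discretized balance equations as a residual vector R(x) = 0 in the primary variables x, each Newton iteration solves
J \delta x = -R(x), followed by the update x \leftarrow x + \delta x,
where J = \partial R / \partial x is the Jacobian matrix that is assembled numerically in every Newton step and \delta x is the Newton correction.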
3. Computational Procedure
The main solution procedures can be schematically outlined as in Figure
2.
After reading data and setting up the problem, the time consuming
parts are the main loops for time stepping, Newton iteration, and the
iterative linear solver. At each time step, the nonlinear discretized coupled
algebraic equations are solved with the Newton method. Within
each Newton iteration, the Jacobian matrix is first calculated by numerical
differentiation. The implicit system of linear equations is then
solved using a sparse linear solver with preconditioning. After several
Newton iterations, the convergence is checked by a control parameter,
which measures the maximum component of the residual in the Newton
iterations. If the Newton iterations converge, the time will advance one
more time step, and the process repeats until the pre-defined total time
is reached.
If the Newton procedure does not converge within a preset maximum number of Newton iterations, the current time step is reduced (usually by half) and
the Newton procedure is tried for the reduced time step. If converged,
the time will advance; otherwise, the time step is further reduced and another
round of Newton iteration follows. This procedure is repeated
until convergence in the Newton iteration is reached.
The system of linear equations is usually very ill-conditioned, and
requires very robust solvers. The dynamically adjusted time step size is
the key to overcoming the combination of possible convergence problems
for the Newton iteration and the linear solver. For this highly dynamic
system, the trajectory is very sensitive to variations in the convergence
parameters.
Computationally, the major part (about 65%) of the execution time
is spent on solving the linear systems, and the second major part (about
30%) is the assembly of the Jacobian matrix.
4. Designing the Parallel Implementation
The aim of this work is to develop a parallel prototype of TOUGH2,
and to demonstrate its ability to efficiently solve problems significantly
larger than problems that have previously been solved using the serial
version of the software. The problems should be larger both in the
number of blocks and the number of equations per block. The target
computer system for this prototype version of the parallel TOUGH2
is the 696 processor Cray T3E-900 at NERSC, Lawrence Berkeley
National Laboratory.
In the following sections, we give an overview of the design of the
main steps, including grid partitioning, grid block reordering, assembly
of the Jacobian matrix, and solving the linear system, as well as some
further details about the parallel implementation.
4.1. Grid Partitioning and Grid Block Reordering
Given a finite domain as described in Section 2, we will in the following
consider the dual mesh (or grid), obtained by representing each block
(or volume element) by its centroid and by representing the interfaces
between blocks by connections. (The words blocks and connections are
used in consistency with the original TOUGH2 documentation [10].)
The physical properties for blocks and their interfaces are represented
by data associated with blocks and connections, respectively.
In TOUGH2 the computational domain is defined by the set of all
connections given as input data. From this information, an adjacency
matrix is constructed, i.e., a matrix with a non-zero entry for each
element (i; j) where there is a connection between blocks i and j. In
the current implementation the value 1 is always used for non-zero
elements, but different weights may be used. The adjacency matrix is
stored in a compressed row format, called CRS format, which is a slight
modification of the Harwell-Boeing format. See, e.g., [2] for descriptions
of CRS and Harwell-Boeing formats.
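As a small, self-contained illustration (a hypothetical four-block grid with connections 1-2, 2-3, 3-4 and 4-1, not one of the actual test problems), the CRS adjacency structure consists of a pointer array and a list of neighbouring block indices:
      program crsdemo
c     Hypothetical 4-block grid with connections 1-2, 2-3, 3-4 and 4-1,
c     stored in compressed row (CRS) form: xadj(i) points to the start
c     of block i's neighbour list in adjncy (1-based indexing).
      integer xadj(5), adjncy(8), i, j
      data xadj   / 1, 3, 5, 7, 9 /
      data adjncy / 2, 4, 1, 3, 2, 4, 1, 3 /
      do 10 i = 1, 4
         write(*,*) 'block', i, ' neighbours:',
     &        (adjncy(j), j = xadj(i), xadj(i+1)-1)
   10 continue
c     These two arrays (here with unit edge weights) are what is handed
c     to the METIS 4.0 partitioning routines; see the METIS manual for
c     the exact argument lists.
      end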
The actual partitioning of the grid into p almost equal-sized parts is
performed using three different partitioning algorithms, implemented
in the METIS software package version 4.0 [8]. The three algorithms are
here denoted the K-way, the VK-way and the Recursive partitioning
algorithm, in consistency with the METIS documentation.
K-way is a multilevel version of a traditional graph partitioning algorithm that minimizes the number of edges that straddle the partitions. VK-way is a modification of K-way that instead minimizes the actual total communication volume. Recursive is a recursive bisection algorithm whose objective is to minimize the number of edges cut.
After partitioning the grid on the processors, the blocks (or more
specifically, the vector elements and matrix rows associated with the
blocks) are reordered by each processor to a local ordering. The blocks
for which a processor computes the results are denoted the update set
of that processor. The update set can be further partitioned into the
internal set and the border set. The border set consists of blocks with
an edge to a block assigned to another processor and the internal set
consists of all other blocks in the update set. Blocks not included in
the update set but needed (read only) during the computations defines
the external set.
Figure 3 illustrates how the blocks can be distributed over the processors. (The vertices of the graph represent blocks and the edges
represent connections, i.e., interface areas between pairs of blocks.)
Table I shows how the blocks are classified in the update and the
external sets and how the update sets are further divided into internal
and border sets. In the table, the elements are placed in local order and
the global numbering illustrates the reordering.
Figure 3. A grid partitioning on 3 processors.
Table I. Example of block distribution and local ordering for the Internal, Border, and External sets (written as Internal | Border || External).
Processor 0: (7, 11 | 8, 12 || 1, ..., 13)
Processor 1: (2, 3 | ...)
Processor 2: (6, 14 | 5, 10, 13 || 12, 4, ...)
In order to facilitate the communication of elements corresponding
to border/external blocks, the local renumbering of the nodes is made
in a particular way. All blocks in the update set precede the blocks
in the external set, and in the update set, all internal blocks precede
the border blocks. Finally, the external blocks are ordered internally
with blocks assigned to a specific processor placed consecutively. One
possible ordering is given as an example in Table I.
For processor 0 in this example, the grid blocks numbered 7 and 11
are internal blocks, i.e., these blocks are updated by processor 0 and
there are no dependencies between these blocks and blocks assigned
to other processors. The grid blocks 8 and 12 are border blocks for
processor 0, i.e., the blocks are updated by processor 0 but there are
dependencies to blocks assigned to other processors. Finally, blocks 1,
and 13 are external blocks for processor 0, i.e., these blocks are
not updated by processor 0 but data associated with these blocks are
needed read-only during the computations. The amount of data that
a processor is to send and receive during the computations are approximately
proportional to the number of border and external blocks,
respectively.
The consecutive ordering of the external blocks that reside on each
processor makes it possible to receive data corresponding to these
blocks into appropriate vectors without use of buffers and with no need
for further reordering, provided that the sending processor has access
to the ordering information. However, it is not possible in general to
order the border blocks so that transformations can be avoided when
sending, basically because some blocks in the border set may have to
be sent to more than one processor.
4.2. Jacobian Matrix Calculations
A new Jacobian matrix is calculated once for each Newton step, i.e.,
several times for each Time step of the algorithm. In the parallel algorithm, each processor is responsible for computing the rows of the
Jacobian matrix that correspond to blocks in the processor's update
set. All derivatives are computed numerically.
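As a sketch of this numerical differentiation (the increments correspond to the DELX vector described in Section 4.4), each Jacobian entry is approximated by a forward difference,
J_ij = \partial R_i / \partial x_j \approx ( R_i(x + \delta_j e_j) - R_i(x) ) / \delta_j,
where e_j is the j-th unit vector and \delta_j is a small increment; each processor evaluates these differences only for the rows in its update set.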
The Jacobian matrix is stored in the Distributed Variable Block
Row format (DVBR) [7]. All matrix blocks are stored row wise, with
the diagonal blocks stored first in each block row. The scalar elements
of each matrix block are stored in column major order. The use of
dense matrix blocks enables the use of dense linear algebra software, e.g., optimized level 2 (and level 3) routines, for the subproblems. The DVBR
format also allows for a variable number of equations per block.
Computation of the elements in the Jacobian matrix is basically performed
in two phases. The first phase consists of computations relating
to individual blocks. At the beginning of this phase, each processor
already holds the information necessary to perform these calculations.
The second phase includes all computations relating to interface quantities, i.e., calculations using variables corresponding to pairs of blocks.
Before performing these computations, exchange of relevant variables
is required. For a number of variables, each processor sends elements
corresponding to border blocks to appropriate processors, and it receives
elements corresponding to external blocks.
4.3. Linear Systems
The non-symmetric linear systems to be solved are generally very ill-conditioned
and difficult to solve. Therefore, the parallel implementation
of TOUGH2 is made so that different iterative solvers and preconditioners can easily be tested. All results presented here have been obtained
using the stabilized bi-conjugate gradient method (BICGSTAB)
[14] in the Aztec software package [7], with 3 \Theta 3 Block Jacobi scaling
and a domain decomposition based preconditioner with possibly overlapping
subdomains, i.e., Additive Schwarz (see, e.g., [13]), using the
ILUT [12] incomplete LU factorization.
The domain decomposition based procedure can be performed with different levels of overlapping, and in the case of zero overlap the procedure turns into another variant of a Block Jacobi preconditioner. In order to distinguish the 3 \Theta 3 Block Jacobi scaling from the full subdomain Block Jacobi scaling obtained by choosing zero overlap in the domain decomposition preconditioning procedure, we will refer to the former
as the Block Jacobi scaling and the latter as the domain decomposition
based preconditioner, though both are of course preconditioners.
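A minimal sketch of the distinction (with J denoting the Jacobian partitioned into 3 \Theta 3 blocks J_nm, one block row per grid block): the 3 \Theta 3 Block Jacobi scaling replaces the system J x = b by
D^{-1} J x = D^{-1} b, with D = blockdiag(J_11, J_22, ...),
whereas the full subdomain variant instead takes D to be the block diagonal matrix built from the entire local submatrices, one block per processor.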
As an illustration of the difficulties arising in these linear systems,
we would like to mention a very small problem from the Yucca Mountain
simulations mentioned in the Introduction. This non-symmetric
problem includes 45 blocks, 3 equations per block, and 64 connections.
When solving the linear system, the Jacobian matrix is of size 135 \Theta 135
with 1557 non-zero elements. For the first Jacobian generated (in the
first Newton step of the first Time step), i.e., the matrix involved in
the first linear system to be solved, the largest and smallest singular
values are 2.48 \Theta 10^32 and 2.27 \Theta 10^-12, respectively, giving the condition number 1.1 \Theta 10^44.
By applying block Jacobi scaling, where each block row is multiplied
by the inverse of its 3 \Theta 3 diagonal block, the condition number is
significantly reduced. The scaling reduces the largest singular value
to 7.69 \Theta 10^3 and the smallest is increased to 9.83 \Theta 10^-5, altogether reducing the condition number to 7.8 \Theta 10^7. This is, however, still an
ill-conditioned problem. Therefore, the domain decomposition based
preconditioner with incomplete LU factorization mentioned above is
applied after the block Jacobi scaling. This procedure has been shown to
be absolutely vital for convergence on problems that are significantly
larger.
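For reference, the condition numbers quoted above are simply the ratios of the singular values:
\kappa = \sigma_max / \sigma_min: 2.48 \Theta 10^32 / 2.27 \Theta 10^-12 \approx 1.1 \Theta 10^44 (unscaled), and 7.69 \Theta 10^3 / 9.83 \Theta 10^-5 \approx 7.8 \Theta 10^7 (after the block Jacobi scaling).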
4.4. Parallel Implementation
In this section, we outline the parallel implementation by describing the
major steps in some important routines. In all, the parallel TOUGH2
includes about 20,000 lines of Fortran code (excluding the METIS and
Aztec packages) in numerous subroutines using MPI for message passing
[6]. However, in order to understand the main issues in the parallel
implementation, it is sufficient to focus on a couple of routines. Of
course, several other routines are also modified compared to the serial
version of the software, but these details would only be distracting.
Cycit
Initially, processor 0 reads all data describing the problem to be solved,
essentially in the same way as in the serial version of the software. Then,
all processors call the routine Cycit which contains the main loops for
time stepping and Newton iterations. This routine also initiates the
grid partitioning and data distribution. The partitioning described in
Section 4.1 defines how the input data should be distributed on the
processors. The distribution is performed in several routines called from
Cycit.
There are five categories of data to be distributed and possibly
reordered. Vectors with elements corresponding to grid blocks are distributed
according to the grid partitioning and reordered to the local
order with Internal, Border, and External elements as described in
Section 4.1. Vectors with elements corresponding to connections are
distributed and adjusted to the local grid block numbering after each
processor has determined which connections are involved in its own local
partition. Vectors with elements corresponding to sinks and sources
are replicated in full before each processor extracts and reorders the
parts needed. There are in addition a number of scalars and small
vectors and matrices that are fully replicated, i.e., data structures whose sizes do not depend on the number of grid blocks or connections.
Finally, processor 0 constructs the data structure for storing the Jacobian
matrix and distributes appropriate parts to the other processors.
This includes all integer vectors defining the matrix structure but not
the large array for holding the floating point numbers for the matrix
elements.
As the problem is distributed, the time stepping procedure begins.
A very brief description of the routine Cycit is given in Figure 4. In this
description, lots of details have been omitted for clarity, and calls have
been included to a couple of routines that require further description.
ExchangeExternal
The routine ExchangeExternal is of particular interest for the parallel
implementation. The main loop of this routine is outlined in Figure 5.
When called by all processors with a vector and a scalar noel as arguments, an exchange of vector elements corresponding to external grid
blocks is performed between all neighboring processors. The parameter
noel is the number of vector elements exchanged per external grid block.
Some additional parameters that define the current partition, e.g., information about neighbors etc., also need to be passed to the routine,
but we have for clarity chosen not to include them in the figure. Though
some details are omitted, we have chosen to include the full MPI syntax
(using Fortran interface) for the communication primitives. The routine
pack, called by ExchangeExternal, copies appropriate elements from
vector into a consecutive work array. The external elements for a given
processor are specified by sendindex.
We remark that the elements can be stored directly into the appropriate
vector when received (since external blocks are ordered consecutively
for each neighbor), whereas the border elements to be sent need
to be packed into a consecutive work space before they are sent.
Note that we use the nonblocking MPI routines for sending and
receiving data. With use of blocking routines we would have had to
assure that all messages are sent and received in an appropriate order
to avoid deadlock. When using nonblocking primitives, the sends and
receives can be made in arbitrary order. A minor inconvenience with
use of the nonblocking routines is that the work space used to store
elements to be sent need to be large enough to store all elements a
processor is to send to all its neighbors.
Cycit(.)
  Initialization, grid partitioning, data distribution, etc.
  Set up first time step and the first Newton step
  num_sec_vars = number of secondary variables (in PAR) per grid block
  while Time < EndTime
    while not Newton converged
      call Multi(.)
      Newton converged = result from convergence test
      if Newton converged
        Update primary variables
        Increment Time, define new time step and set ...
      else
        if Iter <= MaxIter
          Solve linear system
          call Eos(.)
          call ExchangeExternal(PAR, num_sec_vars)
        if Iter > MaxIter or Physical properties out of range
          if the time step has been decreased too many times
            Stop execution
            Print message about failure to solve problem
          else
            Reduce time step
            call Eos(.)
            call ExchangeExternal(PAR, num_sec_vars)
    end while
  end while
Figure 4. Outline of the routine Cycit, executed by all processors.
ExchangeExternal(vector, noel)
  do i = 1, num_neighbors
    call MPI_IRECV(vector(recvstart), rlen,
         MPI_DOUBLE_PRECISION, proc, tag+myid,
         MPI_COMM_WORLD, req(2*i-1), ierr)
    call pack(i, vector, sendindex, slen, work(iw), noel)
    call MPI_ISEND(work(iw), slen,
         MPI_DOUBLE_PRECISION, proc, tag+proc,
         MPI_COMM_WORLD, req(2*i), ierr)
  end do
  call MPI_WAITALL(2*num_neighbors, req, stat, ierr)
Figure 5. Outline of the routine ExchangeExternal. When simultaneously called by all processors it performs an exchange of noel elements per external grid block for the data in vector.
Multi
The routine Multi is called to set up the linear system, i.e., the main
part of the computations in Multi is for computing the elements of
the Jacobian matrix. Computationally, Multi performs three major
steps. First it performs all computations that depend on individual
grid blocks. This is followed by computations of terms arising from
sinks and sources.
So far all computations can be made independently by all processors.
The last computational step in Multi is for interface quantities, i.e.,
computations involving pairs of grid blocks. Before performing this last
step, an exchange of external variables is required for the vectors X
(primary variables), DX (the last increments in the Newton process),
DELX (small increments of the X values, used to calculate incremental
parameters needed for the numerical calculation of the derivatives),
and R (the residual). The number of elements to be sent per external
grid block equals the number of equations per grid block, for all four
vectors. This operation is performed by calling ExchangeExternal before
performing the computations involving interface quantities.
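As an illustrative sketch (assuming a variable neq holding the number of equations per grid block; the actual variable names in the code may differ), this exchange amounts to four calls of the routine outlined in Figure 5:
c     exchange external elements of the four vectors used in the
c     interface computations (neq values per external grid block)
      call ExchangeExternal(X,    neq)
      call ExchangeExternal(DX,   neq)
      call ExchangeExternal(DELX, neq)
      call ExchangeExternal(R,    neq)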
Eos3 and other Eos routines
The thermophysical properties of fluid mixtures needed in assembling
the governing mass and energy balance equations are provided by a
routine called Eos (Equations of state). The main task for the Eos
routine is to provide values for all secondary (thermophysical) variables
as functions of the primary variables, though it also performs some
additional important tasks (see [10], pages 17-26 for details).
Several Eos routines are available for TOUGH2, and new Eos routines
will become available. However, Eos3 is the only one that has been used in this parallel implementation. In order to provide maximum
flexibility, we strive to minimize the number of changes that need to be made to the Eos routine when moving from the serial to the parallel implementation. This has been done by organizing data and
assigning appropriate values to certain variables before calling the Eos
routine. In the current parallel implementation, the Eos3 routine from
the serial code can be used unmodified, with the exception of some write statements. Though this still needs to be verified in practice, we believe that the current parallel version of TOUGH2 can also handle other Eos routines, with the only exception being some write statements
needing adjustments.
4.5. Cray T3E-The Target Parallel System
The parallel implementation of TOUGH2 is made portable through use
of the standard Fortran 77 programming language and the MPI Message
Passing Interface for interprocessor communication. The development
and analysis, however, have been performed on a 696 processor
Cray T3E-900 system.
The T3E is a distributed memory computer; each processor has its own local memory. Together with some network interface hardware, the processor (known as the Digital EV-5 or Alpha) and its local memory form a Processing Element (PE), which is sometimes called a node. All 696 PEs are
connected by a network arranged in a 3-dimensional torus. See, e.g., [1]
for details about the performance of the Cray T3E system.
5. Performance Analysis
Parallel performance evaluation has been performed for a 2D and a 3D
real application problem arising in the Yucca Mountain nuclear waste
site study. Results have been obtained for up to 512 processors of the
Cray T3E-900 at NERSC, Lawrence Berkeley National Laboratory.
The linear systems have been solved using BICGSTAB with 3 \Theta 3
Block Jacobi scaling and a domain decomposition based preconditioner
with the ILUT incomplete LU factorization. Different levels of overlapping
have been tried for this procedure, though all results presented
are for non-overlapping tests, which in general have shown to give
good performance. The stopping criterion used for the linear solver is based on the relative residual norm ||r|| / ||b||, where r and b denote the residual and the right hand side, respectively.
Both test problems require simulated times of 10^4 to 10^5 years,
which would require a significant execution time also with good parallel
performance and a large number of processors. In order to investigate
the parallel performance, we have therefore limited the simulated time
to 10 years for the 2D problem and 0.1 year for the 3D problem,
which still require enough time steps to perform the analysis of the
parallel performance. A shorter simulated time will of course give the initialization phase a disproportionately large impact on the performance
figures. The initialization phase is therefore excluded from the timings.
Tests have been performed using the K-way, the VK-way, and the
Recursive partitioning algorithms in METIS. As we will see later, different
orderings of the grid blocks lead to variations in the time discretization
following from the unstructured nature of the problem. This
in turn leads to variations in the number of time steps required and
thereby in the total amount of work performed. By trying all three
partitioning algorithms and choosing the one that leads to the best performance
for each problem and number of processors, we reduce these
somewhat "artificial" performance variations resulting from differences
in the number of time steps required. For all results presented, we
indicate which partitioning algorithm has been used.
5.1. Results for 2D and 3D Real Application Problems
The 2D problem consists of 17,584 blocks, 3 components per block and
43,815 connections between blocks, giving in total 52,752 equations.
The Jacobian matrix in the linear systems to be solved for each Newton
step is of size 52,752 \Theta 52,752 with 946,926 non-zero elements.
Figure 6. Execution time and parallel speedup on the 2D problem for 2, 4, 8, 16, 32, 64, and 128 processors on the Cray T3E-900.
The topmost graph in Figure 6 illustrates the reduction in execution time for increasing number of processors. The execution time is
reduced from 7504 seconds (i.e., 2 hours, 5 minutes, and 4 seconds) on
two processors to 126 seconds (i.e., 2 minutes and 6 seconds) on 128
processors.
The parallel speedup for the 2D problem is presented in the second
graph of Figure 6. Since the problem cannot be solved on one processor
with the parallel code, the speedup is normalized to be 2 on two processors, i.e., the speedup on p processors is calculated as 2T_2 / T_p, where T_2 and T_p denote the wall clock execution times on 2 and p processors,
respectively. For completeness we also report that the execution time
for the original serial code is 8245 seconds on the 2D problem.
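As a consistency check of this definition against the reported timings, the 128-processor speedup for the 2D problem is 2 T_2 / T_128 = 2 \Theta 7504 / 126 \approx 119.1, which agrees with the value quoted below.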
The 3D problem consists of 97,976 blocks, 3 components per block
and 396,770 connections between blocks, giving in total 293,928 equa-
tions. The Jacobian matrix in the linear systems to be solved for
each Newton step is of size 293,928 \Theta 293,928 with 8,023,644 non-zero
elements.
The topmost graph in Figure 7 illustrates the reduction in execution
time for the 3D problem for increasing number of processors. Memory
and batch system time limits prohibit tests on fewer than 16 processors. Results are therefore presented for 16, 32, 64, 128, 256, and 512 processors. The execution time is significantly reduced as the number of
processors is increased, all the way up to 512 processors. It is reduced
from 10055 seconds (i.e., 2 hours, 47 minutes, and 35 seconds) on 16 processors to 329 seconds (i.e., 5 minutes and 29 seconds) on 512
processors. The ability to efficiently use larger number of processors is
even better illustrated by the speedup shown in the second graph of Figure 7. The speedup is defined as 16T_16 / T_p since performance results are not available for smaller numbers of processors.
The results clearly demonstrate very good parallel performance up
to very large number of processors for both problems. We observe
speedups up to 119.1 on 128 processors for the 2D problem and up
to 489.3 on 512 processors for the 3D problem.
When repeatedly doubling the number of processors from 2 to 4,
from 4 to 8, etc, up to 128 processors for the 2D problem, we obtain
the speedup factors 1.58, 2.85, 2.19, 1.91, 1.96, and 1.62. For the 3D
problem, the corresponding speedup factors when repeatedly doubling
the number of processors from 16 to 512 processors are 2.70, 2.28, 1.95,
1.50, and 1.69.
As 2.00 would be the ideal speedup each time the number of processors
is doubled, speedups such as 2.70 and 2.28 for the 3D problem are often called super-linear. We will present the explanations
for this in later sections.
Figure 7. Execution time and parallel speedup on the 3D problem for 16, 32, 64, 128, 256 and 512 processors on the Cray T3E-900.
Overall the parallel performance is very satisfactory, and we complete
this analysis by providing some insights and explaining the super-linear
speedup.
5.2. An Unstructured Problem
In the ideal case, the problem can be evenly divided among the processors
not only with approximately the same number of internal grid
blocks per processor, but also roughly the same number of external
blocks per processor. Our problems, however, are very unstructured,
which means that the partitioning cannot be made even in both of these aspects.
This leads, for example, to imbalances between the number of external
elements per processor when the internal blocks are evenly distributed. For the 3D problem on 512 processors, the average number
of external grid blocks is 234 but the maximum number of external
blocks for any processor is 374. It follows that at least one processor
will have 60% higher communication volume than the average processor
(assuming the communication volume to be proportional to the number
of external blocks). Note here that the average number of internal grid
blocks is 191 for the same case. This means that the average processor actually has more external blocks than internal blocks. Finally, the
average number of neighboring processors is 12.59 and the maximum
number of neighbors for any processor is 25.
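The 60% figure above follows directly from these counts: 374 / 234 \approx 1.60, i.e., the most heavily loaded processor handles roughly 60% more external blocks than the average processor.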
Altogether this indicates that the communication pattern is irregular
and that the amount of communication is becoming significant both in
terms of number of messages and total communication volume. At the
same time, the amount of computations that can be performed without
external elements is becoming fairly small.
Despite these difficulties, the parallel implementation shows the ability to efficiently use a large number of processors: every half second of wall
clock time, on 512 processors, a new linear system of size 293,928 \Theta
293,928 with 8,023,644 non-zero elements is generated and solved. This
includes the time for the numerical differentiation for all elements of the
Jacobian matrix, the 3 \Theta 3 Block Jacobi scaling for each block row, the
ILUT factorization for the domain decomposition based preconditioner,
and a number of BICGSTAB iterations.
5.3. Analysis of Work Load Variations
Several issues need to be considered when analyzing the performance as
the number of processors is increased. First, the sizes of the individual
tasks to be performed by the different processors are decreased, giving an
increased communication to computation ratio, and the relative load
imbalance is also likely to increase. In addition, we may find variations
in how the time discretization is performed (the number of time steps)
and the number of iterations in the Newton process and the linear
solver. In order to conduct a more detailed study, we present a summary
of iteration counts and timings for the two test problems in Table II.
The table shows the average number of Newton iterations per time
step, and the average number of iterations in the linear solver per time
step and per Newton step, as well as the total number of time steps,
Newton iterations, and iterations in the linear solver. We recall that the
linear system solve is the most time consuming operation and the computation
of the Jacobian matrix is the second largest time consumer.
Both of these operations are performed once for each Newton step.
For both problems we note that some variations occur in the time
discretization when the problem is solved on different numbers of processors. Similar behavior has been observed, for example, when using
different linear solvers in the serial version of TOUGH2. The variations
in time discretization leads to variations both in the number of time
steps needed and the number of Newton iterations required. Notably,
the 4-processor execution on the 2D problem requires 15% more time
steps, 35% more Newton steps, and 91% more iterations in the linear
solver compared to the execution on 2 processors. This increase of work
fully explains the low speedup on 4 processors. Similar variations in the
amount of work also contribute to a very good speedup for some cases.
However, the figures in Table II alone do not fully explain the super-linear
speedup observed for some cases. We will therefore continue our
study by looking at the performance of the linear solver. Before doing
that, however, we show some examples that motivates this continued
study, i.e., cases where the speedup actually is higher than we would
expect from looking at iteration counts only.
For example, on the 2D problem the speedup on 8 processors is
12.4% larger than the maximum expected (i.e., 8.99 vs. 8.00), but compared
to the execution on two processors, the 8 processor execution actually
requires slightly more Newton iterations and iterations in the linear
solver. The number of time steps is the same for both tests. When
doubling the number of processors from 8 to 16, we see another factor
of 2.19 in speedup, even though the reduction in number of Newton
iterations and iterations in the linear solver is only 2.4% and 9.6%,
respectively. The speedup on the 3D problem from 16 to 32 processors
(2.70) and from 32 to 64 processors (2.28) is also higher than what
would be expected by looking at Table II alone.
So far, we can summarize the following observations for the two
problems. The unstructured nature of the problem naturally leads to
variations in the work load between different tests. This alone explains
Table II. Iteration counts and execution times for the 2D and 3D test problems.
2D problem
#Processors 2 4 8 16 32 64 128 256
Partitioning algorithm VK VK K Rec. Rec. Rec. K Rec.
#Time steps 104 120 104 104 104 94 94 103
Total #Newton iterations 645 869 669 653 663 697 620 637
#Newton iter./Time step 6.20 7.24 6.43 6.28 6.38 7.41 6.60 6.18
Total #Lin. solv. iterations 8640 16528 10934 9888 11011 11282 11894 19585
#Lin. solv. iter./Newton step 13.40 19.02 16.34 15.14 16.61 18.46 19.18 30.75
#Lin. solv. iter./Time step 83.1 137.1 105.1 95.1 105.9 120.0 126.5 190.1
Time spent on Lin. solv.
Time spent on other
Total time
3D problem
#Processors 16 32 64 128 256 512
Partitioning algorithm K Rec. K K Rec. Rec.
#Time steps 154 149 143 137 185 166
Total #Newton iterations 632 606 585 561 708 646
#Newton iter./Time step 4.10 4.07 4.09 4.09 3.83 3.89
Total #Lin. solv. iterations 8720 10275 9357 10362 14244 14487
#Lin. solv. iter./Newton step 13.80 16.96 15.99 18.47 20.12 22.43
#Lin. solv. iter./Time step 56.6 69.0 65.4 75.6 77.0 87.3
Time spent in Lin. solv.
Time spent on other
Total execution time
some of the speedup anomalies observed, but for a couple of cases, it is
evident that there are other issues to be investigated. We therefore continue
this study by focusing on the performance of the linear solver and the
preconditioner.
5.4. Performance of Preconditioner and Linear Solver
A breakup of the speedup in one part for the linear solver (including
preconditioner) and one for all other computations (mainly assembly of
the Jacobian matrix) is presented for both problems in Figure 8. The
figure illustrates that the super-linear speedup for the whole problem
follows from super-linear speedup of the linear solver. Note that the
results presented are for the total time spent on these parts, i.e., a
different number of linear systems to be solved or a difference in the
number of iterations required to solve a linear system affects these
numbers.
The speedup of the "other parts" is close to p for all tests on both
problems, and this is also an indication that this part of the computation
may show good performance also for larger numbers of processors. The slight decrease on 256 and 512 processors for the 3D problem is due to the increased number of time steps.
We conclude that the performance of "the other parts" is satisfying
and that it needs no further explanations. We continue with the study
of the super-linear speedup of the linear solver.
5.4.1. Effectiveness of the Preconditioner
The preconditioner is crucial to the number of iterations per linear
system solved. The domain decomposition based process is expected to
become less efficient as the number of processors increases. The best
effect of the preconditioner is expected when the whole matrix is used in
the factorization, but in order to achieve good parallel performance, the
size for the preconditioning operation on each processor is restricted to
its local subdomain. On average, the matrix used in the preconditioning
by each processor is of size (n/p) \Theta (n/p), where n is the size of the whole (global)
matrix and p is the number of processors. The reduced effectiveness
follows naturally from the smaller subdomains, i.e., the decreased size
of the matrices used in the preconditioner, since only diagonal blocks
are used to calculate an approximate solution.
The number of iterations required per linear system for the two test
problems confirms this theory (see Table II). For both test problems
the number of iterations required per linear system increases with the
number of processors (with exceptions for going from 4 to 8 and 8 to
16 processors for the 2D problem and 32 to 64 for the 3D problem).
Figure 8. Breakup of speedup for the 2D and 3D problems in one part for the linear solver (marked "r") and one part for all other computations (marked "4"). The ideal speedup is defined by the straight line.
We have also included the results for 256 processors on the 2D
problem in Table II. We note an increase in the number of iterations
per linear system by more than 50% compared to 128 processors. It is
clear that the preconditioner does not perform a very good job when
the number of processors is increased to 256. By introducing one level
of overlapping (Additive Schwarz) in the domain decomposition based
preconditioner on 256 processors, the number of iterations in the linear
solver is reduced to the same order as for smaller numbers of processors. This is, however, done at the additional cost of performing the
overlapping and the overall time is roughly unchanged.
Our main observation is that despite the overall increasing number
of iterations in the linear solver for increasing number of processors,
the speedup of the linear solver is higher than what would normally be
expected up to a certain number of processors. The increased number
of iterations per linear system for larger numbers of processors obviously follows from the reduced effectiveness of the preconditioner.
In the following sections, we will conclude the performance analysis
by investigating the parallel performance of the actual computations
performed during the preconditioning and linear iteration processes.
5.4.2. Performance of the Preconditioner
Another effect of the decreased sizes of the subdomains in the domain
decomposition based preconditioner is that the total amount of work to
perform the incomplete LU factorizations becomes significantly smaller
as the number of processors is increased.
For example, as the number of processors is doubled, the size of each processor's local matrix in the ILUT factorization is decreased by a factor of 4, on average from (n/p) \Theta (n/p) to (n/2p) \Theta (n/2p). Hence, the amount of work per processor is reduced by a factor between 2 and 8, depending on the sparsity structure. As a consequence, the amount of work per processor
is reduced faster than we normally expect when we assume the ideal
speedup to be 2.
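A sketch of the reasoning behind the range 2 to 8 (assuming the local ILUT cost grows like m^\alpha in the local matrix dimension m = n/p, with \alpha between 1 and 3 depending on the fill and sparsity): doubling p gives
work(n/p) / work(n/2p) = (n/p)^\alpha / (n/2p)^\alpha = 2^\alpha, which lies between 2 and 8,
so the per-processor factorization work shrinks faster than the factor of 2 assumed in the ideal-speedup baseline.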
Figure
9 gives a further breakup of the speedup, now with the
speedup for the linear solver separated into one part for the preconditioner
(i.e., the ILUT factorization) and one for the other parts of
the linear solver.
The preconditioner shows a dramatic improvement of the performance
as the number of processors increases, following naturally from
the decreased work in the ILUT factorization. As we continue, this will
turn out to be the single most important explanation for the sometimes
super-linear speedup.
Figure 9. Breakup of speedup for the 2D and 3D problems in one part for the linear solver excluding the ILUT factorization for the preconditioner (marked "r"), one part for the ILUT factorization (marked "*") and one part for all other computations (marked "4"). The ideal speedup is defined by the straight line.
5.4.3. Performance of the Linear Iterations
We have explained the super-linear speedup of the preconditioning part
of the linear solver, presented in Figure 9, but we also observe only a
modest speedup of the other parts of the computation.
The low speedup for the other parts is partly explained by the increased
number of iterations, as seen in the previous section. Increased
communication to computation ratio and slightly increased relative
load imbalance are other factors.
The figures presented in Section 5.2, showing a large number of external elements per processor and an imbalance in the number of external elements per processor for the 3D problem on 512 processors, already indicated that the communication to computation ratio would eventually become large. The performance obtained for the
iteration process in the linear solver supports this observation.
5.4.4. Impacts on the Overall Performance
In order to fully understand the total (combined) effect of the super-linear
behavior of the preconditioner and the moderate speedup of
the other parts of the linear solver, we now only need to investigate
how large a proportion is spent on preconditioning out of the total time
required for solving the linear systems. This is illustrated in Table III.
Table III. Time spent on preconditioning as percentage of total time spent in the linear solver.
2D problem
#Processors 2 4 8 16 32 64 128
Percentage 83.5% 77.1% 73.9% 69.2% 61.6% 49.9% 36.4%
3D problem
#Processors 16 32 64 128 256 512
Percentage 88.4% 78.9% 73.0% 66.0% 58.6% 39.4%
As the number of processors becomes large, the amount of time
spent on preconditioning becomes small compared to the time spent
on iterations, whereas the relation is the opposite for 2 processors on
the 2D problem and for 16 processors on the 3D problem. For example,
for 16 processors on the 3D problem 88.4% of the time in the linear
solver was spent on the factorization for the preconditioner, whereas
the corresponding number is only 39.4% for 512 processors. As long as
the preconditioner consumes a large portion of the time, its super-linear
speedup will have significant effects on the overall performance of the
implementation.
It is evident that, up to a certain number of processors, the super-linear
speedup of the incomplete LU factorization in the domain decomposition
based preconditioner is sufficient to give super-linear speedup
for the whole application. As the number of processors becomes large, the factorization consumes a smaller proportion of the execution time, and hence its super-linear behavior has less impact on the overall performance. Instead, there are other issues that become more critical for
large number of processors, such as the increased number of iterations
in the linear solver.
6. Conclusions
This contribution presents the design and analysis of a parallel prototype
implementation of the TOUGH2 software package. The parallel
implementation has been shown to efficiently use up to at least 512 processors of the Cray T3E system. The implementation is constructed to
have flexibility to use different linear solvers, preconditioners, and grid
partitioning algorithms, as well as alternative Eos modules for solving
different problems. Computational experiments on real application
problems show high speedup for up to 128 processors on a 2D problem
and up to 512 processors on a 3D problem.
The results are accompanied by an analysis that explains the good
parallel performance observed. It also explains some minor variations
in performance following from the unstructured nature of the problem
and some super-linear speedups following from decreased work in the
preconditioning process.
The results also illustrate the trade-off between the time spent on
preconditioning and the effect of its result. With the objective of minimizing the wall clock execution time, we note that, for these particular problems, smaller subdomains could be used, at least on small numbers of processors.
We have seen some variations in performance in tests using three
different partitioning algorithms. Some of these variations clearly follow
from variations in the amount of work required, e.g., due to differences
in the time discretization. Further analysis is required in order to determine
whether these variations follow some particular pattern or if
they are only a result of unpredictable circumstances.
The problems we are targeting in the near future are larger both in
terms of the number of blocks and the number of equations per block. Moreover, the simulation time should be significantly longer. With increased problem size we expect to be able to efficiently use an even larger number
of processors (if available), and longer simulations should not directly
affect the parallel performance.
Future investigations include studies of alternative non-linear solvers
and further studies of the interplay between the time stepping proce-
dure, the non-linear systems, and the linear systems. Evaluations of
different linear solvers, preconditioners and parameter settings would
be of general interest and may help to further improve the performance
of this particular implementation. A related study of partitioning algorithms
has recently been completed [4].
Acknowledgements
We thank Karsten Pruess, the author of the original TOUGH2 software,
for valuable discussions during this work, and the anonymous referees
for constructive comments and suggestions.
This work is supported by the Director, Office of Science, Office of
Laboratory Policy and Infrastructure, of the U.S. Department of Energy
under contract number DE-AC03-76SF00098. This research uses
resources of the National Energy Research Scientific Computing Center,
which is supported by the Office of Science of the U.S. Department of
Energy.
--R
'Aztec User's Guide.
'TOUGH User's Guide'.
Domain Decomposition.
--TR
BI-CGSTAB: a fast and smoothly converging variant of BI-CG for the solution of nonsymmetric linear systems
Domain decomposition
A parallel implementation of the TOUGH2 software package for large scale multiphase fluid and heat flow simulations
Performance of the CRAY T3E multiprocessor | performance analysis;software design;grid partitioning;groundwater flow;preconditioners;iterative linear solvers |
604412 | Asynchronous Transfer Mode and other Network Technologies for Wide-Area and High-Performance Cluster Computing. | We review fast networking technologies for both wide-area and high performance cluster computer systems. We describe our experiences in constructing asynchronous transfer mode (ATM)-based local- and wide-area clusters and the tools and technologies this experience led us to develop. We discuss our experiences using Internet Protocol on such systems as well as native ATM protocols and the problems facing wide-area integration of cluster systems. We are presently constructing Beowulf-class computer clusters using a mix of Fast Ethernet and Gigabit Ethernet technology and we anticipate how such systems will integrate into a new local-area Gigabit Ethernet network and what technologies will be used for connecting shared HPC resources across wide-areas. High latencies on wide-area cluster systems led us to develop a metacomputing problem-solving environment known as distributed information systems control world (DISCWorld). We summarize our main developments in this project as well as the key features and research directions for software to exploit computational services running on fast networked cluster systems. | Introduction
Cluster computing has become an important part of the high-end computing world.
Many of the applications traditionally run on high-end supercomputers are now successfully
run on computer clusters. In this paper we describe our experiences in building
high performance computer clusters and in linking them together as clusters of
clusters over wide areas. This approach towards an Australian "National Computer
Room" was started in 1996 and led us to two important conclusions about wide-area
computing. Firstly from a technical standpoint, the limitations of latency and of net-work
reliability are likely to be more important than bandwidth limitations in the future
and therefore successful wide-area systems need to be constructed accordingly.
Secondly, the political and administrative issues behind supercomputer resource location
and ownership mean that in the future it will be attractive to loosely cluster compute
resources nationally and even internationally. Institutions have strong reasons to wish
to retain ownership and control of their own computer resources. Clusters of local re-
sources, each of which itself may be a cluster of some sort, are therefore an attractive
architecture to target for software development.
In this article we try to divide cluster computing systems issues into those relevant
to achieving high performance (typically on local area compute clusters) and those
issues pertaining to the successful sharing of resources between institutions (typically
wide-area issues).
The size and utilisation of computer clusters varies enormously. The recent IEEE
Workshop on Cluster Computing [3] demonstrated this as well as the different uses
of the term cluster computing. We will use cluster in its broader sense to describe a
collection of networked compute resources that may be heterogeneous in hardware and
in operating systems software, but which are capable of cooperating in some way on a
distributed application. This may be using any of the parallel computing software models
such as PVM[8], MPI[25] or HPF[17] or through some higher granularity object
or service based system like Globus[7], Legion[10], CORBA[27] or DISCWorld[16].
We use the term Beowulf-class cluster in the sense in which it is now widely adopted, to describe
a cluster that is dedicated in some sense, and is not just a random collection of networked
computers. The term "constellation" is now being adopted to describe a compute
cluster that uses shared memory or bus based technology. We focus on clusters
and Beowulf-class systems in describing our work in this paper.
At the time of writing the watershed sizes for clusters are limited by the number of
ports on commercially available network switches. For example, many groups report
work on modest clusters of 8 or 16 processors. We continue to operate our prototype 8
node Beowulf system for student experiments. Modern switches are typically capable
of interconnecting 16, 24 or even 48 compute nodes using Fast Ethernet at 100MBit/s
bandwidth between all nodes. The limits in size from a single switch therefore might be
used as the delimiting size for distinguishing large and small computer clusters. This
number is at present larger than the number of processors that can be economically
linked using shared memory technology. High end systems such as the ASCI systems
and Tera computer architecture do achieve considerably higher number of nodes interlinked
with shared memory, but can hardly be described as commodity economical
architectures.
Switches such as the 510 series from Intel allow switches to be stacked using high
bandwidth links between individual switches to preserve the full point to point band-width
between compute nodes. We have constructed a 120 node Beowulf class system
using this flat switch architecture and dual processor nodes. The Intel 510 series is
limited to linking 7 of these 24 port switches in this manner and this limit of 168
networked nodes represents well the limitations of current network technology. Fast
Ethernet technology is used to link individual nodes, with Gigabit Ethernet uplinking
used between the switch backplane and the file server node. Beowulf systems larger
than this are being built, but must use some sort of hierarchical network architecture
and are no longer "as commodity" in nature.
It is likely that network technology will move forward with economics and technical
improvements and that commodity switchgear will soon allow bigger systems.
Such systems will therefore continue to encroach on the traditional supercomputing
market area.
Scalable software to run on such systems is available for some applications prob-
lems, but our experience is that some effort is still needed to ensure the major contribution
towards latency in parallel applications running on clusters does not remain in
the operating system kernel or communications software. Achieving low latency in
sending messages between nodes in a Beowulf system appears to be a more important
limitation at present than achieving high bandwidth.
We describe our experiences in constructing ATM-based clusters in section 2. We
attempt to summarise our metacomputing approach to cluster management in section 3
and describe how some of the latency and distributed computing problems, that are so
important to wide-area clusters, can be tackled. We review some of the networking
technologies actively used for cluster computing systems in section 4 and discuss their
relative technical and economic advantages.
In this section we describe some experiences from an experimental wide area system
we ran from 1996 to 1998, employing local and wide area ATM technology to connect
high performance computing clusters around Australia. The Research Data networks
(RDN) Cooperative Research Centre (CRC) was set up as a joint venture involving our
own University of Adelaide, the Australian National University in Canberra, Monash
University in Melbourne, the University of Queensland and Telecom Australia (Tel-
stra). The map in Figure 1 shows the initial broadband network we were able
to set up and use. It connected Adelaide, Melbourne, Sydney, Canberra and Brisbane;
some of its latency and performance aspects are discussed below. We trialed links of
34Mbit/s and 155Mbit/s over various branches of the network and experimented with
various combinations of: cross mounted long distance filesystems; video conferencing;
simple shared whiteboard style collaborative software; and cross connected computer
clusters. We describe the Asynchronous Transfer Mode (ATM) technology used for
these experiments in section 4.
The main features of our experiments were to analyse the effects of having relatively
high wide area bandwidths available for cluster computing yet having perceptibly
high latencies arising from the long distances. We were able to construct well-connected
compute clusters having the peculiar bisection bandwidth and latency properties
that arise from having part of the cluster at each geographically separate site.
Although a number of wide area broadband networks have been built in the USA
[4, 26], it is unusual to have a network that is fully integrated over very long distances
rather than local area use of ATM technology. Telstra built the Experimental Broad-band
Network (EBN) [21] to provide the foundation for Australian broadband application
development. Major objectives of this were to provide a core network for service
providers and customers to collaborate in the development and trial of new broadband
applications, and to allow Telstra and developers to gain operational experience with
public ATM-based broadband services.
Figure 1: Telstra's Experimental Broadband Network (EBN). The EBN is a 34Mbit/s ATM network connecting research and commercial partner sites.
Figure 2 shows how our prototype storage and processing hardware were arranged
at both Adelaide and Canberra (separated by some 1200km) and connected by the
EBN. The Adelaide and Canberra cells consisted of a number of DEC AlphaStations,
interconnected locally by 155MBit/s ATM, and with the cells at the two sites connected
across the EBN at 34MBit/s.
We experimented extensively with cluster computing resources at Adelaide and
Canberra in assessing the capabilities of a distributed high performance computing
system operating across long distances. It is worth considering the fundamental limitations
involved in these very long distance networks.
Although we employ OC-3c (155MBit/s) multi-mode fibre for local area networking
we were restricted to an E-3 (34MBit/s) interface card to connect Adelaide to Melbourne
and hence to Canberra. In practice this was not a severe limitation as for realistic
applications we were unable to fully utilise a 155MBit/s link over wide areas
except in the most contrived circumstances. This would not be the case for a shared
link however.
Figure 2: DHPC Project Hardware Resources at Adelaide and Canberra connected via Telstra's EBN. While traffic can be sent between sites via the 'ordinary' Internet, bandwidth-intensive experiments were carried out by specifically routing traffic over the EBN.
The line-of-sight distances involved in parts of the network were considerable; Adelaide/Melbourne, for example, is some
732 km. Consequently the effective network distances between Adelaide and the other
cities are shown in table 1. The light-speed-limited latencies shown in table 1 are calculated
on the basis of the vacuo light-speed (2.9978 x 10^5 km s^-1). It should therefore be
noted that this is a fundamental physics limitation and does not take into consideration
implementation details.
Table 1: Inter-city distances (from Adelaide) and light-speed-limited latencies for the Experimental Broadband Network.
EBN City     Network Distance from Adelaide (km)     Light-speed-Limited Latency (ms)
Melbourne    660                                     2.2
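The light-speed bound in Table 1 is easy to reproduce. The following minimal Python sketch (the helper name is ours, not part of the original work) simply divides a network distance by the vacuum light-speed quoted above:

    # Light-speed-limited one-way latency for a given network distance.
    C_VACUUM_KM_PER_S = 2.9978e5          # vacuum light-speed in km/s

    def light_speed_latency_ms(distance_km: float) -> float:
        """Lower bound on one-way latency imposed by physics alone."""
        return distance_km / C_VACUUM_KM_PER_S * 1000.0

    if __name__ == "__main__":
        # Table 1 entry: Adelaide-Melbourne, 660 km network distance.
        print(round(light_speed_latency_ms(660.0), 1))   # -> 2.2 (ms)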
We made some simple network-performance measurements using the Unix ping
utility, which uses the Internet Control Message Protocol (ICMP), for various packet
sizes which are initiated at Adelaide, and bounced back from a process running at
other networked sites. These are shown in table 2. By varying the packet sizes sent it
is possible to derive crude latency and bandwidth measurements. It should be emphasized
these measurements are approximations of what is achievable and are only for
comparison with the latency limits in table 1.
The times in table 2 were averaged over 30 pings and represent a round-trip time.
Measurements are all to a precision of 1 ms except for those for Syracuse which had a
significant packet loss and variations that suggest an accuracy of 20 ms is appropriate.
Table 2: Approximate performance measurements using ping.
Ping Packet Size   Mean Time, Canberra   Mean Time, Syracuse USA   Mean Time, local machine   Mean Time, local machine
(Bytes)            via EBN (ms)          via Internet (ms)         via ethernet (ms)          via ATM switch (ms)
1008
2008
8008               22                    441
The ping measured latency between Adelaide and Canberra appears to be approximately 15 ms. This is to be compared with the theoretical limit for a round-trip of
7.6 ms. The switch technology has transit delays of approximately 10 microseconds per switch.
Depending upon the exact number of switches in the whole system this could approach
a measurable effect but is beyond the precision of ping to resolve. We believe that allowing
for non-vacuo light-speeds in the actual limitation our measured latency is close
(within better than a factor of two) to the best achievable. We believe that variations
caused by factors such as the exact route the EBN takes, the slower signal propagation
speed over terrestrial copper cables, the routers and switch overheads and small
overheads in initiating the ping, all combined, satisfactorily explain the discrepancy in
latencies. The EBN appears to provide close to the best reasonably achievable latency.
Also of interest is the bandwidth that can be achieved. The actual bandwidth
achieved by a given application will vary depending upon the protocols and buffering
layers and other traffic on the network, but these ping measurements (two-way transfer
of the 8 kB packets, after subtracting the measured latency from the 22 ms round trip)
suggest an approximate value of 22.7 MBit/s. This represents approximately 84% of the
27 MBit/s of bandwidth available to us, on what was an operational network.
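A crude way to turn ping round-trip times into latency and bandwidth figures of this kind is sketched below. The packet sizes and round-trip times in the example are illustrative assumptions in the spirit of Table 2, not the exact measurements, and the helper name is ours:

    # Crude latency/bandwidth estimates from pings with two packet sizes.
    def estimate_from_ping(small_bytes, small_rtt_ms, large_bytes, large_rtt_ms):
        """Latency ~ half the small-packet RTT; bandwidth from the extra time
        the larger packet needs to cross the link twice (there and back)."""
        latency_ms = small_rtt_ms / 2.0
        extra_bits = 2 * 8 * (large_bytes - small_bytes)      # both directions
        extra_s = (large_rtt_ms - small_rtt_ms) / 1000.0
        bandwidth_mbit_s = extra_bits / extra_s / 1e6
        return latency_ms, bandwidth_mbit_s

    if __name__ == "__main__":
        # Illustrative numbers only: 64 B and 8008 B packets over a wide-area link.
        lat, bw = estimate_from_ping(64, 15.0, 8008, 22.0)
        print(round(lat, 1), "ms,", round(bw, 1), "MBit/s")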
The Unix utility ttcp was useful for determining bandwidth performance measurements
which are outwith the resolution possible with ping. A typical achievable bandwidth
between local machines on the operational 155MBit/s fibre network is 110.3 MBit/s,
compared with a typical figure on local 10MBit/s ethernet of 6.586MBit/s. Both
these figures are representative of what was a busy network with other user traffic on
it.
In 1998 and 1999 an additional link connecting Australia and Japan was made
available to us with a dedicated 1MBit/s of bandwidth available for experimentation.
We carried out some work in collaboration with the Real World Computing Partner-
ship(RWCP) [28] in Japan and found that there was an effective latency of around
200ms single trip between Adelaide and Tsukuba City in Japan over this ATM network.
All these experiences suggest that the major limitations for wide area cluster computing
in the future are more likely to be from latency limitations rather than from
bandwidth limitations.
The Telstra Experimental Broadband network was closed at the end of 1998 and
we have had an alternative ATM network supplied by Optus Pty Ltd made available to
us in 1999. This network has similar latency characteristics but has the advantage that
we can apply to burst up to a full 155MBit/s between Adelaide and Canberra at will. In
practice, we have found that 8MBit/s is entirely adequate for our day to day wide-area
cluster computing operations.
It is interesting to reflect on our experiences in setting up and using these net-
works. Staff at Telstra's research laboratories were instrumental in setting up the initial
circuits and allocating bandwidth to our experiments. As the trials proceeded this became
a well automated process with little human intervention required except when we
needed to split bandwidth allocations between video, cluster and file system circuits
for example. One of the problems with the experimental set up was that only permanent
virtual circuits (PVCs) were available to us and that the network configuration
(on the routing hosts) needed to be changed (manually) each time a reconfiguration
was required. One of the promises of the ATM standard is that of Switched Virtual
Circuits (SVCs) which could be manipulated in software to reallocate bandwidth to
different applications. Although we were able to experiment with pseudo SVCs using
the proprietary facilities of our ATM switch gear, it was not possible to enable proper
switched virtual circuits across different vendors' switch gear at the different sites in our
collaboration. To our knowledge this is still not available on wide area ATM networks
and is, we believe, a disappointing limitation of this technology in practice.
We expended considerable time and effort in carrying out these experiments, which
was particularly difficult when we had to reconfigure routers and operating systems
and driver software every time we needed to carry out a different experiment. ATM
technology has matured since the start of our experiments but we believe it will not
be the technology of choice at the cluster computing level. ATM is likely to still find
a place as a network backbone technology, certainly for long distance networks and
perhaps even still in local area networks. Driver support and administration tools for
ATM based cluster computing do not appear to be forthcoming and consequently we
believe it much more likely that Internet Protocols implemented on top of ATM or other
technologies perhaps will be much more useful for cluster computing.
3 Metacomputing Cluster Management
Our vision at the start of our wide area network trials was of a "National Computer
Room". We imagined this as a network and software management infrastructure that
would enable institutions to share access to scarce high-performance computing re-
sources. This seemed important given the scarcity of supercomputing systems in Australia
in particular. We have now revised this vision however. While there are still some
areas in which a computer sharing mechanism is important, the whole supercomputing
industry has continued to be shaken up. Global reduction in the number of supercomputer
vendors still in business can be attributed to a number of factors, but not least of
these is the greater availability of cluster computing resources. The need for institutions
to own and control their own resources appears to be a strong one, and relatively
cheap cluster computing systems enable this phenomenon. We believe therefore that
although there is still a need for software to enable compute resource sharing, the resources
are more likely to be clusters themselves and this affects the characteristics of
software and applications that can integrate them.
Software such as Netsolve, Globus, Legion, Ninf and our own DISCWorld system
are all aimed at enabling the use of distributed computing resources for applications.
Our favoured approach is to encapsulate applications (including parallel ones) as services
that can run in a metacomputing environment and we have focussed our efforts
in to building Java middleware to allow this. Other approaches have taken experiences
from the parallel computing era to allow wide area systems to behave as a more tightly
coupled parallel system running multi processor applications across the wide area net-
works. The choice is a matter of granularity and of how to group parts of the application
together. The wide area parallel approach is attractive for its simplicity in that the
tools and technologies from parallel computing can be redeployed with only software
re-engineering efforts. The metacomputing approach recognizes that networks (especially
wide-area ones) will remain unreliable, and that long distance latencies will not
improve (short of some new physics breakthrough) and that the fundamental distributed
computing problems need to be considered when building wide-area cluster systems.
Distributed Information Systems Control World (DISCWorld) [16] is a metacomputing
model or framework with a series of prototype systems developed to date. The
basic unit of execution in the DISCWorld is that of a service. Services are pre-written
software components or applications. These are either written in Java or are legacy
codes that have been provided with a Java wrapper. Users can compose a number of
services together to form a complex processing request. Jobs can be scheduled across
the participating nodes [19]. An example DISCWorld application is a land planning
system [5], where a client application requires access to land titles information at one
site, and digital terrain map data at another, and aerial photography or satellite imagery
stored at another site. DISCWorld itself does not build on parallel computing
technology, but can embed parallel programs as services. A support module known as
JUMP provides integration with message-passing parallel programs which might run
on a conventional supercomputer or on a cluster [13].
An environment such as DISCWorld is ideal for use across a cluster or a distributed
network of clusters. The nodes within the cluster can be used as federated hosts connected
via a high-speed, dedicated network. Some or all of the nodes can be used as
a parallel processor farm, running those services which are implemented as parallel
programs. We employed both these approaches in our multi cluster experiments across
our ATM network. The ability to make intelligent decisions on where to schedule services
in the DISCWorld environment, in order to minimize either execution time or
total resource cost relies on the ability to characterise both the services and the nodes
in the environment.
4 Fast Networking Technologies
In this section we review the major networking technologies and describe their roles
for wide area and high-performance cluster computing. We discuss Asynchronous
Transfer Mode (ATM); Fast Ethernet; Gigabit Ethernet; Scalable Coherent Interconnect
(SCI); and Myrinet technologies. Each of these technologies may have differing
roles to play in wide-area and cluster computing over the next few years.
Table 3 summarises the networking technologies we consider here. The data in the
table has been combined from our own benchmarking experiments as well as reports
in the literature [6, 22, 23, 29]. It can be seen that for problems which do not require
gigabit-order bandwidth Fast Ethernet seems a logical choice. If gigabit-order band-width
is required, the decision is not so clear-cut: there is a complex trade-off between
the bandwidth and latency attainable, the approximate cost per node, and whether the
technology is considered commodity. When we use the term commodity, we refer to
the fact that the technology is available to the mass market, and is available from more
than a few specialist vendors.
Technology          Theoretical Bandwidth   Measured TCP/IP Bandwidth   Latency   Approx Cost Per Node ($US)
Ethernet            10MBit/s                6.58MBit/s                  1.2ms     80
Fast Ethernet       100MBit/s               68MBit/s                    1ms       150
ATM                 155MBit/s               110.3MBit/s                 1ms       2500
Gigabit Ethernet    1000MBit/s              950MBit/s                   12ms      1200
SCI                 1600MBit/s              106MBit/s                   4 microseconds    1400
Myrinet             1200MBit/s              1147MBit/s                  117 microseconds  1700
Table 3: Interconnection network technology characteristics as measured and also reported in the open literature.
We were able to trial our ATM network of clusters using sponsorship from the
Australian Commonwealth government. Table 3 shows that ATM is still not an especially
economic choice. It is our belief that Gigabit Ethernet technology will be widely
adopted and that it will be correspondingly driven down in price.
4.1 ATM
Asynchronous Transfer Mode (ATM) [1] is a collection of communications protocols
for supporting integrated data and voice networks. ATM was developed as a standard
for wide-area broadband networking but also finds use as a scalable local area networking
technology. ATM is a best effort delivery system - sometimes known as bandwidth-
on-demand, whereby users can request and receive bandwidth dynamically rather than
at a fixed predetermined (and paid for) rate. ATM guarantees the cells transmitted in a
sequence will be received in the same order. ATM technology provides cell-switching
and multiplexing and combines the advantages of packet switching, such as flexibility
and efficiency of intermittent traffic, with those of circuit switching, such as constant
transmission delay and guaranteed capacity. ATM uses a point-to-point, full-duplex
transmission medium, and provides connection-oriented protocols.
In addition to supporting native ATM protocols such as the ATM Adaptation Layers
(AAL) 3/4 and 5, the use of LANE allows the ATM network to be viewed as part
of a TCP/IP network (to allow multicast and broadcast). There have been a number of
studies that consider the use of the AALs for high-speed interconnections in parallel
computing [20]. We employ a 155MBit/s local-area ATM network [14, 15] as well as
a 34MBit/s broadband network (EBN).
ATM allows guaranteed bandwidth reservation across a circuit (a link or a number
of links with defined end-points). There are three different types of bandwidth reser-
vation: constant bit rate (CBR); variable bit rate (VBR); and available bit rate (ABR).
CBR is used when the traffic between sites is constant and will rarely, if ever change in
bandwidth requirements. VBR is used to characterise traffic that has a mean bandwidth
requirement that can change slightly. Traffic along a VBR circuit can operate at the cir-
cuit's maximum bandwidth only for short amounts of time; intermediate switches may
drop cells that exceed the VBR parameters. Finally ABR exists for bursty traffic which
cannot be easily characterised. When ABR traffic is sent it uses any available link ca-
pacity; if there is not enough link capacity for the traffic the ABR traffic is queued until
the buffers fill, in which case cells are dropped.
ATM has been popular with the Telecommunications Industry [21] for broadband
networks, where CBR and VBR traffic is used for dedicated (statically- and dynamically-
allocated) customer links (for voice and constant-rate data). As typical data is characterised
as bursty, it does not make sense to reserve bandwidth in a CBR or even VBR
capacity. ABR, which uses any available bandwidth on an ATM link, must be used to
avoid wasting capacity. There have not been many sites that have adopted ATM as a
local area network. The major factor prohibiting widespread adoption is the cost of
ATM switches and of ATM interface cards for individual nodes.
4.2 Other Notable Networking Technologies
High Performance Parallel Interface (HiPPI) is a point-to-point link that uses twisted-pair
copper cables to connect hosts via crossbar switches. HiPPI gets its name because
data is transmitted in parallel; the connecting cables have 50 cores, a number of which
are used to transmit data, one bit per line. The standard allows for transmission rates of 800MBit/s
and 1600MBit/s but the maximum length of copper cables is 25 metres. A serial version
of HiPPI is available, using fibre optic media, which allows a maximum distance of
10km between ends. HiPPI has been successfully used for networking supercomputer
systems, but now seems likely to be overtaken by other cheaper technologies.
Fibre Channel (FC), defined by the Fibre Channel Standard, is a circuit-switching
and also packet-switching technology that allows transmission at multiple rates. Data is
sent in frames of 2148 bytes (2048 bytes of data). FC is successfully used in interconnecting
hosts but is also likely to be overtaken by the economics of other technologies.
4.3 Internet or Native Technology Protocols
Many networking technologies have proprietary communications protocols, such as
ATM Adaptation Layers (AAL). While these proprietary protocols provide optimized
communication, there is usually a trade-off for code portability.
Using proprietary protocols can also be fraught with pitfalls. Because the majority of users
do not wish to make every optimisation to their code, or because users are working with
heterogeneous mixtures of networking hardware, the custom protocols are not widely
used. For example, experiments with ATM's AAL5 uncovered a bug in the implementation
that the manufacturer's engineers said could only be fixed by purchasing a much
faster machine [20]. We experimented with implementing message passing communications
software on raw ATM Adaptation layers and conclude that since this approach
has not been widely supported by ATM vendors, this will not be a feasible approach in
the long term. We conclude from our experiments that using a well-known, standard
protocol such as IP is the easiest, and most portable approach. Most manufacturers
provide for the encapsulation of IP packets within their proprietary protocols (such as
ATM's LAN Emulation). IP promises to be around a long time with the advent of IPv6,
which features IP tunneling, allowing standard IP traffic to be encapsulated within IPv6
packets[18].
In summary, we have found that while 10MBit/s desktop connections are still very
common, most current machines are now capable of effectively utilising a 100MBit/s
connection. For clusters of workstations with medium bandwidth requirements 100MBit/s
Fast Ethernet is a viable and affordable solution. If high bandwidth is required then a
trade-off between the cost and performance is necessary: SCI provides the best latency
but the measured bandwidth is nowhere near as high as Myrinet; Myrinet has the largest
bandwidth of all the systems we consider, and the latency is still under a millisecond. If
a larger latency can be tolerated, and bandwidth is not critical, then we believe the open
standards of Gigabit Ethernet may be an appropriate choice. We reiterate that we believe
the economics of large markets will ensure Gigabit Ethernet technology becomes
widespread for cluster computing.
5 Summary and Conclusions
In this article we have reviewed a number of fast networking technologies for cluster
computing and related our experiences with ATM in particular. Preliminary experiments
and experiences now being reported leads us to believe that the combination of
Fast Ethernet and Gigabit Ethernet will be the chosen route for most general purpose
Beowulf class cluster computing systems. We believe that, with the technical criteria
fairly finely balanced and only factor-of-two advantages for one technology over
another, the economics of the mass market will dictate which technology becomes
most widespread. This of course is a feedback phenomenon, as cheaper switches and
network cards will be adopted even more widely.
We believe ATM may still have a role to play in wide area cluster computing systems
but it is preferable that it be transparent to cluster users and cluster system soft-
ware. Internet protocol will surely be implemented on top of ATM and users and cluster
software will continue to interface to that.
We believe there are still many interesting developments to be made in software to
manage wide area clusters as well as high-performance clusters. It seems important
to recognise that while message passing level technology can play a useful part in high
performance systems, there is considerable research still to be done to address
the latency and reliability issues for wide area cluster systems.
6 Acknowledgements
Thanks to J.A.Mathew for assistance in carrying out some of the measurements reported
in this work. It is also a pleasure to thank all those who helped in conducting
trials of the EBN: K.J.Maciunas, D.Kirkham, S.Taylor, M.Rezny, M.Wilson and
M.Buchhorn.
--R
Available at http://www.
CASA Project.
A Comparison of High Speed LANs
The Grid: Blueprint for a New Computing In- frastructure
PVM: Parallel Virtual Machine A Users' guide and Tutorial for Networked Parallel Computing
Gigabit Ethernet Alliance.
IEEE Standards On-line
IEEE Standards On-Line
Geographic Information Systems Applications on an ATM-Based Distributed High Performance Computing System
DISCWorld: An Environment for Service-Based Metacomputing
High Performance Fortran Forum (HPFF).
Internet Engineering Task Force.
Scheduling in Metacomputing Systems.
Using ATM In Distributed Applications.
Telstra's Experimental Broadband Network.
A comparison of two Gigabit SAN/LAN technolo- gies: Scalable Coherent Interface and Myrinet
An Assessment of Gigabit Ethernet as Cluster Interconnect.
ATM Performance Characteristics on Distributed High Performance Computers.
Message Passing Interface Forum.
Applications and Enabling Technology for NYNET Upstate Corridor.
Object Management Group.
3Com Corporation.
--TR
PVM: Parallel virtual machine
The grid
DISCWorld
Myrinet
Geographic Information Systems Application on an ATM-based Distributed High Performance Computing System
Geostationary-satellite imagery applications on distributed, high-performance computing
Legion-a view from 50,000 feet
An Assessment of Gigabit Ethernet as Cluster Interconnect | DISCWorld;gigabit ethernet;ATM;fast ethernet;metacomputing;cluster computing |
604552 | Piecewise Self-Similar Solutions and a Numerical Scheme for Scalar Conservation Laws. | The solution of the Riemann problem was a building block for general Cauchy problems in conservation laws. A Cauchy problem is approximated by a series of Riemann problems in many numerical schemes. But, since the structure of the Riemann solution holds locally in time only, and, furthermore, a Riemann solution is not piecewise constant in general, there are several fundamental issues in this approach such as the stability and the complexity of computation.In this article we introduce a new approach which is based on piecewise self-similar solutions. The scheme proposed in this article solves the problem without the time marching process. The approximation error enters in the step for the initial discretization only, which is given as a similarity summation of base functions. The complexity of the scheme is linear. Convergence to the entropy solution and the error estimate are shown. The mechanism of the scheme is introduced in detail together with several interesting properties of the scheme. | Introduction
Self-similarity of the Cauchy problem for one dimensional conservation laws,
    u_t + f(u)_x = 0,   x \in R,  t > 0,                               (1.1)
with Riemann initial data
    u(x,0) = u_-  for x < 0,   u(x,0) = u_+  for x > 0,                (1.2)
has been the basis of various schemes devised for general initial value problems, Glimm
[9] and Godunov [10], for example. The self-similarity of the Riemann problem is the
property that the solution is a function of the self-similarity variable x/t. In other
words the solution is constant along the self-similarity lines
    x = \xi t,   \xi \in R.                                            (1.3)
The basic idea of the Godunov scheme for a general initial value problem is to approximate
the initial data by a piecewise constant function and then apply the self-similarity
structure to the series of Riemann problems.
There are two basic issues we have to consider immediately in this kind of ap-
proach. First, since the self-similarity for a piecewise constant solution holds locally
in time only, the structure of the Riemann problem can be applied for a small time
period. In other words the scheme is not free from the CFL condition and, hence, the
scheme can march just a little amount of time every time step and it costs computation
time. Furthermore, since rarefaction waves appear immediately, the solution
is not piecewise constant anymore. So a numerical scheme contains a process which
rearranges the rarefaction wave into a piecewise constant function every time step.
The numerical viscosity enters in this process and tracking down the behavior of the
scheme becomes extremely hard.
IMA, University of Minnesota, Minneapolis, MN 55455-0436 (yjkim@ima.umn.edu).
LeVeque [15] considers a large time step technique based on the Godunov method
for the genuinely nonlinear problem. In the scheme the CFL number may go beyond
1, and it is even possible to solve the propagation of a simple wave in a single step,
for the given final time T ? 0. However the scheme handles interactions
between waves incorrectly if the CFL number is so large.
One way to avoid the rearranging process is to consider a modified equation,
    u_t + h(u)_x = 0,   u(x,0) = u_0(x),                               (1.4)
where h and u_0 approximate f and v_0 respectively. Dafermos [6] considers a polygonal
approximation h of f, i.e., h is a continuous, piecewise linear function. In the case
the exact solution of (1.4) is piecewise constant. So the method does not require
a rearranging process and, hence, it does not introduce numerical viscosity and the
error is controlled by taking the polygonal approximation h. In this approach the
exact behavior of the solution can be monitored more closely and we may get a more
detailed understanding of the numerical scheme based on this approach. This idea
has been developed in Holden and Holden [11], and it has been extended to multi-dimensional
problems in Holden and Risebro [13] and to systems of conservation laws
in Holden, Lie and Risebro [12]. In particular we refer Bressan [2], [3] for systems.
This front tracking method has been developed, especially by the Norwegian School,
as a computational tool.
Lucier [17] approximates the actual flux f by a piecewise parabolic function h and
achieves a second order scheme. In the case the initial data v 0 (x) is approximated by a
piecewise linear function u 0 and the solution remains piecewise linear. The difference
between the solutions of the original problem (1.1) and the modified problem (1.4) is
estimated by
Since the linear approximation is of second order, he achieves a second order scheme
for a fixed time t ? 0.
If we want to design a numerical scheme which represents the exact solution, we
have to find a way to choose the grid points correctly. If they are simply fixed, it
is clear that the scheme can not represent the exact solution and, hence, we need to
rearrange the solution to fit the solution to the fixed grid points. So it is natural
to consider moving mesh method, see Miller [18]. In Lucier [17] the moving mesh
method is used to find the exact solution of (1.4), where mesh points move along
characteristics. Another option is not to use any grid point. In numerical schemes
based on the front tracking method we mentioned earlier grid points are used just for
the initial discretization. The scheme we consider in this article does not use any grid
points either.
This article has two goals. The first one is to introduce the mathematical idea
which is behind the piecewise self-similar solutions. The second one is to demonstrate
how to implement the idea into a numerical scheme and show properties of the scheme.
From the study of the Burgers equation [14] it is easily observed that the primary
structure which dominates the evolution is a saw-tooth profile. In fact it is a series of
N-waves and eventually the solution evolves to a single N-wave, see Liu and Pierre [16].
The starting point of our scheme is to use this structure as the unit of the scheme.
This scheme has several unique properties that other schemes based on piecewise
constant functions do not have.
Suppose that u(x,t) is a special solution of (1.1) which is a function of the self-similarity
variable x/t. Then the self-similarity profile (or the rarefaction wave), f'(u(x,t)) = x/t,
is easily derived from (1.1). It is natural to expect that characteristic
lines pass through the origin, i.e., they are compatible with self-similarity lines (1.3).
The piecewise self-similarity initial profile is considered in the sense that
Note that the time index t k can be a negative number. In this article we show that
the solution of (1.1) with piecewise self-similarity initial profile has such a structure
for all t ? 0, i.e.,
and give the explicit formula for this kind of solutions under several situations. First
we consider a convex flux with positive wave speed,
where f is locally Lipschitz continuous. The convexity of the flux, f''(u) >= 0, is to get
the explicit formula 'g' of the self-similarity profile such that f'(g(x)) = x, and the
self-similarity profile (1.7) can be written as u(x,t) = g(x/t).
Note that the equality is included for the second derivative of the flux in (H) and,
hence, the monotonicity of f' is not strict and g is not exactly the inverse function of
f', and g(f'(u)) != u in that case. In this approach we may include a piecewise linear
flux of the front tracking method, see Remark 6.4.
In section 3 we consider a piecewise self-similar solution which can be written as
a self-similarity summation (or simply S-summation),
of finite number of base functions. We give definitions for the S-summation and base
functions in the section and show that u(x;
(x) is the solution
of (1.1) with initial data u
Theorem 3.6. We consider u
as an approximation of the solution v with the original initial data v 0 . Then the L 1
contraction theory of conservation laws implies
It is the estimate corresponding to the error estimate (1.5), which does not have the
time dependent term anymore. It is natural to expect that the error increases in time
if the flux is changed. In our approach we use the original flux and the error decreases
in time. The convergence of the scheme is now clear (see Theorem 3.6, Corollary
3.7). Note that the self-similarity summation (1.9) represents only special kind of
piecewise self-similar profiles (1.6), which have positive indexes t k ? 0 and are ordered
appropriately, i.e., a_n <= b_n <= ... <= a_2 <= b_2 <= a_1 <= b_1.
The self-similarity summation is coded for a numerical scheme successfully in Section
4. This scheme has several unique properties. First it does not require a time
marching procedure. So the complexity of the scheme is of order O(N ), not O(N 2 ).
Second it captures the shock place very well even if small number of base functions
(or mesh points) are used, Figure 4.3. In Figure 4.5 it is clearly observed that the solution
with finer mesh always passes through bigger artificial shocks and this property
provides a uniform a posteriori error estimate of the numerical approximation. Since
it does not introduce numerical viscosity at all, we may get very good resolution for
an inviscid problem. Our scheme also distinguishes physical shocks and artificial ones
clearly. Table 4.1 shows the time when the physical shock appears.
In Section 5 we generalize the method. For a general convex flux case,
the method is applied through the transformations (5.1) and (5.3). If the flux has
inflection points, then the scheme becomes considerably complicate and it is beyond
the purpose of this article. But, if the flux has only one inflection point, for example,
then we can easily apply the scheme through a similar transformation (5.4). Dafermos
considers a flux with a single inflection point through generalized characteristics.
The flux of the Buckley-Leverett equation satisfies this condition. The flux
which appears in thin film flows (see Bertozzi, Munch and Shearer [1]), also
belongs to this category. Figure 5.3 shows the strength of our scheme over the upwind
scheme in this case.
The scheme is not good enough for the short time behavior t << 1 since the initial
error is not controlled efficiently. To resolve the situation we add an
extra structure to base functions in Section 6. Using these base functions we can
approximate the initial data with second order accuracy and still solve the exact
solution for the modified initial datum without the time discretization. Furthermore,
a general piecewise self-similarity profile (1.6) can be written in terms of self-similarity
summation of these modified base functions.
2. Self-similarity of conservation laws. Consider one dimensional scalar conservation laws,
    u_t + f(u)_x = 0,   u(x,0) = u_0(x),                               (2.1)
where the flux f is locally Lipschitz continuous. For a nonlinear flux f(u) the solution
may have a singularity and hence the solution is considered in the weak sense with
the entropy admissibility condition :
    (f(u^-) - f(\tilde u))/(u^- - \tilde u)  >=  \sigma  >=  (f(u^+) - f(\tilde u))/(u^+ - \tilde u)        (2.2)
for any number \tilde u lying between u^- = u(x-,t) and u^+ = u(x+,t), where \sigma is the speed of
the discontinuity at x. The self-similarity of a conservation law is from the fact that a rescaled function,
    u^\lambda(x,t) := u(\lambda x, \lambda t),   \lambda > 0,          (2.3)
is also the solution of (2.1) if and only if the initial profile u_0(x) satisfies
    u_0(\lambda x) = u_0(x)   for all \lambda > 0.                     (2.4)
It is clear that the Riemann initial condition,
    u_0(x) = u_-  for x < 0,   u_0(x) = u_+  for x > 0,                (2.5)
satisfies (2.4) and, hence, u(x,t) = u^\lambda(x,t) for all \lambda > 0, i.e., the solution is a function of
the self-similarity variable x/t.
The structure of a Riemann solution is given in Figure 2.1 together with characteristic
lines. Note that a self-similarity line x = \xi t, \xi \in R, is not a characteristic line
and the solution is constant along it. This is a special property of Riemann problem
and it is not expected in a general situation. If the solution is constant along a line,
it is natural to assume that the line is a characteristic line and it is the starting point
of our scheme.
(a) Characteristic lines (b) Self-similarity lines
Fig. 2.1. Self-similarity lines are different from characteristic lines; even so, the solution is constant along self-similarity lines.
If the total mass of the initial data u_0(x) is finite, \int_R |u_0(x)| dx < \infty,
then the relation (2.4) cannot be satisfied since the transformation u -> u^\lambda
does not preserve the total mass. So the solution cannot be a function of self-similarity
variable x=t. In the following we consider techniques to achieve the Riemann
solution like self-similarity for general Cauchy problems.
In the Godunov method the space is discretized into small intervals and the initial
function u 0 is approximated by a step function which takes the cell average over those
intervals. Then the problem can be considered as a sequence of Riemann problems
and the structure of a Riemann solution holds locally in time and space. The scheme
finds the cell average of the solution after a small amount of time using the self-similarity
structure of Riemann solutions. It is fair to say that this method is more
focused on the structure of the Riemann initial data which makes the self-similarity
of the problem rather than the self-similarity itself. As a result the method takes
cell-averages every time step and loses the accessibility to the exact solution.
In the front tracking method the nonlinear flux f(u) is approximated by a continuous
function hm (u) which is linear between points f k
example, and
then the initial datum is approximated by a piecewise constant function by taking
these values, not cell averages. Then every discontinuity propagates as an admissible
shock of the modified problem,
in the sense of entropy condition (2.2) until it may possibly collide to other shocks.
We may say that the self-similarity of the original problem (2.1) has been modified
to get it fit to the piecewise constant functions. In this approach the exact solution
of the modified problem is accessible and, hence, the method can be employed as an
analytical tool as well as a computational one.
Now we suggest a new approach which keeps the self-similarity globally in time.
Suppose that characteristic lines of the solution u(x,t) pass through the origin. Then
we have f'(u(x,t)) = x/t. Since the right hand side diverges as t -> 0, we consider the
initial datum as the profile at a given time t = t_0 > 0. The simplest case of an L^1 initial
datum of the kind is
    u_0(x) = g(x/t_0)  for 0 < x < s_0,   u_0(x) = 0  otherwise.       (2.9)
Characteristic lines of this initial profile are given in Figure 2.2. Non-vertical characteristics
pass through the point (0, -t_0) and there is a region in which characteristic
lines overlap with each other. The solution is given by finding the shock characteristic
x = s(t) correctly. In this case the shock characteristic is not a straight line
and the solution is not a function of x/(t + t_0) globally. Even so, the solution is a function
of x/(t + t_0) in the region 0 < x < s(t).
Since the shock speed s'(t) satisfies the Rankine-Hugoniot jump condition, the
shock place s(t) can be decided by its integral form. On the other hand, if the
convexity of the flux f is assumed, we may consider the self-similarity profile g
such that f'(g(x)) = x. In the case
    u(x,t) = g( x/(t+t_0) )   on (0, s(t)),                            (2.10)
and we can find the shock place s(t) easily from the equal area rule,
    \int_0^{s(t)} g( x/(t+t_0) ) dx = \int_0^{s_0} g( x/t_0 ) dx.      (2.11)
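As a concrete instance, for the Burgers flux f(u) = u^2/2 the profile g is the identity, and the equal area rule can be evaluated in closed form; the following worked computation covers only this special case:
    \int_0^{s(t)} x/(t+t_0) dx = s(t)^2 / (2(t+t_0)) = s_0^2 / (2 t_0) = m,
so that
    s(t) = \sqrt{ 2 m (t+t_0) },
i.e. the shock location grows like the square root of time while the profile behind it stays linear with slope 1/(t+t_0).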
Fig. 2.2. Characteristic lines of a self-similarity solution are similar to self-similarity lines.
The main difference is that the shock characteristic is not a straight line anymore.
Since the conservation law (2.1) does not depend on the x variable explicitly, we
may translate the initial data (2.9) in the x direction. We can also consider initial
data which consist of finite number of these structures. A simple case is
where centers c k and shock places s k satisfy
The time index t k in (2.12) decides the slope of the initial profile and they can be
chosen differently. Condition (2.13) implies that all profiles in (2.12) are separated.
If not, the simple summation in (2.12) breaks down the self-similarity structure we
want to keep. In Section 3 we consider a self-similarity summation which preserves it.
Figure 2.3 shows characteristic lines for initial data (2.12) with n = 4. In this case
tracking down a shock is more complicated and (2.11) is not enough for the purpose
since waves interact to each other.
Fig. 2.3. Shock characteristics of (2.12) interact together and make a bigger shock.
3. Piecewise self-similar solutions. In this section we give the definition of
the self-similarity summation and show that the exact solution of (2.1) is given as
an S-summation. Notations in this section are directly converted into a numerical
scheme in Section 4. In this section we consider a flux under the hypothesis,
In this case the self-similarity profile 'g' is the profile which satisfies f'(g(x)) = x. As
it is mentioned earlier, g is not exactly the inverse function of f' since the monotonicity
of f' is not strict. We also assume f'(0) = 0 in this section for our convenience, and
it implies that the solution is actually assumed to be positive under (H). The results
of this section are generalized in Section 5.
3.1. Base functions. As it is mentioned earlier, the self-similarity profile
represents the asymptotic behavior of the conservation law (1.1). The function,
    B_{t,c,s}(x) := g((x-c)/t)  for c < x < s,   B_{t,c,s}(x) := 0  otherwise,        (3.2)
serves as a base function in this article. A base function has the self-similarity profile
over the interval between the center 'c' and the shock place `s'. The area (or the mass)
enclosed by the x-axis and the base function is given by
    \int_c^s g( (x-c)/t ) dx =: m(t; c; s).                            (3.3)
It is convenient to consider the mass m as the fourth index of the base function, say
m;t;c;s (x), or any three of them as an index set. In any case we consider it under the
assumption that indexes m; t; c; s satisfy the relation (3.3). So if any three of them
are given, the fourth one is decided by the relation.
Consider a Cauchy problem,
    u_t + f(u)_x = 0,   u(x,0) = B_{m_0,t_0,c_0}(x).                   (3.4)
It is clear from (2.10) that the solution u(.,t) has the self-similarity profile with time
index t + t_0 between the original center c_0 and a new shock place s(t). Since the initial
total mass m_0 should be preserved, the solution of (3.4) is
    u(x,t) = B_{m_0, t_0+t, c_0}(x),                                   (3.5)
where the shock place s(t) is decided by the relation (3.3).
Remark 3.1. If we take a delta-function as the initial datum, for example
u_0(x) = m_0 \delta(x - c_0), the solution is given by u(x,t) = B_{m_0,t,c_0}(x). So the slope of the
base function represents the time of the evolution starting from the delta-function like
initial data, and that is why we take the index t for the base function.
Remark 3.2. For the Burgers case, f(u) = u^2/2, the self-similarity profile is
given as the identity function, g(x) = x. In the case (3.3) gives the following relations,
    m = (s - c)^2 / (2t),   i.e.,   s = c + \sqrt{2 m t}.              (3.6)
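These relations are simple enough to code directly; the following Python sketch (our naming, assuming the Burgers flux) evaluates a base function and its shock place:

    import math

    def shock_place(m, t, c):
        """Relation (3.6) for Burgers: m = (s - c)^2 / (2 t)."""
        return c + math.sqrt(2.0 * m * t)

    def base_value(x, m, t, c):
        """Evaluate the base function B_{m,t,c}(x) with g(x) = x."""
        s = shock_place(m, t, c)
        return (x - c) / t if c < x < s else 0.0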
Remark 3.3. The rescaling (2.3) does not preserve the total mass. So it can
not measure the invariance property for L 1 solutions of conservation laws. For the
Burgers equation we consider
where the rescaling preserves the total mass. We can easily check that variables
are invariant under the rescaling after the translation t. These
variables are called self-similarity variables for L 1 Cauchy problems, and the Burgers
equation is transformed to
We can easily check that Bm0 ;t 0 =1;c0=0 (i) is an admissible steady state of the equation
and hence w(i; is the solution of (3.9). If we transform the
variables back to u; t; x, then we get u(x; This is another way
to show (3.5). In this example we can see that the approach with piecewise self-similar
solutions captures the self-similarity of the general Cauchy problems exactly. For a
detailed study for the transformed problem (3.9) we refer [14].
3.2. Self-similarity Summation. Since the solution of (3.4) is given by (3.5),
we can easily guess that
is the solution of the conservation law with initial data
if all the supports of the base functions in (3.10) are disjoint. But it is not usually
the case since the support of a base function expands in time. The self-similarity
summation(or simply S-summation),
is to handle the case that supports of base functions overlap with each other. The
definition is given inductively in the following.
(x). Suppose that
(x) is well
defined and supp(B
c
. Suppose there
exists a point j ? c j such that
Under the assumption of (3.13), the left hand side of (3.14) is monotone in j and,
hence, such a point is unique. If there is no such a point we say that the S-summation
(3.12) is not defined. If there exists such a point
ae g
Base functions are ordered by centers c k and then the S-summation is given from
the right hand side. It is because of the positiveness assumption for the wave speed,
(H). If the order of the summation is changed, the result is different. So
the S-summation is not associative.
Remark 3.4. If the time indexes are identical, t then we can
show the S-summation (3.12) is well defined. If then, since the self-similarity profile g
is an increasing function, we have g
has values of g
(3.15), the inequality (3.13) is
satisfied for all j 2 R. Furthermore the left hand side of (3.14) has value
for and diverges to 1 as j !1. So there exists a point j satisfying (3.14)
and the S-summation is well defined.
Remark 3.5. We may consider j as the j-th shock generated by the base
function Bm j
. Suppose that j i.e., the j-th shock caught up the (j-1)-th
shock. The definition (3.15) implies that the self-similarity profile g
disappears. We can easily check that we will get the same S-summation (3.15) if we
remove the (j-1)-th base function and modify m j by adding . This property
represents the irreversibility of the conservation laws.
Theorem 3.6. Suppose that the flux f(u) satisfies Hypothesis (H). If the self-similarity
summation u_0(x) = \oplus_{k=1}^{n} B_{m_k,t_k,c_k}(x) is well defined, then
u(x,t) = \oplus_{k=1}^{n} B_{m_k,t_k+t,c_k}(x) is also well defined and it is the solution of (1.1) with initial
data u_0. If v(x,t) is the entropy solution of (1.1) with initial data v_0, then
    || u(.,t) - v(.,t) ||_{L^1(R)} <= || u_0 - v_0 ||_{L^1(R)}.        (3.16)
Proof. The proof is completed through inductive arguments. Suppose that
+t;ck (x) is the solution with the initial condition
(x). It is assumed that u j (x;
(x) is well defined
and we let u j (x; t) be the solution of (1.1) with this initial data. Let
the shock characteristic given by the j-th base function, i.e., j for the j in
(3.13,3.14). If x ? j (t), then it is clear that u j (x; since characteristics
on the right hand side of do not interact with it because f 0 (u) 0.
since the vertical characteristics starting in the region x
do not touch shock characteristics moving to the right hand side. The characteristic
passing through a point (x; is a straight line connecting
and, hence, u
. Since the total mass is preserved, the shock
place should satisfy
+t;ck (x) from the definition of the S-summation and the first
part of the proof is completed. The second part (3.16) is simply the L 1 contraction
theory for conservation laws.
In the proof we employ the theory of characteristics (see [8], ch. 11). The error
estimate (3.16) implies that the initial error decreases in time, and the solution with
modified initial data is obtained exactly in a single step for any given time t ? 0. The
scheme has ideal properties for the study of asymptotic behavior.
Now we consider u
(x) as an approximation of L 1 initial
data v 0 . Let a partition be the set of centers. Its norm is defined
by j. There can be many ways to discretize the initial data. To
guarantee the convergence of the scheme, we need the existence of ffi; L ? 0 such that
where a constant " ? 0 is given. An example of such a discretization is given in
Section 4.1. The convergence of the scheme satisfying (3.17) is clear from (3.16).
Corollary 3.7. (Convergence) The scheme of the self-similarity summation
u(x,t) = \oplus_k B_{m_k,t_k+t,c_k}(x) with initial discretization u_0 = \oplus_k B_{m_k,t_k,c_k}
satisfying (3.17) converges to the entropy solution v(x,t) with initial data v_0
as \epsilon -> 0.
Remark 3.8. Now we consider the S-summation between two base functions,
Figure
3.1. It gives a good example to figure out
the meaning of the S-summation. Furthermore, in the numerical computation, we
can possibly compare only two base functions each time and, hence, it is worth to
consider it in detail. If these two base functions are separated, s_2 <= c_1, then the shock
place of the definition (3.15) is simply \xi = s_2. The case s_2 > c_1 implies that the two
base functions are merged, i.e., their supports overlap, and the shock place \xi is given by
an integral relation (3.18). Suppose that s_1 is far away and the shock place \xi is guaranteed
to be between s_2 and s_1. Then (3.18) can be written as
    \int_{c_1}^{\xi} g( (x-c_1)/t ) dx = \int_{s_2}^{\xi} g( (x-c_2)/t ) dx.        (3.19)
The solution \xi of (3.19) has a special meaning in the coding. We define \oplus as an
operator between two base functions, B_{m_2,t,c_2,s_2} \oplus B_{m_1,t,c_1,s_1} := \xi. Note that in the
definition of the operator we do not use the information m_1 at all. We just assume it
is big enough and, hence, \xi < s_1. This operator is used in Section 4 to check if two
adjacent base functions are merged or not.
For the Burgers case, (3.19) implies that, in Figure 3.1, the trapezoid
has the same area as the triangle A c_1, and the relation (3.19) can be written as
an algebraic relation,
    (\xi - c_2)^2 - (\xi - c_1)^2 = (s_2 - c_2)^2.                     (3.20)
The operator \oplus between two base functions is now given explicitly,
    B_{m_2,t,c_2,s_2} \oplus B_{m_1,t,c_1,s_1} = (c_1^2 + s_2^2 - 2 c_2 s_2) / (2(c_1 - c_2)),        (3.21)
which is the solution of the algebraic relation (3.20).
We consider this operator in Section 4.1 again.
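For later reference, the Burgers form (3.21) of the operator is a one-line function; the sketch below (our naming) returns the shock place xi produced when the base function centred at c_2 with shock s_2 runs into the one centred at c_1, ignoring m_1 as in the definition above:

    def oplus(c2, s2, c1):
        """Burgers version of the operator (3.21): shock place xi of
        B_{m2,t,c2,s2} (+) B_{m1,t,c1,s1}, assuming s1 is far to the right."""
        return (c1 * c1 + s2 * s2 - 2.0 * c2 * s2) / (2.0 * (c1 - c2))

For example, with c2 = 0, s2 = 2 and c1 = 1 the returned shock place is 2.5, which indeed lies between s2 and a distant s1.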
Fig. 3.1. The equal area rule gives the shock place when two base functions interact together.
4. Coding Strategy. In this section we show how the self-similarity summation
can be implemented into a numerical scheme. To see what is really happening in each
step it is helpful to consider a specific example. For that purpose we consider the
Burgers equation, u_t + (u^2/2)_x = 0, with the initial datum given in (4.1).
The result of the scheme is compared with the Godunov scheme.
(a) The Equal Area Rule (b) Base Functions with overlaps
Fig. 4.1. Initial data are approximated by a piecewise self-similarity profile. It turns out to be an S-summation of base functions.
4.1. Implementation. Here we introduce a grid-less scheme based on the self-similarity summation.
Step 1. (Initial discretization) The first step is to design a method to approximate
the initial datum v 0 (x) by a self-similarity summation u 0 (x) which satisfies
(3.17). Consider n base functions B[k]; Each element B[k] consists
of two members B[k]:m; B[k]:c, which represent the area and the center of the base
function. We use identical time index t hence, we do not need a
member for the time index.
0 be a cell average approximation of v 0 with steps of mesh size
profiles with time index t 0 ? 0 which pass through
the left end points of the constant parts of the step function v "
Figure
4.1 (a). Let
B[k]:c be the x-intercept of the k-th self-similarity profile from the right hand side
and B[k]:m be the area enclosed by x-axis,
, and the k-th and the
profiles. This discretization is well defined only if B[n]:c ! ::: ! B[1]:c. To
achieve it a small initial time index t 0 should be chosen depending on the initial data.
Since the initial self-similarity profile of the example (4.1) is a line with the slope
1=t 0 and the slope of the initial data is bounded by v x (x; we have to take t 0 ! 1.
In
Figure
4.1 the initial data in (4.1) has been discretized using 10 base functions,
base functions (b) have
some overlaps and the self-similarity summation (a) has a saw-tooth profile. The size
of the triangle like areas added and subtracted by the approximation is proportional
to " 2 and the total number them is proportional to 1=". So we have jjv
Theorem 3.6 says u(x;
is the solution with the
modified initial data u 0 . So the rest of the scheme is focused on how to display the
given solution. Even if it is possible to follow the inductive arguments of the definition,
we will get serious complexity in the coding if behind shocks capture the front ones,
i.e., . In the case the S-summation is not changed even if two base functions
are merged, Remark 3.5, and hence we do the merging process first. From now on
the corresponding time index is t for each k.
Step 2. (Merging) The operator ' ' between two base functions defined by (3.19)
for the general case or by (3.21) for the Burgers case plays the key role here. Suppose
that there is no contact between shocks for k
Then we can easily check that the k-th shock in (3.15) is given by
1. Suppose that
In the case j 6= B[j] B[j \Gamma 1] in general. Even though it implies j ? j \Gamma1 and the
self-similarity profile of the (j-1)-th base function B[j \Gamma 1] disappears, Remark 3.5. In
the case these two base functions B[j] and B[j \Gamma 1] can be combined, i.e., put
remove then rearrange the array B[\Delta] from
is the number of base functions left after the previous step. Since the combined base
function may take over another one again, we decrease the index j if j 6= 2. If (4.2)
does not hold, we increase index j. We continue this procedure from
Note that there is no base function B[0] and we use B[1] B[0] := B[1]:s in (4.2) for
given by the relation (3.3).(Step 2 is complete.)
If there are no base functions merged together, there will be
of (4.2). If m base functions are merged, then base functions are left and the
maximum number of the comparison (4.2) is n In Figure 4.2 (b) base
functions at time are displayed after the merging process. There were 50 base
functions initially (a) and 38 of them are left after the merging step. It means that
small base functions has been merged together and made a big base function. The
big base function can be considered as an accumulation of small artificial shocks in
some sense and it represents the physical shock.
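A compact sketch of Steps 1 and 2 for the Burgers case is given below. It is our reading of the procedure just described, with a deliberately simplified initial discretization and with condition (4.2) interpreted as a comparison of consecutive shock places; names and details are ours, not the author's code:

    import math

    def discretize(v0, a, b, n, t0):
        """Step 1 (a simplified variant of the construction in the text):
        split [a, b] into n cells, give each cell's mass to one base function
        and put its center where a line of slope 1/t0 through the left cell
        value meets the x-axis.  Assumes v0 >= 0.  B[0] is the right-most
        base function, i.e. B[1] in the paper's numbering."""
        eps = (b - a) / n
        B = []
        for i in reversed(range(n)):
            xl = a + i * eps
            mass = max(v0(xl + 0.5 * eps) * eps, 1e-12)   # midpoint-rule cell mass
            B.append({"m": mass, "c": xl - t0 * v0(xl)})
        return B

    def oplus(c2, s2, c1):
        """Burgers form (3.21) of the operator between two base functions."""
        return (c1 * c1 + s2 * s2 - 2.0 * c2 * s2) / (2.0 * (c1 - c2))

    def shock(B, k, t):
        """Shock place produced by B[k] at the common time index t = t0 + T."""
        s = B[k]["c"] + math.sqrt(2.0 * B[k]["m"] * t)    # own shock, relation (3.6)
        return s if k == 0 else oplus(B[k]["c"], s, B[k - 1]["c"])

    def merge(B, t):
        """Step 2: combine a base function with the one in front of it whenever
        its shock would overtake the shock in front (our reading of (4.2))."""
        j = 1
        while j < len(B):
            if shock(B, j, t) >= shock(B, j - 1, t):
                B[j]["m"] += B[j - 1]["m"]                # Remark 3.5: add the masses
                del B[j - 1]                              # the front base disappears
                j = max(j - 1, 1)
            else:
                j += 1
        return B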
(a) 50 initial base functions (b) 38 base functions left
Fig. 4.2. The initial base function with slope 1/t_0 has slope 1/(t_0 + t) at time t without area change. After the merging process, Step 2, some of the base functions are merged together and make a big base function which represents a physical shock.
Now we are ready to display the solution. Suppose that base functions B[j];
are left after the merging step. Let Then the right and
the left hand side limits are given by,
So to display the solution it is enough to plot the points
Between these point the solution has the self-similarity profile. So if
we connect these points with self-similarity profile with time index
B[j]:c, we get the solution. In Figure 4.3 solutions are displayed using different number
of base functions. We can clearly see that the solution converges as the number of
base functions is increased.
(a) Initial Discretization (b) Solutions at
Fig. 4.3. Three S-summations using 10,40 and 160 base functions. The solution finds the
shock correctly even if a very rough initial discretization is given. A solution with finer mesh passes
through artificial shocks.
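Displaying the solution then reduces to locating the shock places and drawing the linear profiles between them. The following self-contained sketch (Burgers case, our naming) evaluates the merged configuration at a point x and can be used to reproduce curves in the spirit of Figure 4.3:

    import math

    def evaluate(B, t, x):
        """Value of the merged S-summation at x for time index t (Burgers case).
        B is the list produced by the merging step, right-most base first."""
        shocks = []
        for j, b in enumerate(B):
            s = b["c"] + math.sqrt(2.0 * b["m"] * t)          # own shock, (3.6)
            if j == 0:
                shocks.append(s)
            else:
                c1, c2 = B[j - 1]["c"], b["c"]                # front center, own center
                shocks.append((c1 * c1 + s * s - 2.0 * c2 * s) / (2.0 * (c1 - c2)))
        for j, b in enumerate(B):
            left = b["c"] if j == len(B) - 1 else max(b["c"], shocks[j + 1])
            if left < x < shocks[j]:
                return (x - b["c"]) / t                       # linear rarefaction profile
        return 0.0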
4.2. Comparison with Godunov. A typical way to discretize the initial data is
taking the cell average, Figure 4.4 (a). The Godunov scheme solves Riemann problems
between each cells for a short amount of time \Deltat and then repeat the process until it
reaches a given time t ? 0. In Figure 4.4 (b) we can see that the numerical solution
converges to the same limit as the S-summation, Figure 4.3 (b), as \Delta x -> 0.
(a) Data Discretization (b) Solutions at
Fig. 4.4. Three approximations by Godunov using 1=160. The scheme is
convergent to the same limit of the S-summation. We can observe that numerical solutions are
separated near the shock and it is hard to guess where the limit is from a single computation.
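For a comparison of this kind a textbook first-order Godunov scheme for the Burgers equation suffices; the sketch below uses the standard Godunov numerical flux for a convex flux with minimum at zero and an illustrative CFL number, and is not the code used for the figures:

    def godunov_burgers(u, dx, t_final, cfl=0.9):
        """March the cell averages u forward to t_final with the Godunov flux
        F(ul, ur) = max( f(max(ul, 0)), f(min(ur, 0)) ), f(u) = u^2 / 2.
        Boundary cells are left untouched (data compactly supported inside)."""
        f = lambda v: 0.5 * v * v
        t = 0.0
        u = list(u)
        while t < t_final:
            smax = max(1e-12, max(abs(v) for v in u))
            dt = min(cfl * dx / smax, t_final - t)
            flux = [max(f(max(u[i], 0.0)), f(min(u[i + 1], 0.0)))
                    for i in range(len(u) - 1)]
            for i in range(1, len(u) - 1):
                u[i] -= dt / dx * (flux[i] - flux[i - 1])
            t += dt
        return u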
Remark 4.1. (Computation time) Let N be the number of mesh points. Then
the number of operations for the S-summation is of order N since the time marching
process is not required, Theorem 3.6. The number of operations is almost independent of the final time t > 0. On the other hand the Godunov scheme requires on the order of N² operations, and the situation becomes worse as the final time t is increased.
Fig. 4.5. A magnification of Figure 4.3 (b) near the physical shock shows that self-similarity solutions with finer mesh pass through the middle of artificial shocks.
Remark 4.2. (Error estimate) We can clearly see that the exact solution v (the limit of the S-summation) always passes through the artificial shocks of the self-similarity solutions, Figure 4.3 (b). This property makes it possible to get a uniform a posteriori error estimate. Figure 4.5 is a magnification of Figure 4.3 (b). There are a couple of other things we can observe here. First, the sizes of the artificial shocks decrease in time with order O(1/(t_0 + t)). We can also observe that, even if we use a small number of base functions, we can capture the physical shock very closely.
Remark 4.3. (Shock Appearance Time) In a numerical scheme the solution is approximated by piecewise continuous functions and hence it is hard to see whether a discontinuity represents the physical shock or not. In our scheme, as we can see from Figure 4.2, the accumulation of base functions represents the physical shock. So if a base function is merged with the one behind it in the sense of (4.2), we may conclude that a physical shock has appeared. The physical shock appears at time t = 1 in the example since min(∂_x v_0) = −1, and we can easily check that (4.2) indeed happens around that time. Table 4.1 shows the time when the number of initial base functions decreases.

Table 4.1. Shock appearance time. The exact solution with initial data (4.1) blows up at t = 1. The time of shock appearance can be measured by counting the base functions after the merging step.

  initial number of base functions | time when the number is decreased by 2
  200                              | T = 1.0015
5. General cases. The self-similarity summation has been considered under
Hypothesis (H). In this section we generalize it under Hypothesis (H1) and (H2).
5.1. General convex flux. We consider L 1 initial function u 0 which is uniformly
bounded, say \GammaA u 0 (x) B. Then the solution of (2.1) is always bounded,
Consider a convex flux,
If the flux satisfies f 00 (u) 0, we may change the variable get an equation
f satisfies (H1). Note that we include the
equality in (H1) and a piecewise linear flux can be considered.
We can easily check that a new flux,
satisfies the hypothesis (H) and h 0 be the solution of
We can easily check that
is the solution with the original flux f and initial data u_0. Since u ≥ −A, the solution v(x, t) is positive. Now we are in exactly the same situation as in the previous sections except for the structure of the initial data. The initial data v(·, 0) is not L¹ anymore. To
handle the situation we consider two special base functions with infinite mass,
These base functions handles the transformation u A. Note that the
speed of the shock connecting the state our case.
The Self-similarity summation including these two base functions can be defined in
a similar way. We omit the detail. Figure 5.1 shows how the self-similar solution
evolves for the Burgers case. In the figure even the solution with very rough initial
discretization with only 16 base functions represents the asymptotic behavior very
correctly.
Fig. 5.1. (a) Data discretization; (b) solutions at a later time. Three S-summations are displayed using 16, 64 and 256 base functions. The method handles sign-changing solutions correctly. This figure shows the convergence in time to an inviscid N-wave.
5.2. Flux without convexity. Consider a flux with a single inflection point,
Then, under the change of variables,
the problem (2.1) is transformed to
Then the new flux h satisfies
and since −A is not the lower bound of the solution u(x, t) in general, we cannot expect v ≥ 0. So in this case we have to consider the positive part and the negative part together. This is possible since h'(u) is monotone on (−∞, 0) and (0, ∞) respectively. All we have to do is to consider negative base functions together with the positive ones. Since the wave speed h'(u) is positive, the self-similarity summation is defined from the right hand side as in the previous cases.
Example 5.1. Consider an inviscid thin film flow in [1],
where the initial datum is compactly supported supp(u 0
has a single inflection point under the transformation (5.4), we
get the flux It satisfies
which is not exactly same as (5.5) but has the opposite direction in the inequalities.
We do the self-similarity summation from the left hand side instead of changing the
space variable using \Gammax. Now the original problem (5.6) is transformed into
In this case the self-similarity profile (2.8) is given by,
and the corresponding base functions are,
ae
The initial data v_0(x) converges to −A as x → ±∞ and we need to consider two base functions with infinite mass. Note again that, in our example (5.6), the infinite state is −1/3 and the shock speed is
Numerical solutions of (5.6) with initial data,
are shown in Figure 5.2. The first picture shows the initial data and the self-similarity summation using 200 base functions. A part of it has been magnified together with numerical approximations of the upwind scheme in the second picture. We can clearly see that the solution of the upwind scheme converges to the self-similarity summation. This example shows that the self-similarity summation gives a very accurate resolution using a small number of mesh points. Furthermore, since it gives the solution without a time marching procedure, the computational time is much smaller.
5.3. Flux with space dependence. Since the self-similarity of the problem (2.1) depends on the fact that the flux depends on the solution only, we have no clue how to generalize our scheme to a problem with a general space-dependent flux. Even though, if the space dependence is given by a factor a(x), the equation is transformed to one of the previous form under the change of variable y = ∫^x 1/a(s) ds, and our scheme can be applied.
Fig. 5.2. (a) Initial data and S-summation at t = 6; (b) comparison with upwind. The flux is that of Example 5.1. Picture (a) shows the initial data and the self-similarity summation; picture (b) shows that the upwind scheme converges to the self-similarity summation. 200 base functions are used in the S-summation and 800 and 4,000 meshes are used in the upwind scheme.
Since the self-similarity of hyperbolic conservation laws is the one-dimensional
property, it should be possible to expand the scheme to multi-dimension problems.
Consider a 2-dimensional problem,
with a velocity vector field satisfying
Cvetkovic and Dagans [5] suggest space variables y 1
dy 1
dj
which transform (5.14) to
Problem (5.16) can be considered as a set of one-dimensional problems and, hence,
the complexity of the scheme for it is of order O(N 2 ). Since the transformation (5.15)
also has the complexity of O(N 2 ), we eventually get a scheme of O(N 2 ) for a two-dimensional
problem. In this approach each channel of the velocity vector field is
considered separately and, hence, it seems useful to channel problems.
6. Second order approximation. The scheme introduced in the previous sections
solves the problem exactly with modified initial data, and the size of the initial error decreases in time. However, the scheme is not good enough for the short-time behavior, since the error generated by the initial discretization can be huge. Here we add an extra structure to the base functions and make the initial data discretization second order. In this way we can handle general self-similarity solutions (1.7).
6.1. Modified base functions. The base function considered in the previous
sections has three indexes, say m; t; c. In this section we introduce two more indexes,
h and t. Note that there are two time indexes t and
which play different roles. We
assume 1. For the simplicity we consider under the
hypothesis (H). It can be easily generalized as we did in Section 5.
To figure out the structure of the new base function B h; t
m;t;c (x), we consider
and
Let g be the self-similarity profile, f 0 As an intermediate step we define
t;c (x) first. For
defined by
and, for it is defined by
The constant c is the center of the top self-similarity profile with time index t
and the constant x is the x-coordinate of the intersection point between two self-similarity
profiles with index t an t. We can easily see from (6.2) that c ! x for t ? 0
and
t;c (x) is well defined for since the
corresponding domain is empty. For
Now we introduce the index m ? 0 which decides the support of the base function.
c be the solution of,
Z
c
For it always has a solution. For t 0 it has a solution only
R c
t;c (x)dx.
The base function is now defined by
t;c
The self-similarity summation among these base functions can be similarly defined
using the profile g
\Delta in the domain c ! x ! x and the profile g
for x ! x. We omit the detail. We may consider the base function (3.2) as a special
case of (6.7) with
6.2. Initial discretization and the exact solution. Suppose the initial function
be a partition of the interval [A; B]. We can approximate v 0 with
self-similarity profiles over interval
which is second
order. For the Burgers case it is simply a piecewise linear approximation. The
approximation u 0 can be written as
Initially the supports of base functions
are disjoint and, hence, the self-similarity summation is the usual summation. The
exact solution of the conservation law with initial data (6.8) is
We still consider the exact solution and the contraction theory implies
Remark 6.1. The initial discretization (6.8) is trivial in comparison with Step 1 in Section 4.1. This is an additional advantage when the modified base function is used in a numerical scheme. However, this additional structure may cause extra complexity when it is used as an analytical tool.
Remark 6.2. (Piecewise Constant Data) In many cases initial data are given as piecewise constant functions from the beginning. In that case an initial datum can be considered as a summation of base functions. In Figure 6.1 we consider the Burgers case (4.1) using base functions B^{h,1}_{m,t,c}(x). We can clearly see that these approximations represent the shock location very well. Unlike the previous case, the solution with finer mesh always passes through the constant parts of coarser ones.
Fig. 6.1. (a) Data discretization; (b) solutions at a later time. The S-summation for the modified base functions (6.7) yields a piecewise constant, piecewise self-similar solution. In the figure 3 summations are displayed together using different numbers of base functions. We can observe that the finer one always passes through the constant parts.
Remark 6.3. (Singular Initial Data) If singular initial data are given, then extra
mesh points are usually introduced to capture the effect of the singularity of the data.
But, since our method handles initial data individually, extra mesh points are not
needed. In Figure 6.2 the Burgers equation is solved with singular initial data (a).
We use 6 modified base functions with
Remark 6.4. (Front Tracking) It is possible to consider the front tracking
method in terms of the self-similarity summation. Consider an L 1 solution of the
Burgers equation bounded by 0 u(x; t) 1. Let h(u) be the polygonal approximation
of the flux with the partition f0; 1=n; :::; 1g. So h 0 (u) is a
step function,
Fig. 6.2. (a) Singular initial data; (b) solutions at a later time. The scheme does not require extra meshes to handle the singular initial data (a). In the S-summation every datum is handled exactly by a base function. Only 6 base functions solve this example.
and the self-similarity profile g(x) is also a step function,
So the values of g(x) are the breaking points of the flux h(u). We can approximate
the given initial data v 0 by taking a cell average, not just breaking points. Then the
initial discretization u 0 can be written in a from of (6.8) with
1. This is a
simplified version of the front tracking method under Hypothesis (H).
7. Conclusion. The basic idea of the method introduced in this article is to approximate the solution of a conservation law by a self-similarity summation of base functions. In that approach we get the exact solution within the class of piecewise self-similar functions. This method can be easily converted into a numerical scheme, and the complexity of the scheme is of order N, not N², since no time marching procedure is needed. Convergence of the scheme is now a trivial matter, Theorem 3.6 and Corollary 3.7. The method can also be used as an analytical tool; in fact the author is preparing an article on the asymptotic behavior of scalar conservation laws based on this method. Various issues appear when we apply this idea to other cases, such as systems or convection-diffusion equations. The author does not have a good understanding of these cases yet.
Acknowledgement
The author would like to thank Professor A. E. Tzavaras, who gave the author the motivation and valuable remarks for this work. The author also would like to thank the people at the IMA for all the discussions and support.
--R
On the partial difference equations of mathematical physics
Polygonal approximations of solutions of the initial value problem for a conservation law
Regularity and large time behaviour of solutions of a conservation law without convexity
Hyperbolic conservation laws in continuum physics
Solutions in the large for nonlinear hyperbolic systems of equations
A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics
On scalar conservation laws in one dimension.
An unconditionally stable method for the Euler equations
A method of fractional steps for scalar conservation laws without the CFL condition
Diffusive N-waves and Metastability in Burgers equation
Large time step shock-capturing techniques for scalar conservation laws
A moving mesh numerical method for hyperbolic conservation laws
--TR | characteristics;gridless scheme;front tracking;self-similarity |
604557 | Asymptotic Size Ramsey Results for Bipartite Graphs. | We show that $\lim_{n\to\infty}\hat r(F_{1,n},\dots,F_{q,n},F_{q+1},\dots,F_{r})/n$ exists, where the bipartite graphs $F_{q+1},\dots,F_r$ do not depend on $n$ while, for $1\le i\le q$, $F_{i,n}$ is obtained from some bipartite graph $F_i$ with parts $V_1\cup V_2=V(F_i)$ by duplicating each vertex $v\in V_2$ $(c_v+o(1))n$ times for some real $c_v>0$.In fact, the limit is the minimum of a certain mixed integer program. Using the Farkas lemma we show how to compute it when each forbidden graph is a complete bipartite graph, in particular answering the question of Erdos, Faudree, Rousseau, and Schelp [Period.\ Math.\ Hungar., 9 (1978), pp. 145--161], who asked for the asymptotics of $\hat r(K_{s,n},K_{s,n})$ for fixed $s$ and large $n$. Also, we prove (for all sufficiently large $n$) the conjecture of Faudree, Rousseau, and Sheehan in [Graph Theory and Combinatorics, B. Bollobas, ed., Cambridge University Press, Cambridge, UK, 1984, pp. 273--281] that $\hat r(K_{2,n},K_{2,n}) =18n-15$. | Introduction
1. Introduction. Let (F_1, ..., F_r) be an r-tuple of graphs which are called forbidden. We say that a graph G arrows (F_1, ..., F_r) if, for any r-colouring of E(G), the edge set of G, there is a copy of F_i of colour i for some i ∈ [r] := {1, ..., r}. We denote this arrowing property by G → (F_1, ..., F_r). The (ordinary) Ramsey number asks for the minimum order of such G. Here, however, we deal exclusively with the size Ramsey number ^r(F_1, ..., F_r), which is the smallest number of edges that an arrowing graph can have.
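To make the definition concrete, the following brute-force Python sketch decides whether a small graph G arrows a pair (F_1, F_2) by enumerating all 2-colourings of E(G); the edge-list encoding and the helper names are our own choices, and the search is exponential, so it only serves to illustrate the definition.

```python
from itertools import product, permutations

def contains(edges, vertices, F_edges, F_vertices):
    """True if the graph (vertices, edges) contains a copy of F as a subgraph."""
    G = {frozenset(e) for e in edges}
    for img in permutations(vertices, len(F_vertices)):
        phi = dict(zip(F_vertices, img))
        if all(frozenset((phi[u], phi[v])) in G for u, v in F_edges):
            return True
    return False

def arrows(G_edges, F1, F2):
    """Check G -> (F1, F2): every red/blue colouring of E(G) yields a red F1 or a blue F2."""
    V = sorted({v for e in G_edges for v in e})
    V1 = sorted({v for e in F1 for v in e})
    V2 = sorted({v for e in F2 for v in e})
    for colouring in product((0, 1), repeat=len(G_edges)):
        red = [e for e, c in zip(G_edges, colouring) if c == 0]
        blue = [e for e, c in zip(G_edges, colouring) if c == 1]
        if not (contains(red, V, F1, V1) or contains(blue, V, F2, V2)):
            return False
    return True

# toy example: K_3 arrows (K_{1,2}, K_{1,2})
K3 = [(0, 1), (1, 2), (0, 2)]
star = [(0, 1), (0, 2)]            # K_{1,2}
print(arrows(K3, star, star))      # True
```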
Size Ramsey numbers seem hard to compute, even for simple forbidden graphs. For example, the old conjecture of Erdős [6] on ^r(K_{1,n}, K_3) has recently been disproved in [16], where the asymptotics of ^r(K_{1,n}, F) is determined for any fixed 3-chromatic graph F. (Here, K_{m,n} is the complete bipartite graph with parts of sizes m and n; K_n is the complete graph of order n.)
This research initiated as an attempt to find the asymptotics of ^r(K_{1,n}, F) for a fixed graph F. The case of non-bipartite F is treated in [17] (and [16] deals with 3-chromatic F). What can be said if F is a bipartite graph?
Faudree, Rousseau and Sheehan [11] proved an exact formula for every m ≥ 9 if n is sufficiently large (depending on m) and stated that their method also determines ^r(K_{1,n}, K_{2,2}). They also observed that K_{s,2n} arrows (K_{1,n}, C_{2s}), where C_{2s} is a cycle of order 2s; hence ^r(K_{1,n}, C_{2s}) ≤ 2sn.
Let P s be the path with s vertices. Lortz and Mengersen [14] showed that
conjectured that this is sharp for any s ≥ 4 provided n is sufficiently large. The conjecture was proved for 4 ≤ s ≤ 7 in [14].
(Footnotes: Supported by a Research Fellowship, St. John's College, Cambridge. Part of this research was carried out during the author's stay at the Humboldt University, Berlin, sponsored by the German Academic Exchange Service (DAAD). Author's address: DPMMS, Centre for Mathematical Sciences, Cambridge University, Cambridge CB3 0WB, England, O.Pikhurko@dpmms.cam.ac.uk.)
Size Ramsey numbers ^
(and in some
papers F 1 is a small star) are also studied in [9, 5, 2, 3, 8, 10, 13, 12] for example.
It is not hard to see that, for xed s
This follows, for example, by assuming that s
considering K v1
s
e. The latter graph
has the required arrowing property. Indeed, for any r-colouring, each vertex of V_2 is incident to at least s edges of the same colour; hence there are at least v_2 monochromatic K_{s,1}-subgraphs, and some s-element subset S of V_1 appears in at least rtn such subgraphs, of which at least tn have the same colour.
Here we will show that the limit of ^r/n exists if each forbidden graph is either a fixed bipartite graph or a subgraph of K_{s,⌊tn⌋} which 'dilates' uniformly with n (the precise definition will be given in Section 2). In particular, ^r/n tends to a limit for any fixed bipartite graph F.
The limit value can in fact be obtained as the minimum of a certain mixed integer program (which does not depend on n). We have been able to solve the MIP when each F_{i,n} is a complete bipartite graph. In particular, this answers a question by Erdős, Faudree, Rousseau and Schelp [9, Problem B], who asked for the asymptotics of ^r(K_{s,n}, K_{s,n}). Working harder on the case s = 2, we prove (for all sufficiently large n) the conjecture of Faudree, Rousseau and Sheehan [11, Conjecture 15] that ^r(K_{2,n}, K_{2,n}) = 18n − 15, where the upper bound is obtained by considering K_{3,6n−5}. Unfortunately, the range on n from (1.3) is not specified in [11], although it is stated there that ^r(K_{2,2}, K_{2,2}) is known exactly, where the upper bound follows apparently from K_6.
Unfortunately, our MIP is not well suited for practical calculations and we were
not able to compute the asymptotics for any other non-trivial forbidden graphs; in
particular, we had no progress on (1.1). But we hope that the introduced method will
produce more results: although the MIP is hard to solve, it may well be possible that,
for example, some manageable relaxation of it gives good lower or upper bounds.
Our method does not work if we allow both vertex classes of the forbidden graphs to grow with n. In these settings, in fact, we do not know the asymptotics even in the simplest cases; for example, the best known lower and upper bounds on ^r(K_{n,n}, K_{n,n}) are far apart (Erdős and Rousseau [10]).
Our theorem on the existence of the limit can be extended to generalized size
Ramsey problems; this is discussed in Section 5.
2. Some Denitions. We decided to gather most of the denitions in this section
for quick reference.
We assume that bipartite graphs come equipped with a xed bipartition V
embeddings need not preserve it. We denote v i
For A V 1
ASYMPTOTIC SIZE RAMSEY RESULTS 3
where F (v) denotes the neighbourhood of v in F . (We will write (v), etc., when
the encompassing graph F is clear from the context.) Clearly, in order to determine
F (up to an isomorphism) it is enough to have
This motivates the following denitions.
A weight f on a set V (f) a sequence (f A ) A22 V (f) of non-negative reals. A bipartite
graph F agrees with f if V 1
A sequence of bipartite graphs (Fn ) n2N is a dilatation of f (or dilates f ) if each Fn
agrees with f and
jF A
(Of course, the latter condition is automatically true for all A
Clearly, e(Fn
so we call e(f) the size
of f . Also, the order of f is and the degree of x 2 V (f) is
A3x
Clearly,
For example, given t 2 R>0 , the sequence (K s;dtne ) n2N is the dilatation of k s;t ,
where the symbol k s;t will be reserved for the weight on [s] which has value t on [s]
and zero otherwise. (We assume that V 1 (K s;dtne It is not hard to see that
any sequence of bipartite graphs described in the abstract is in fact a dilatation of
some weight.
We write F f if for some bipartition V there is an injection
such that for any A V 1 dominated by a vertex of V 2
is This notation is justied by the following
trivial lemma.
Lemma 2.1. Let (Fn ) n2N be a dilatation of f . If F f , then F is a subgraph of Fn
for all su-ciently large n. Otherwise, which is denoted by F 6 f , no Fn contains F .
Let f and g be weights. Assume that v(f) v(g) by adding new vertices to
letting g be zero on all new sets. We write f g if there is an injection
(g) such that
This can be viewed as a fractional analogue of the subgraph relation F G: h
embeds how much of F A V 2 mapped into G B .
The fractional -relation enjoys many properties of the discrete one. For example,
A3x
fA
A3x
The following result is not di-cult and, in fact, we will implicitly prove a sharper
version later (with concrete estimates of ), so we omit the proof.
Lemma 2.2. Let (Fn ) n2N and (Gn ) n2N be dilatations of f and g respectively.
implies that for any > 0 there is n 0 such that Fn Gm for any n n 0
)n. Otherwise, which is denoted by f 6 g, there is > 0 and n 0 such
that Fn 6 Gm for any n n 0 and m (1
An r-colouring c of g is a sequence (c A1 ;:::;A r ) of non-negative reals indexed by
r-tuples of disjoint subsets of V (g) such that
c
The i-th colour subweight c i is dened by V
c
The analogy: to dene an r-colouring of G, it is enough to dene, for all disjoint
how many vertices of G A1 [[Ar are connected, for all i 2 [r], by
colour i precisely to A i . Following this analogy, there should have been the equality
sign in (2.2); however, the chosen denition will make our calculations less messy
later.
3. Existence of Limit. Let r q 1. Consider a sequence
Assume that F i does not have an isolated vertex (that is, x
We say that a weight g arrows F (denoted by g ! F) if for any r-colouring c
of g we have F i c i for some i 2 [r]. Dene
The denition (3.1) imitates that of the size Ramsey number and we will show
that these are very closely related indeed. However, we need a few more preliminaries.
considering k a;b which arrows F if, for example,
su-ciently large, cf. (1.2). Let l be an integer greater
lg.
Proof. Let > 0 be any real smaller than d 0 . Let g ! F be a weight with v(g) > l
and
To prove the theorem, it is enough to construct g 0 ! F with
We have d(x) (^r(F)
the weight g 0 on V (g) n fxg by
We claim that g 0 arrows F. Suppose that this is not true and let c 0 be an F-free
r-colouring of g 0 . We can assume that
ASYMPTOTIC SIZE RAMSEY RESULTS 5
Dene c by
c
Anfxg d i
where we denote
A if g 0
A > 0, and
The reader can check that c is an r-colouring of g.
By the assumption on g, we have F i c i for some i 2 [r]. But this embedding
cannot use x because for we have d c i
A d i
A d i
is too small, see (2.1). But c i;A c 0
i , which is the
desired contradiction.
Hence, to compute ^ r(F) it is enough to consider F-arrowing weights on
only.
Lemma 3.2. There exists
r(F). (And we call
such a weight extremal.)
Proof. Let gn ! F be a sequence with V (gn ) L such that e(g n ) approaches
r(F). By choosing a subsequence, assume that V (g n ) is constant and
exists for each A 2 2 L . Clearly, remains to show that g ! F.
Let c be an r-colouring of g. Let - be the smallest slack in inequalities (2.2).
Choose su-ciently large n so that jg n;A gA j < - for all A 2 2 L . We have
that is, c is a colouring of gn as well. Hence, F i c i for some i, as required.
Now we are ready to prove our general theorem. The proof essentially takes care
of itself. We just exploit the parallels between the fractional and discrete universes,
which, unfortunately, requires messing around with various constants.
Theorem 3.3. Let be a dilatation of f i , i 2 [q]. Then, for all su-ciently
large n,
In particular, the limit lim
exists.
Proof. Let
We will prove that
By Lemma 3.2 choose an extremal weight g on L. Dene a bipartite graph
G as follows. Choose disjoint from each other (and from L) sets G A with jG A
6 OLEG PIKHURKO
In G we connect x 2 L to all of
G A if x 2 A. These are all the edges. Clearly,
A22 L
A22 L
as required. Hence, it is enough to show that G has the arrowing property.
Consider any r-colouring c : E(G) ! [r]. For disjoint sets
fy
(jC
0; otherwise,
that is, c is an r-colouring of g. Hence, F i c i for some i 2 [r].
Suppose that i 2 [q]. By the denition, we nd appropriate
We aim at proving that F i;n G i , where G i G is the colour-i subgraph. Partition
F A
so that
. This is possible for any A: if w
for at least one B, then
B22 L
B22 L
i;n j:
and we have
Hence, we can extend h to the whole of V mapping [ h(A)B W A;B injectively
into B.
Suppose that i 2 [q+1; r]. The relation F i c i means that there exist appropriate
L. We view h as a partial embedding of F i
into G i and will extend h to the whole of V
Take consecutively y There is B i L such that c i;B i > 0 and h((y))
. The inequality c i;B i > 0 implies that there are disjoint B j 's, j 2 [r] n fig, such
that c B1 ;:::;B r > 0. Each vertex in CB1 ;:::;B r is connected by colour i to the whole of
h((y)). The inequality c B1 ;:::;B r > 0 means that jCB1 ;:::;B r
we can always extend h to y. Hence, we nd an F i -subgraph of colour i in this case.
Thus the constructed graph G has the desired arrowing property, which proves
the upper bound.
ASYMPTOTIC SIZE RAMSEY RESULTS 7
As the
lower bound, we show that, for all su-ciently large n,
Suppose on the contrary that we can nd an arrowing graph G contradicting (3.4).
Let L V (G) be the set of vertices of degree at least d 0 n=2 in G. From d 0 njLj=4 <
e(G) < ln it follows that jLj 4l=d 0 . For A 2 2 L , dene
We have
A22 L
A22 L
Thus there is an F-free r-colouring c of g. We are going to exhibit a contradictory
r-colouring of E(G).
For each choose any disjoint sets CB1 ;:::;B r G B (indexed by r-tuples of
disjoint sets partitioning B) such that they partition G B and
This is possible because
For colour the edge fx; yg by colour j. All the
remaining edges of G (namely, those lying inside L or inside V (G) n L) are coloured
with colour 1.
There is i 2 [r] such that G i G, the colour-i subgraph, contains a forbidden
subgraph.
Suppose that be an embedding. If n is large, then
which implies that h(V 1
for
All other wA;B 's are set to zero. For A
B22 L
wA;B (jF A
For
that is, h (when restricted to V (f i )) and w demonstrate that f i c i , which is a
contradiction.
Suppose that i 2 [q +1; r]. Let V 1 consists of those vertices which are mapped
by This is a legitimate bipartition
of F i because any colour-i edge of G connects L to V (G)nL. Let y
together with c B1 ;:::;B r > 0 shows that F i g i . This contradiction
proves the theorem.
4. Complete Bipartite Graphs. Here we will compute asymptotically the size
Ramsey number if each forbidden graph is a complete bipartite graph. More precisely,
we show that in order to do this it is enough to consider only complete bipartite graphs
having the arrowing property.
Theorem 4.1. Let r 2 and q 1. Suppose that we are given t
there exist
Proof. Let us first describe an algorithm finding extremal s and t. Some by-product information gathered by our algorithm will be used in the proof of the extremality of k_{s,t} → F.
Choose l 2 N bigger than
which is the same deni-
tion of l as that before Lemma 3.1.
We claim that l > , where
1). Indeed, take any extremal f ! F
without isolated vertices. The proof of Lemma 3.1 implies that d(x) t 0 for any
necessarily v(f) > , which implies the claim.
For each integer s 2 [+1; l], let t 0
s > 0 be the inmum of t 2 R such that k s;t ! F.
Also, let s be the set of all sequences a non-negative integers with
a
For a sequence a = (a and a set
A of size
a
consist of all sequences r ) of sets partitioning
A with jA
We claim that t 0
s is sol(L s ), the extremal value of the following linear program
a2s w a over all sequences (w a ) a2s of non-negative reals
such that
a2s
w a
a i
1 The weight k s;t does not arrow F for t < sol(L s ).
To prove this, let
an r-colouring c of k s;t by
s
a
other c's are zero. It is indeed a colouring:
a2s
a
a2s
We have k s i For example, for
, we have
a2s
a
a2s
w a
s
a2s
w a
s
BS
ASYMPTOTIC SIZE RAMSEY RESULTS 9
Also, K s
some
Suppose that the claim is not true and we can nd an F-free r-colouring c of k s;t .
By the denition, c A1 ;:::;A
c
so we can pick x 2 A j and set c A1 ;:::;A increasing c :::;A j nfxg;:::;A i [fxg;::: by
c. Clearly, c remains an F-free colouring. Thus, we can assume that all the c's are
zero except those of the form c A , A 2 [s]
a
for some a 2 s . Now, retracing back our
proof of Claim 1, we obtain a feasible solution w
a
a larger objective function, which is a contradiction. The claim is proved.
Thus, t 0
is an upper bound on ^ r(F).
Let us show that in fact
We rewrite the denition of ^ r(F) so that we can apply the Farkas Lemma. The
proof of the following easy claim is left to the reader.
weights g on L such that there do not exist non-negative
reals
a with the following properties
a
A22 L
a
A
c A t
L
Let g be any feasible solution to the above problem. By the Farkas Lemma there
exist xA 0, A 2 2 L , and y i;S 0,
, such that
A
a
y i;S <
A22 L
We deduce that xA 0 (and hence considering (4.2) for
some A with jA
For each A with a := jAj > repeat the following. Let (w a ) a2a be an extremal
solution to L a . For each a 2 a , take the average of (4.2) over all A 2 A
a
, multiply
it by w a , and add all these equalities together to obtain the following.
a2a
a
wAxA
a2a
w a
a
a
y i;S
y i;S
a2a
a
w a
a s i
a
y i;S
a2a
a
w a
a i
a
(In the last inequality we used (4.1).)
Substituting the obtained inequalities on the xA 's into (4.3) we obtain
y i;S <
A22 L
gA
s
As the y i;S 's are non-negative, some of these variables has a larger coe-cient on the
right-hand side. Let it be y i;S . We have
AS
gA
A22 L
A jAj:
The last inequality follows from the fact that for any integer a > , we have 1=t 0
a
a=m u , which in turn follows from the denition of m u . Hence, e(g) > m u as required.
Corollary 4.2. Let r q 1, t
such that t i s i for be an integer sequence with
Let l 2 N be larger than lim
lim
In other words, in order to compute the limit in Corollary 4.2 it is su-cient to
consider only complete bipartite graphs arrowing Fn . It seems that there is no simple
general formula, but the proof of Theorem 4.1 gives an algorithm for computing ^ r(F).
The author has realized the algorithm as a C program calling the lp_solve library. (The latter is a freely available linear programming package, currently maintained by Michel Berkelaar [4].) The reader is welcome to experiment with our program; its
source can be found in [15].
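As an illustration of how linear programs of the above type can be solved with off-the-shelf software (the paper's own implementation is a C program using lp_solve), here is a hedged Python sketch based on scipy; the constraint matrix A is a placeholder standing for the constraints (4.1), which are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

def solve_min_sum(A):
    """Minimise sum(w) subject to A @ w >= 1 and w >= 0.
    Each row of A is meant to encode one constraint of type (4.1); building A from
    the index set of admissible sequences is problem-specific and omitted here."""
    m, n = A.shape
    c = np.ones(n)                                   # objective: sum of the w_a
    res = linprog(c, A_ub=-A, b_ub=-np.ones(m),
                  bounds=[(0, None)] * n, method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    return res.fun, res.x

# toy usage with a made-up 2x2 constraint matrix
value, w = solve_min_sum(np.array([[1.0, 2.0], [2.0, 1.0]]))
```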
For certain series of parameters we can get a more explicit expression. First, let
us treat the case when only the rst forbidden graph dilates with n.
We can assume that t scaling n.
Theorem 4.3. Let 2. Then for any s
s
1).
Proof. The Problem L s has only one variable w s s 0 ;s 2 1;:::;s r 1 . Trivially, t 0
s
and the theorem follows.
In the case s we obtain the following formula (with a little bit of algebra).
Corollary 4.4. For any s
have
r
ASYMPTOTIC SIZE RAMSEY RESULTS 11
Another case with a simple formula for
r(F) is
without loss of generality we can assume that t
Theorem 4.5. Let 2. Then for any s; s
with
s
s
s
with a
Proof. Let a 2 N> and let (w a ) a2a be an extremal solution to L a . (Where we
obviously dene s Excluding the constant indices in w a ,
we assume that the index set a consists of pairs of integers (a 1 ; a 2 ) with a 1 +a
Clearly, (w 0
is also an extremal solution, where w 0
(w
Thus we can assume that w
If w then we can set w
increasing w ba 0 =2c;da 0 =2e and w da 0 =2e;ba 0 =2c by c. The easy inequality
s
s
s
a 0 b
s
a 0 b 1
implies inductively that the left-hand side of (4.1) strictly decreases while the objective
function
a2a w a does not change, which clearly contradicts the minimality of w.
Now we deduce that, for any extremal solution (w a ) a2a , we have w
unless moreover, it follows that necessarily w ba 0 =2c;da 0
which proves the theorem.
The special case r = 2 of Theorem 4.5 answers the question of Erdős, Faudree, Rousseau and Schelp [9, Problem B], who asked for the asymptotics of ^r(K_{s,n}, K_{s,n}). Unfortunately, we do not think that the formula (4.6) can be further simplified in this case.
Finally, let us consider the case s = 2 of Theorem 4.5 in more detail. It is routine to check that Theorem 4.5 implies that ^r(K_{2,n}, K_{2,n}) = 18n + O(1). But we are able to show that (1.3) holds for all sufficiently large n, which is done by showing that a (K_{2,n}, K_{2,n})-arrowing graph with at most 18n − 15 edges can have at most 3 vertices of degree at least n for all large n.
Theorem 4.6. There is n_0 such that, for all n > n_0, we have ^r(K_{2,n}, K_{2,n}) = 18n − 15 and K_{3,6n−5} is the only extremal graph (up to isolated vertices).
Proof. For Gn be a minimum (K 2;n ; K 2;n )-arrowing graph. We know
that e(Gn ) 18n 15 so l
us assume Ln [18].
su-ciently large n.
Suppose on the contrary that we can nd an increasing subsequence (n i ) i2N with
l n i 4 for all i. Choosing a further subsequence, assume that Ln does not
depend on i and that
exists for any A 2 2 L . The argument of
Lemma 3.2 shows that the weight g on L arrows
We have It is routine to check that at 0
a > for any a 2 [4; 18]. The
inequality (4.4) implies that, for some
jAj > 4 or A 6 S. Let J be the set of those j 2 L with g fx;y;jg > 0. We have
Consider the 2-colouring c of g obtained by letting c
disjoint
It is easy to check that neither c 1 nor c 2
contains
A22 L
Afx;yg
c i;A < (5 3:5)=2
(Recall that d g (x) 1 for all x 2 L.) This contradiction proves Claim 1.
Thus, jLn j 3 for all large n. By the minimality of Gn , spans no
edge and each x 2 sends at least 3 edges to Ln . (In particular, jLn
Thus, disregarding isolated vertices,
implies that m 6n 5, which proves the theorem.
Remark. We do not write an explicit expression for n 0 , although it should be possible
to extract this from the proof (with more algebraic work) by using the estimates of
Theorem 3.3.
5. Generalizations. If all forbidden graphs are the same, then one can generalize
the arrowing property in the following way: a graph G (r; s)-arrows F if for any
r-colouring of E(G) there is an F-subgraph that receives less than s colours. Clearly,
in the case we obtain the usual r-colour arrowing property G
This property was first studied by Ekeles, Erdős and Füredi (as reported in [7, Section 9]); the reader can consult [1] for references to more recent results. Axenovich, Füredi and Mubayi [1] studied the generalized arrowing property for bipartite graphs in the situation when F and s are fixed, G = K_{n,n}, and r grows with n.
We
s) to be the minimal size of a graph which (r; s)-arrows F .
Our technique extends to the case when r and s are xed whilst F grows with n (i.e.,
is a dilatation). Namely, it should be possible to show the following.
be a dilatation of a weight f and let r s be xed. Then the limit
us denote it by
We have
weights g such
that for any r-colouring c of g there is
s
such that cS f , where
c
where the sum is taken over all disjoint A
We omit the proof as the complete argument would not be short and it is fairly
obvious how to proceed.
Also, one can consider the following setting. Let F_i be a family of graphs, i ∈ [r]. We require that, for any r-colouring of E(G), there is some i ∈ [r] and F ∈ F_i such that we have an F-subgraph of colour i. The task is to compute the minimum size of such a G. Again, we believe that our method extends to this case as well, but we do not provide any proof, so we do not present this as a theorem.
ASYMPTOTIC SIZE RAMSEY RESULTS 13
Acknowledgements
The author is grateful to Martin Henk, Deryk Osthus, and Günter Ziegler for helpful discussions.
--R
On size Ramsey number of pathes
lp_solve 3.2
A class of size Ramsey problems involving stars
The size-Ramsey number of trees
The size Ramsey number of trees with bounded degree
Size Ramsey results for paths versus stars
Asymptotic size Ramsey results for bipartite graphs
--TR | farkas lemma;mixed integer programming;bipartite graphs;size Ramsey number |
604630 | Tensor product multiresolution analysis with error control for compact image representation. | A class of multiresolution representations based on nonlinear prediction is studied in the multivariate context based on tensor product strategies. In contrast to standard linear wavelet transforms, these representations cannot be thought of as a change of basis, and the error induced by thresholding or quantizing the coefficients requires a different analysis. We propose specific error control algorithms which ensure a prescribed accuracy in various norms when performing such operations on the coefficients. These algorithms are compared with standard thresholding, for synthetic and real images. | Introduction
Multiresolution representations of data, such as wavelet basis decompositions, are a powerful
tool in several areas of application like numerical simulation, statistical estimation and data
compression. In such applications, one typically exploits the ability of these representations
to describe the involved functions with high accuracy by a very small set of coe-cients.
A pivotal concept for a rigorous analysis of their performance is nonlinear approximation: a
function f is \compressed" by its partial expansion f N in the wavelet basis, which retains only
the N largest contributions in some prescribed metric X. The most commonly used metric for
thresholding images is other norms can be considered. For a given function
f , the rate of best N-term approximation is the largest r such that N
behaves as O(N r ) as N tends to +1. In many instances a given rate is equivalent to
some smoothness property on the function f (see e.g. [11] for a survey on such results).
Several recent works have demonstrated that such a rate is also re
ected in the performance
of adaptive methods in the above mentioned applications: see [6] for numerical simulation,
[13, 12] for statistical estimation and [14, 7, 9] for data compression, as well as the celebrated
image coding algorithms proposed in [22, 21].
In the case of applications involving images, such as compression and denoising, the main
limitation to a high rate of best N-term approximation is caused by the presence of edges,
since the numerically signicant coe-cients at ne scales are essentially those for which the
wavelet support is intersected by such discontinuities. In particular, this limits the e-ciency
of high order wavelets due to their large supports. From a mathematical point of view, this
is re
ected by the poor decay - in O(N 1=2 ) - of the best wavelet N-term approximation L 2
error for a \sketchy image
where
is a bounded domain with a smooth
boundary. This re
ects the fact that this type of approximation essentially provides local
isotropic renement near the edges. Improving on this rate through a better choice of the
representation has motivated the recent development of ridgelets and curvelets in [4], which
are bases and frames having some anisotropic features, resulting in the better rate O(N 1 ).
Another possible track for such improvements is oered by multiresolution representations
which incorporate a specic adaptive treatment of edges. Tools of this type have been
introduced in the early 90's by Ami Harten (see e.g. [16, 19, 17]) as a combination of wavelet
analysis and of numerical procedures introduced by the same author in the context of shock
computations, the so-called essentially non-oscillatory (ENO) and subcell resolution (ENO-
reconstruction techniques (see e.g. [18, 20]). Formally speaking, the multiresolution
representations proposed by Harten have the same structure as the standard wavelet trans-
forms: a sequence v L of data sampled at resolution 2 L is transformed into (v
where the v 0 corresponds to a sampling at the coarsest resolution and each sequence d k represents
the intermediate details which are necessary to recover v k from v k 1 . However, the main
dierence is that the basic interscale decomposition/reconstruction process which connects v k
with (v k 1 ; d k ) is allowed to be nonlinear.
As we shall see in image examples, the nonlinear process allows a better adapted treatment
of singularities, in the sense that they do not generate so many large detail coe-cients as in
standard wavelet transforms. On the other hand, it raises new problems in terms of stability
which need to be addressed in order to take full advantage of this gain in sparsity. Let us
explain this point in more detail. In most practical applications, the multiscale representation
(v processed into a new (^v
d L ) which is close to the original one,
in the sense that for some prescribed discrete norm k k,
where the accuracy parameters are chosen according to some criteria specied
by the user. Such processing corresponds for example to quantization in the context of data
compression or simple thresholding in the context of statistical estimation, adaptive numerical
simulation or nonlinear approximation. Applying the inverse transform to the processed
representation, we obtain a modied sequence ^ v L which is expected to be close to the original
discrete set v L . In order for this to be true, some form of stability is needed, i.e., we must
require that
lim
l
In the case of linear multiresolution representations, the stability properties can be precisely
analyzed in terms of the underlying wavelet system: for example if this system constitutes
a Riesz basis for L 2 , stability can be re-expressed as a norm equivalence between v L and
(v constants that do not depend on L. However, these techniques are no
more applicable in the setting of non-linear representations which cannot anymore be thought
as a change of basis. In the nonlinear case, stability can be ensured by modifying the algorithm
for the direct transform in such a way that the error accumulated in processing the values of
the multi-scale representation remains under a prescribed value. This idea was introduced by
Harten for one dimensional algorithms in [16, 3, 1].
The aim of this paper is to introduce and analyze two-dimensional multiresolution processing
algorithms that ensure stability in the above sense, and to test them on image data. We
consider here the tensor product approach, which is also used in most wavelet compression
algorithms. While this approach limitates the compactness of the representation for edges
which are not horizontal or vertical, it inherits the simplicity of the one dimensional techniques
in any dimension. This allows us to develop our algorithms in the same spirit as in
the one dimensional case, yet pointing out new ways of truncating the data and proving explicit
a-priori error bounds in the L 1 , L 2 and L 1 metric, in terms of the quantization and
thresholding parameters. Let us mention that closely related algorithms have been recently
introduced in [5] based on a non-linear extension of the so-called lifting scheme.
The paper is organized as follows: we recall in x2 the discrete framework for multiresolution
introduced by Harten, and we focus on two specic cases corresponding to point-value and
cell-average discretizations for which we recall the linear, ENO and ENO-SR reconstruction
techniques. The tensor-product error control algorithms are discussed in x3, where we also
discuss the sharpness of the error bounds and the connexions with standard wavelet thresholding
algorithms. These strategies are then practically tested on images in x4. In a rst
set of experiments, we test the reliability and sharpness of our a-priori error bounds for a
prescribed target accuracy. In a second set of experiments, we apply an alternate processing
strategy based on a-posteriori error bounds in order to further reduce the number of preseved
coe-cients while remaining within this prescribed accuracy. In the last set of experiments,
we compare the compression performances of linear and nonlinear multiresolution decomposi-
tions, using either error control or standard thresholding, on geometric and real images. The
results can roughly be summarized as follows: nonlinear decompositions clearly outperform
standard linear wavelet decompositions for geometric images, with an inherent limitation due
to the use of a tensor product strategy, but brings less improvement for real images due to
the presence of additional texture. This raises both perspectives of developping appropriate
non-tensor product representations and of separating the geometric and textural information
in the image in order to take more benet from these new representations.
Acknowledgment: we are grateful to the anonymous reviewers for their constructive suggestions
in improving the paper.
2. Harten's framework for Multiresolution
2.1. The general framework
The discrete multiresolution framework introduced by Harten essentially relies on two pro-
cedures: decimation and prediction. From a purely algebraic point of view, decimation and
prediction can be considered simply as interscale operators connecting linear vector spaces,
that represent in some way the dierent resolution levels (k increasing implies more res-
olution), i.e.,
(a)
(b)
While the decimation D k 1
k is always assumed to be linear, we do not x this constraint on
the prediction P k
. The basic consistency property that these two operators have to satisfy
is
is the identity operator in V k 1 , which in particular implies that D k 1
k has full
rank. If v the vectors derived by iterative
decimation
the prediction error. Clearly v k
and (v contain the same information, but e k expressed as a vector of V k contains some
redundancy since the consistency relation (2) implies that e k is an element of the null space,
(D k 1
Keeping the algebraic viewpoint, we can remove this redundancy by introducing
an operator G k which computes the coordinates of e k in a basis of N (D k 1
k ), and another
that recovers the original redundant description of e k from its non-redundant
part, i.e., such that e These ingredients allow us to write a purely algebraic
description of the direct and inverse multiresolution transform as follows:
These algorithms which connect v L with its multiscale representation (v
can be viewed as a simple change of basis in the case where the prediction operator is linear.
F
F
@
@
@R
@
@
@I
Figure
1: Denition of transfer operators
In practice, the construction of the operators D k 1
k 1 is based on two fundamental
tools: discretization and reconstruction. The discretization operator D k acts from a non-discrete
function space F onto the space V k and yields discrete information at the resolution
level specied by a grid X k . It is required that D k be a linear operator. The reconstruction
operator R k , on the other hand acts from V k to F and produces an approximation to a function
F from its discretized values: the function R k D k f . A basic consistency requirement of
the framework is that
Given sequences of discretization and reconstruction operators satisfying (5), it is then possible
to dene the decimation and prediction operators according to
This denition is schematically described in gure 1. Observe that there seems to be an
explicit dependence of D k 1
k on R k but it is easy to prove that the decimation operator is in
fact totally independent of the reconstruction process whenever the sequence of discretization
is nested, i.e., it satises
This property implies, in essence, that all the information contained in the discretized data
at a given resolution level is also included in the next (higher) one. For a nested sequence
of discretization, the description of D k 1
k as R k D k 1 , is just a formal one (useful in some
contexts), and in practice one does not resort to the reconstruction sequence to decimate
discrete values.
The description of the prediction step as P k
opens up a tremendous number
of possibilities in designing multiresolution schemes. The reconstruction process is the
step. The subband ltering algorithms associated to biorthogonal wavelet decompositions
correspond to particular cases where this process is linear. In contrast, nonlinear reconstruction
operators will lead naturally to nonlinear multiresolution representations which cannot
anymore be thought as a change of basis. For the sake of completeness and ease of reference,
in the remainder of this section we give a very brief description in one dimension of more
restricted frameworks associated to point-value and cell-average discretizations, together with
the corresponding linear, ENO and ENO-SR reconstruction operators. The left-out details in
the entire section (and much more) can be found in [1, 17, 2, 3].
multiresolution representations have also recently been proposed in
[5] based on a non-linear extension of the lifting scheme. In particular, the nonlinear ENO
and ENO-SR representations that will be used in this paper could be introduced with the
lifting terminology in place of the above general framework in which they were originally in-
troduced. On the other hand the error control strategies that we develop in x3 are specically
adapted to the restricted frameworks of point-value and cell-average discretizations (and different
from the general \synchronization" strategy of [5]). These restricted frameworks have
a particularly simple interpretation within Harten's general framework.
2.2. Point-value multiresolution analysis in 1D
Let us consider a set of nested grids:
where N 0 is some xed integer. Consider the point-value discretization
is the space of sequences of dimension
we obtain
(D
Notice that N (D k 1
can be dened as follows:
A reconstruction procedure for the discretization operator dened by (8) is given by any
operator R k such that
which means that
(R k
therefore, (R k
should be a continuous function that interpolates the data
f k on X k .
Let us change slightly the notation and denote by I k (x;
such an interpolatory reconstruction
of the data
f k . The prediction operator can be computed as follows:
and the direct and inverse transforms (3) and (4) take the following simple form:
Notice that in the point-value framework, the detail coe-cients are simply interpolation errors
at the odd points of the grid that species the level of resolution.
A natural way of dening a linear interpolation operator is as follows: some integer m > 0
being xed, we consider the unique polynomial p i of degree 2m 1 such that p i
simply dene
I
This choice coincides with the so-called Lagrange interpolatory wavelet transform. Note that
as m increases, the interpolation process has higher order accuracy, i.e. the details d k
i will be
smaller if f is smooth on [x k 1
]. On the other hand, the intervals [x k 1
larger with m so that a singularity will aect more detail coe-cients.
Non-linear essentially non-oscillatory (ENO) interpolation techniques, which were rstly
introduced in [20], circumvent this drawback: the idea is to replace in (15) the polynomial
by a polynomial p
selected among fp in order to avoid the in
uence
of the singularity. The selection process is usually made by picking the \least oscillatory"
polynomial using numerical information on the divided dierences of f at the points x k 1
.
In the present paper we have been using the so called hierachical selection process which is
detailed in [1]. Once this selection is made, we thus dene
I
Such a process still produces large details d k
when a singularity is contained in the interval
In order to reduce further the interpolation error, subcell resolution methods
were introduced in [18] as an elaboration of ENO interpolation. The idea is rst
to detect the possible presence of singularities by
agging those i such that p
i.e. such that the selection process tends to escape the interval [x k 1
such
i+1 intersect at a single point a 2 [x k 1
we identify this point as
the singularity and replace in (16) the polynomial p
i by the piecewise polynomial function
which coincides with p
Note that such a process is
better tted to localize the singularities of the rst order, i.e. jumps in f 0 , rather than the
discontinuities of f . Such discontinuities, which correspond to edges in image processing, are
better treated in the cell-average framework that we now describe.
2.3. Cell-average multiresolution analysis in 1D
With the same nested grids structure of last section, we dene the discretization
is the space of absolutely integrable functions in [0; 1]. It is su-cient to consider
weighted averages
since these contain information on f over [0; 1]. Thus, V k
is the space of sequences with N k components. Additivity of the integral leads to the following
decimation step:
thus, the prediction error satises
and the operators G k and E k can be dened as follows
A reconstruction operator for the discretization in (17) is any operator R k
satisfying
(D
(R k
That is, R k
k (x) has to be a function in L 1 ([0; 1]) whose mean value on the ith cell coincides
with
In one dimension, the simplest way to construct R k is via the \primitive function".
Dene the sequence fF k
i g on the k-th grid as
The function F (x) is a primitive of f(x), and the sequence fF k
corresponds to a discretization
by point-values of F (x) on the k-th grid. Let us denote by I k (x; F k ) an interpolatory
reconstruction of F (x), and dene
(R k
dx I k (x; F k
It is easy to see that D k R (R k
With these denitions the direct transform (3) and its inverse (4) can be described as
follows:
where
d
dx
The linear, ENO and ENO-SR techniques for cell-averages are simply derived from the corresponding
techniques in the point-value framework, using the above primitivation. Note that
jumps in f get transformed into jumps in F 0 , so that the ENO-SR process is now well tted
to localize the discontinuities.
Remark 2.1 In practice, the primitive function is used as a design tool and it is never
computed explicitly: All calculations are done directly on the discrete cell-values ([16, 1]). In
particular, for linear techniques, we obtain the biorthogonal wavelets decompositions corresponding
to the case where the dual scaling function ~
' is the box function [0;1] (if in addition
we have for the accuracy parameter, we obtain nothing but the Haar system).
3. MR-based compression schemes with error-control
Multiresolution representations lead naturally to data-compression algorithms. Probably the
simplest data compression procedure is truncation by thresholding, which amounts to setting
to zero all detail coe-cients which fall below a prescribed, possibly level dependent, threshold,
Thresholding is used primarily to reduce the \dimensionality" of the data. A more elaborate
procedure, which is used to reduce the digital representation of the data is quantization, which
can be modeled by
where round [] denotes the integer obtained by rounding. For example, if jd k
then we can represent
i by an integer which is not larger than 32 and commit a
maximal error of 4. Observe that jd k
and that in both cases
While the thresholding procedure is usually applied only to the scale coe-cients, the quantization
process is also applied to the coarse level representation.
After the application of a particular compression strategy, such as truncation or quantiza-
tion, we obtain a compressed multiresolution representation M
d L g, where
represents the compression parameters, i.e., applying truncation
by thresholding). Obviously, M
f is close to M
g. Applying the inverse
multiresolution transform to the compressed representation, we obtain ^
f , an
approximation to the original signal
f L . The fundamental question is that of estimating, and
thus being able to control, the error k
f L k in some prescribed norm. It often happens
that we are given a target accuracy, i.e., a maximum allowed deviation , thus our goal is to
obtain a compressed representation M
f such that
We can formulate this goal in various ways :
f L k from the errors kd k
f L k from the thresholds k .
For linear multiscale representations corresponding to wavelet decompositions, error estimates
of the type we are looking for are typically derived by using the stability properties of
the underlying wavelet system. In the nonlinear framework, an error-control strategy was rst
proposed by Harten in [16] to directly accomplish (29), namely modify the direct transform in
such a way that the modication allows us to keep track of the cumulative error and truncate
accordingly.
Let us explain in a nutshell the idea of the error-control algorithm (EC henceforth). As
a rst step, the sequences
f 0 are computed from
f L by iterative application of the
decimation operator. Then, we start at the coarsest level and dene ^
f 0 by applying some
perturbation process (thresholding or quantization) on
is now to dene the processed details ^
d k and processed kth scale representation ^
f k in an
intertwined manner from coarse to ne scales: for L, the processed details ^
represent a perturbation of the prediction error
involving the processed data
at the coarser scale ^
while the sequence of processed data is simultaneously computed
according to ^
d k .
In the following, we shall focus on the point-value and cell-average frameworks for which
there is a rather natural specic way to dene the processed details ^
d k . In one dimension, the
strategy can be schematically described as follows:
d L g
Here
given by the one-dimensional decimation operator correspnding
to the chosen framework, i.e.
2i for point-values and
cell-averages. On the other hand [
stands for the kth step of
the 1D error control algorithms presented in [1], which we recall here for the sake of clarity:
Figure 2: EC algorithms in 1D. (a) Point-values. (b) Cell-averages.
A key point is that the values \hat f^k are precisely the values computed by the corresponding
inverse multiresolution transform M^{-1} at each resolution level, i.e. (14) in the point-value
framework and (25) in the cell-average framework. In other words, decoding reproduces exactly
the processed scale representations generated during encoding.
A second key point is that, in both algorithms, the coefficients \tilde d^k coincide with the true
prediction errors at odd points e^k_{2i-1} (i.e., the scale coefficients in the
direct encoding) only when no perturbation has been applied at the coarser levels.
In the EC algorithm, the details need to be defined in such a way that, after compression
takes place, the error accumulated at each refinement step, i.e. ||f^k - \hat f^k||,
can be controlled.
To get a direct control on ||f^k - \hat f^k||, the details \tilde d^k must contain relevant information on the
error committed in predicting the true values at the k-th level, i.e. f^k, from the computed
values \hat f^{k-1} at the coarser level.
In the point-value framework, there is no error at even points, and only the prediction
errors at the odd points need to be controlled. It is not hard to deduce from Figure 2-(a) (see
also [1]) the corresponding relation, (31), which bounds the error at level k.
In the cell-average setting, the compressed details \hat d^k are defined by applying the processing
strategy on the half-differences \tilde d^k of the prediction errors at
adjacent points. This is sufficient to ensure control on the prediction errors at each location
on a given resolution level, because from Figure 2-(b) one can easily deduce the corresponding relation, (32) [1].
Relations (31) in the point-value framework, and (32) in the cell-average framework, express
the compression error at the k-th level in terms of the compression error at the previous level
plus a quantity that is directly related to the thresholds ε_k. These basic relations lead to the
one-dimensional error bounds in [1].
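One plausible realization of a single intertwined EC step in the point-value framework is sketched below. The linear two-point prediction, the grid layout (a fine grid with 2n - 1 points over a coarse grid with n points) and the function names are assumptions made for the sake of a runnable example; they are not the paper's exact algorithm.

import numpy as np

def predict_odd(coarse):
    # linear (midpoint-average) prediction of the odd points of the fine grid
    return 0.5 * (coarse[:-1] + coarse[1:])

def ec_step_point_value(f_fine, f_hat_coarse, eps):
    # details are prediction errors of the TRUE fine values with respect to the
    # PROCESSED coarse values, and are then thresholded (the processing strategy)
    d_tilde = f_fine[1::2] - predict_odd(f_hat_coarse)
    d_hat = np.where(np.abs(d_tilde) < eps, 0.0, d_tilde)
    # the processed fine representation is exactly what the decoder will compute
    f_hat_fine = np.empty_like(f_fine)
    f_hat_fine[0::2] = f_hat_coarse                 # no error at the even points
    f_hat_fine[1::2] = predict_odd(f_hat_coarse) + d_hat
    return d_hat, f_hat_fine

# usage: len(f_fine) must equal 2 * len(f_hat_coarse) - 1
f_fine = np.sin(np.linspace(0.0, 1.0, 17))
d_hat, f_hat_fine = ec_step_point_value(f_fine, f_fine[0::2].copy(), eps=1e-3)

Because the detail is computed against the already-processed coarse data, the error at the odd points of the fine level is exactly the processing error, which is what makes the cumulative error trackable.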
3.2 Tensor product EC algorithms in two dimensions
We adopt a tensor-product construction because it is simple to implement and is the approach
most commonly used to extend one-dimensional multiresolution algorithms to two dimensions.
The modified algorithms for the direct and inverse multiresolution transforms we shall
describe below can be viewed as two-dimensional tensor product extensions of the one-dimensional
algorithms in [16, 1]. For each resolution level, one acts with the one-dimensional
decimation operator on each row of the 2D array f^k_{ij}, and computes intermediate values, say
f^{k-1/2}. This array, which has N_k rows but only N_{k-1} columns, is decimated again column
by column with the one-dimensional operator to obtain f^{k-1}. These values are stored until
the bottom level is reached. Then the computed values \hat f^{k-1/2} and the processed details \hat d^k are
computed in an intertwined manner using only the one-dimensional algorithms described in
the previous section. In this context, and similarly to the tensor-product wavelet transform,
we obtain three types of details \hat d^k(1), \hat d^k(2) and \hat d^k(3).
To be more explicit, we give next a schematic description of the typical 2D algorithm for
the EC-direct transform.
Algorithm 1: Modified direct transform
The intermediate values f^{k-1/2} are discarded once the algorithm concludes, and the outcome of
the EC-direct transform is the set {\hat f^0, \hat d^k(1), \hat d^k(2), \hat d^k(3), k = 1, ..., L}.
The inverse multiresolution transform can be described using the 1D operator inver1d, which
denotes the k-th step of algorithms (14) in the point-value framework and (25) in the cell-average
framework.
Algorithm 2: Inverse transform
As in the 1D case, the inverse transform (decoding) reproduces exactly the processed values \hat f^k.
It should be remarked that, due to the intertwined structure of the error control algorithm,
the details \tilde d^k will be different if one acts first on the rows or first on the columns. In any
case, and even though the computed details might be different, they are of the same order of
magnitude.
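The row-then-column decimation that organizes the 2D transform can be sketched as follows (the mean-of-two-children decimation is the standard cell-average choice and is assumed here, as are the function names):

import numpy as np

def decimate_1d(v):
    # cell-average decimation: each coarse cell is the mean of its two children
    return 0.5 * (v[0::2] + v[1::2])

def decimate_2d(f):
    # rows first: the intermediate array keeps N_k rows but halves the columns
    half = np.apply_along_axis(decimate_1d, 1, f)
    # then columns, which yields the coarse 2D array of cell averages
    return np.apply_along_axis(decimate_1d, 0, half)

f = np.random.rand(8, 8)
coarse = decimate_2d(f)          # a 4 x 4 array of coarse cell averages

In the EC transform, the same two one-dimensional passes are used, but each pass is the error-controlled 1D step, which is why the computed details depend on whether rows or columns are treated first.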
3.3 Explicit error bounds for the EC strategies in two dimensions
We shall consider the following norms:
In what follows we shall see that it is possible to estimate the error between the original signal
f^L and the signal \hat f^L obtained from decoding its compressed representation,
from either the differences ||d^k - \hat d^k||
or the threshold parameters ε_k.
Proposition 3.2 Given a discrete sequence f^L, the modified direct transform for the
point-value framework in 2D (Algorithm 1) yields a multiresolution representation \hat M f^L
such that, if we apply the inverse transform (Algorithm 2), we obtain \hat f^L satisfying
bounds, (34)-(36), on ||f^L - \hat f^L|| in terms of the
differences ||\tilde d^k - \hat d^k||.
Proof: From the 1D relations (31), applied first along the rows and then along the columns of
each level, we obtain (34); summing the resulting level-by-level contributions proves (35) and (36).
Corollary 3.3 Consider the error control multiresolution scheme described in Proposition
3.2, and a processing strategy for the detail coefficients whose perturbation at level k is bounded by ε_k.
Then the compression error ||f^L - \hat f^L|| is bounded in terms of the thresholds ε_k;
in particular, a suitable choice of the sequence ε_k
ensures a prescribed overall accuracy.
Proposition 3.5 Given a discrete sequence f^L, the modified direct transform for the
cell-average framework in 2D (Algorithm 1) yields a multiresolution representation \hat M f^L
such that, if we apply the inverse transform (Algorithm 2), we obtain \hat f^L satisfying
the bounds (40)-(42).
Proof: As a consequence of the relations in (32) we obtain intermediate estimates, (43) and (44),
for the errors committed along the rows and along the columns at each level. Taking the maximum
over both contributions yields (40). From (44) we deduce (41), and from (43) we deduce (42),
and the proof is concluded.
Corollary 3.6 Consider the error control algorithm for the cell-average framework described
in Proposition 3.5 and a processing strategy for the detail coefficients such that the perturbation of
\tilde d^k(l) is bounded by ε_k for l = 1, 2, 3.
Then the compression error is bounded in terms of the thresholds ε_k;
in particular, a suitable choice of the sequence ε_k ensures a prescribed overall accuracy.
3.4 Remarks and comparison with standard thresholding
The results of Propositions 3.2 and 3.5 can be viewed as a-posteriori bounds on the compression
error, since they involve the \tilde d^k, which themselves depend on the processing strategies applied
at the coarser levels. In practice, this becomes important; the a-posteriori bound can be
evaluated at the same time the compression process is taking place, and the a-posteriori
bound coincides in some cases with the exact compression error ((34), (35), (36) and (42)).
On the other hand, Corollaries 3.3 and 3.6 provide a-priori bounds on the compression error.
Consider for example the L2-error in the cell-average framework and assume, for simplicity,
that the coarsest level is not perturbed. According to Corollary 3.6, we can ensure an error of order ε by choosing a
suitable sequence ε_k and requiring that the processed details \hat d^k(l), l = 1, 2, 3, satisfy (37).
The truncation strategies given by (26) and (27) both satisfy (37). Note, however, that with
either one of these two strategies, the error bound ||\tilde d^k - \hat d^k|| <= ε_k is already not sharp,
since it corresponds to the worst-case scenario, where all the differences |(\tilde d^k - \hat d^k)_i|
are close to ε_k, while in practice these differences are often zero, or much smaller than ε_k.
Therefore, we expect in this case that the a-priori bound is over-estimated and much
larger than the a-posteriori bound given by Proposition 3.5, which in this particular case is
precisely the compression error ||f^L - \hat f^L||. This fact is examined in detail in the next section,
where it is used as the starting point for the design of new truncation strategies.
Another important issue is how to choose the dependence on k of the truncation parameter
ε_k in order to optimize the compression process. By "optimize", we mean here to minimize
the number of resulting parameters for a given prescribed error. This number corresponds to
the number of non-discarded coefficients in the case of thresholding, and to the total number
of bits in the case of quantization.
In the case of linear multiresolution representations associated to wavelet decompositions,
the answer to this question is now well understood: the threshold or truncation parameters
should be normalized in accordance with the error norm that one is targeting. More precisely,
if the detail coefficients d represent the expansion of a function in a wavelet basis,
and if one is interested in controlling the L^p norm with a minimal number of parameters, then
each coefficient should be perturbed by an amount that, once measured in the L^p-normalized
wavelet basis, does not exceed a quantity ε which is fixed independently of the level.
We refer to [11] and [7] for such type of results. It is
easily seen that in the case of two-dimensional cell-average multiresolution, this corresponds
to taking level-dependent thresholds ε_k of a specific geometric form
in Corollary 3.6. This suggests to use similar normalizations in the setting of
nonlinear multiresolution algorithms with error control, although there is no guarantee now
that such a strategy will be optimal in the above sense. This issue will be revisited in the next
section through numerical testing.
As an example, consider again the L2-error in the cell-average discretization. In the linear
case, an optimal choice for ε_k is geometric in k.
According to Corollary 3.6, we can ensure a global L2 error less than ε by
taking the thresholds accordingly. However, as already remarked, we may expect that the L2 error is
actually much less than ε, so that one can still lower the number of compression parameters
by raising the thresholds while the a-posteriori L2-error bound, which in this case is precisely the exact
L2 error, remains below the tolerance ε.
The fact that the error-control strategy makes it possible to monitor the compression error at each
resolution level, through the a-posteriori bounds, allows us to design new processing strategies
that aim at reducing as much as possible the number of resulting parameters, while keeping,
at the same time, the total compression error below a specified tolerance. An example of such
a strategy is given in the next section.
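The monitoring idea can be illustrated by a simple greedy loop that raises the threshold as long as the measured error stays below the tolerance. The sketch below is only an illustration of that idea; the encoder is replaced by plain thresholding of the signal, and the update rule and function names are assumptions, not the strategy actually proposed in the paper.

import numpy as np

def compress_with_threshold(signal, eps):
    # placeholder for the EC multiresolution encoder: here we simply threshold
    # the signal itself so that the sketch is self-contained and runnable
    approx = np.where(np.abs(signal) < eps, 0.0, signal)
    return approx, np.count_nonzero(approx)

def raise_threshold_until_tolerance(signal, tol, factor=1.5):
    # greedily raise eps while the monitored (a-posteriori) error stays below tol
    eps, best = tol / 10.0, None
    while True:
        approx, n_params = compress_with_threshold(signal, eps)
        err = np.sqrt(np.mean((signal - approx) ** 2))   # measured L2-type error
        if err > tol:
            return best                                   # last admissible setting
        best = (eps, n_params, err)
        eps *= factor

signal = np.random.randn(256)
result = raise_threshold_until_tolerance(signal, tol=0.3)

The point of the error-control framework is that the quantity playing the role of err above is available during encoding, so such adaptive strategies do not require an extra decoding pass.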
--R
A Surprisingly E
Nonlinear Wavelet Transforms for Image Coding via Lifting Scheme
Adaptive wavelet algorithms for elliptic operator equations - Convergence rate to appear in Math
Tree approximation and optimal encoding.
Biorthogonal bases of compactly supported wavelets.
On the importance of combining wavelet-based nonlinear approximation with coding strategies
Ten Lectures on Wavelets.
Nonlinear approximation.
Unconditional Bases are Optimal Bases for Data Compression and for Statistical Estimation
Wavelet shrinkage: asymptotia
Analysis of low bit rate image coding.
Analyses Multir
Discrete multiresolution analysis and generalized wavelets.
Multiresolution representation of data II: General framework.
ENO schemes with subcell resolution.
Multiresolution representation of cell-averaged data
Uniformly high order accurate essentially non-oscillatory schemes III
An image multiresolution representation for lossless and lossy compression.
Embedded image coding using zerotrees of wavelet coefficients
--TR
Uniformly high order accurate essentially non-oscillatory schemes, III
ENO schemes with subcell resolution
Multiresolution representation of data
Multiresolution Based on Weighted Averages of the Hat Function I
Multiresolution Based on Weighted Averages of the Hat Function II
Adaptive wavelet methods for elliptic operator equations
--CTR
F. Arndiga , R. Donat , P. Mulet, Adaptive interpolation of images, Signal Processing, v.83 n.2, p.459-464, February
S. Amat , J. Ruiz , J. C. Trillo, Compression of color image using nonlinear multiresolutions, Proceedings of the 5th WSEAS International Conference on Signal Processing, Robotics and Automation, p.11-15, February 15-17, 2006, Madrid, Spain
S. Amat , J. C. Trillo , P. Viala, classical multiresolution algorithms for image compression, Proceedings of the 5th WSEAS International Conference on Signal Processing, Robotics and Automation, p.1-5, February 15-17, 2006, Madrid, Spain
Sergio Amat , S. Busquier , J. C. Trillo, Nonlinear Harten's multiresolution on the quincunx pyramid, Journal of Computational and Applied Mathematics, v.189 n.1, p.555-567, 1 May 2006 | stability;multi-scale decomposition;non linearity;tensor product |
604689 | Open-loop video distribution with support of VCR Functionality. | Scalable video distribution schemes have been studied for quite some time. For very popular videos, open-loop broadcast schemes have been devised that partition each video into segments and periodically broadcast each segment on a different channel. Open-loop schemes provide excellent scalability as the number of channels required is independent of the number of clients. However, open-loop schemes typically do not support VCR functions. We will show for open-loop video distribution how, by adjusting the rate at which the segments are transmitted, one can provide VCR functionality. We consider deterministic and probabilistic support of VCR functions: depending on the segment rates chosen, the VCR functions are supported either 100% of the time or with very high probability. For the case of probabilistic support of PLAY and Fast-forward (FF) only, we model the reception process as a semi-Markov accumulation process. We are able to calculate a lower bound on the probability of successfully executing FF actions. | Introduction
1.1 Classification
VoD systems can be classified into open-loop systems [12] and closed-loop systems
[10,16]. In general, open-loop VoD systems partition each video into
smaller pieces called segments and transmit each segment on a separate channel
at its assigned transmission rate. Those channels may be logical, implemented
with adequate multiplexing. All segments are transmitted periodically
and indefinitely. The first segment is transmitted more frequently than
later segments because it is needed first in the playback. In open-loop systems
there is no feedback from the client to the server, and transmission is completely
one-way. In closed-loop systems, on the other hand, there is feedback
between the client and the server. Closed-loop systems generally open a new
unicast/multicast stream each time a client or a group of clients issues a request.
To make better use of the server and network resources, client requests
are batched and served together with the same multicast stream.
Open-loop systems use segmentation in order to reduce the network bandwidth
requirements, which makes them highly scalable because they can provide
Near Video on Demand (NVoD) services at a fixed cost independent of the
number of users. In this paper, we will show how open-loop NVoD schemes
can support VCR functions, which are defined as follows:
PLAY Play the video at the basic video consumption rate, b;
PAUSE Pause the playback of the video for some period of time;
SF/SB Slow forward/Slow backward: Play back the video at a rate Y_S * b for
some period of time. We have Y_S < 1;
FF/FB Fast forward/Fast backward: Play back the video at a rate X_F * b for
some period of time. We have X_F > 1.
1.2 Related Work
Most VoD systems do not support VCR functions. It is assumed that users
are passive and keep playing the video from the beginning until the end without
issuing any VCR function. However support of VCR functions makes a
VoD service much more attractive. Most research on interactive VoD focuses
on closed-loop schemes [1,6,15,13]. To support VCR functions such as Fast-Forward
all these schemes serve the client who issues a FF command via
a dedicated unicast transmission, referred to as contingency channel. When the
client returns into PLAY state, (s)he joins again the multicast distribution.
It is obvious that such a solution is not very scalable since it requires separate
contingency channels and also explicit interaction with the central server.
Thus, open-loop schemes are particularly well suited when: a) the number of
users grows large, or b) the communication medium has no feedback channel,
which is the case in satellite or cable broadcast systems.
Very little work has been done to support VCR functions in open-loop VoD
schemes [2,4,8,14]. Except for the paper by Fei et al. [8], all the other schemes
only consider PAUSE or discrete jumps in the video. Fei et al. propose a
scheme called "staggered broadcast" and show how it can be used together
with what they call "active buffer" management to provide limited interactivity.
In staggered broadcast, the whole video of duration L is periodically
transmitted on N channels at the video consumption rate b. Transmission of
the video on channel i starts a fixed offset later than on channel i - 1. Depending
on the buffer content and the duration of the VCR action, the VCR
action may be possible or not. In the case that the VCR action is not possible,
it is approximated by a so-called discontinuous interactive function where the
viewing jumps to the closest (with respect to the intended destination of the
interaction) point of the video that allows the continuous playout after the
VCR action has been executed.
The big difference between the related work and our scheme is that up to now,
the support of VCR functions either required a major extension of the transmission
scheme (e.g. contingency channels) or was very restricted (e.g. staggered
broadcast). We will demonstrate the feasibility of deterministic support
of VCR functions in open-loop VoD systems by increasing the transmission
rate of the different segments. While this idea looks very straightforward, it
has been, to the best of our knowledge, never proposed before.
The rest of the paper is organized as follows. We first describe the so-called tailored
transmission scheme, then discuss how to adapt this scheme to support
VCR functions. For the case of PLAY and FF user interactions we develop an
analytical model that allows the computation of a lower bound on the probability
that a user interaction can be successfully executed and then provide
some quantitative results. The paper ends with a brief conclusion.
2.1
Introduction
Many different open-loop NVoD schemes have been proposed in the literature;
for a survey see [12]. These schemes typically differ in the way a video is
partitioned into segments and can be classified mainly in three categories:
Schemes that partition the video in different-length segments and transmit
each segment at the basic video consumption rate [9,19];
Schemes that partition the video in equal-length segments and decrease the
transmission rate of each segment with increasing segment number [3];
Hybrid schemes that combine the two above methods [14,20].
In the following, we will present in more detail the scheme called tailored transmission
scheme that was proposed by Birk and Mondri [3] and is a generalization
of many of the other open-loop NVoD schemes previously described.
2.2 Tailored Transmission Scheme
The base version of the tailored transmission scheme works as follows. A video
is partitioned into N equal-length segments. Each segment is transmitted periodically
and repeatedly on its own channel. A client who wants to receive
a video starts by listening to one, more, or all channels and records these
segments.
We shall need the following notation:
s_i denotes the time the client starts recording segment i;
w_i denotes the time the client has entirely received segment i;
v_i denotes the time the client starts viewing segment i;
r_i denotes the transmission rate of segment i [bits/sec];
D denotes the segment size [bits];
b denotes the video consumption rate [bits/sec].
To assure the continuous playout of the video we require that each segment
is fully received before its playout starts, i.e. v_i >= w_i. Given a segment size
D, the transmission rate r_i of segment i must
satisfy the following condition to assure a continuous playout of the video:
r_i >= D / (v_i - s_i).   (1)
If the client starts recording all segments at the same time, i.e. s_i = t_0, Birk
and Mondri have shown (Lemma 1 in [3]) that the transmission rate will be
minimal and is given as
r_i^min = D / (v_i - t_0).   (2)
Without loss of generality, we may assume that t_0 = 0 and v_i = i * D/b, where D/b
is the duration of a segment. Then, r_i^min = b/i and the
total server transmission bandwidth is
R^min = sum_{i=1}^{N} r_i^min = b * sum_{i=1}^{N} 1/i.
Figure
1 illustrates the tailored transmission scheme for the case of minimal
transmission rates. The client starts receiving all segments at time t 0 . The
shaded areas for each segment contain exactly the content of that segment as
received by the client who started recording at time t 0 . A client is not expected
to arrive at the starting point of a segment; instead a client begins recording
at whatever point (s)he arrives at, and stores the data for later consumption.
Therefore, the startup latency of the scheme corresponds to the segment
duration D/b.
Fig. 1. An example of the tailored transmission scheme with minimal transmission
rates (segment reception over time for a client that joins at time t_0).
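As a quick numerical illustration of the scheme's bandwidth cost (a sketch under the simultaneous-start assumption t_0 = 0 used above; the function name and sample values are ours):

def minimal_rates(n_segments, b):
    # r_i^min = b / i for i = 1..N; the server bandwidth is the harmonic sum
    rates = [b / i for i in range(1, n_segments + 1)]
    return rates, sum(rates)

rates, total = minimal_rates(n_segments=30, b=4.0)   # e.g. a 4 Mbit/s video
print(total / 4.0)   # about 4 video rates suffice, whatever the number of clients

The slowly (logarithmically) growing total explains why the startup latency D/b can be reduced by using many segments at moderate extra cost.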
3 How to Support VCR Functions
Given the base scheme of the tailored transmission with its minimal transmission
rates, we will show how to adapt (increase) the segment transmission rates
to support VCR functions. To convey the main idea, we will limit ourselves
first to the case where the only VCR function possible is FF. In fact, FF is
the only VCR action that "accelerates" the consumption of the video, which
can possibly lead to a situation where the consumption of the video gets ahead
of the reception of the video. We present a solution that makes sure that any
FF command issued can be successfully executed. The other user interactions
such as SF, SB, FB or PAUSE can be accommodated by buffering at the client
side. From now on, we therefore consider only two states: PLAY and FF.
We make the following two central assumptions:
The client has enough disk storage to buffer the contents of a large portion
of the video;
The client has enough network access and disk I/O bandwidth to start
receiving the N segments at the same time.
The trend for terminal equipment appears to be that more and more storage
capacity is available. Actually, there already exist products that meet the
above assumptions. An example is the digital video recorder by TiVo [18] that
can store up to 60 hours of MPEG II video and, connected to a satellite feed,
can receive transmissions at high data-rates.
However, for the case where the assumption on storage does not hold, we also
know how to support VCR functions: the idea will remain the same, only the
individual segment transmission rates required will be higher. The scheme we
propose may be adapted to this situation. Note that the trade-o between the
storage capability of the client and segment transmission rates for the case of
NVoD has already been explored by Birk and Mondri [3].
3.1 Deterministic Support
Whenever a client issues a FF command, the video is viewed at a playout rate
X_F times higher than the normal rate, i.e. the consumption of the video occurs at
a rate equal to X_F * b and each segment will be consumed after D/(b * X_F) units
of time instead of D/b units of time in the case of PLAY. As a consequence, the
viewing times of all segments not yet viewed will be "advanced" in time. To
obtain a deterministic guarantee that every FF command issued during the
viewing of a video can be executed, we consider the worst-case scenario where
the client views the whole video in FF.
Let v_i^FF denote the time the client starts viewing segment i, given that (s)he
has viewed segments 1, ..., i-1 in FF mode. We can compute the v_i^FF as
v_i^FF = w_1 + (i - 1) * D / (b * X_F).
If the client starts recording all segments at the same time, i.e. s_i = t_0, we can
compute, similar to (2), the transmission rate r_i^FF that allows unrestricted FF
interactions as
r_i^FF = D / (v_i^FF - t_0).
(1) The playout and therefore VCR actions do not start before segment 1 has been
entirely received; we therefore have v_1^FF = w_1.
If we assume that t_0 = 0 and w_1 = D/b, the expression simplifies to
r_i^FF = b * X_F / (X_F + i - 1).
3.2 Probabilistic Support for FF
In the previous subsection, we have computed the minimal transmission rates
r_i^FF such that all the FF interactions issued can be realized. We have considered
the worst-case scenario where the client views the whole video in FF
mode. While a client might do so, we think that it is much more likely that
the viewing of a video will alternate between PLAY and FF modes (and possibly
other VCR actions). We will in the following use a model for the viewing
behavior where a user strictly alternates between PLAY and FF. We refer to
this behavior as S-FF (for Simple FF).
Our goal is to support FF interactions with high probability while transmitting
each segment at a rate lower than r_i^FF. To this purpose we define the rates r_i^I
as follows:
The server transmits the segments i = 2, ..., N at a rate r_i^I = A * r_i^min,
where A is the rate increase factor, with 1 <= A <= X_F, and r_i^min is
computed in (2);
The server transmits segment 1 at rate r_1^I = r_1^min, still because the playout
does not start before segment 1 has been entirely received.
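The three rate schedules introduced so far can be compared with a few lines of code. This is only a sketch: the closed forms for r_i^min and r_i^FF are the reconstructed simultaneous-start expressions above, and the parameter values are arbitrary.

def rate_schedules(n_segments, b, x_f, a):
    r_min = [b / i for i in range(1, n_segments + 1)]          # base NVoD rates
    r_ff = [b * x_f / (x_f + i - 1) for i in range(1, n_segments + 1)]
    r_inc = [r_min[0]] + [a * r for r in r_min[1:]]            # segment 1 unchanged
    return r_min, r_ff, r_inc

r_min, r_ff, r_inc = rate_schedules(n_segments=24, b=4.0, x_f=2.0, a=1.4)
print(sum(r_min), sum(r_inc), sum(r_ff))   # total server bandwidth per schedule

This makes the trade-off between the rate increase factor A and the cost of a deterministic FF guarantee easy to inspect for any parameter choice.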
4 Analytical Model for the S-FF Model
In this section we will compute a closed-form lower bound on the probability
that a segment is successfully consumed by the client. Segment i is successfully
consumed by the client if segment i + 1 is entirely available to him/her before the
consumption of segment i has been completed; otherwise we will say that
the consumption of segment i has failed. A failure is resolved once the next
segment is entirely available to the client. It is worth pointing out that failures
may occur both in mode PLAY and in mode FF, as shown in Figure 2.
We will assume that the client alternates between both modes of consumption.
More precisely, we introduce two independent renewal sequences of rvs
{S_P(n)}_n and {S_FF(n)}_n, where S_P(n) and S_FF(n) represent the
duration of the n-th PLAY and FF periods, respectively.
Fig. 2. Failures occurring in PLAY and FF modes: the consumption curve (bits versus time) against the playout limit.
For modeling purposes, and also because we believe this assumption corresponds
to a reasonable behavior of the client, we will assume that the remaining
duration of a PLAY or FF period when a failure occurs is resumed when
the next segment is available to the client. This corresponds, for instance, to
the situation where the client wants to reach a particular point in the video
or avoid a particular scene, regardless of the failures that (s)he may encounter
while viewing the video.
In order to ensure a probabilistic support for FF (cf. Section 3.2), recall that
segment i is transmitted at rate r_i^I. Therefore,
the i-th segment will be entirely available to the client at time w_i = t_0 + D/r_i^I.
The continuous playout of segment i requires that at the viewing time v_i, this
segment has been entirely received, that is v_i >= w_i. Segment i
will fail if this inequality does not hold. The continuous playout of the video
requires that all segments be on time, namely, v_i >= w_i for i = 1, ..., N.
Recall that v_1 = w_1, since the client cannot start viewing the first segment
before it has been entirely received.
The number L of segments on time is given by
L = sum_{i=1}^{N} 1{v_i >= w_i},
where 1_A stands for the indicator function of the event A, from which we
deduce the mean number of segments on time
E[L] = sum_{i=1}^{N} P(v_i >= w_i).
Denote by R(t) the number of bits of the video which have been consumed by
the client in [v_1, v_1 + t).
Computing P(v_i >= w_i) in closed form for all i is not an easy task. Indeed, it is
related to computing the distribution of the length of a busy period in a fluid
queue fed by a Markov-Modulated Rate Process. In the present paper, we will
content ourselves with the derivation of an elementary lower bound.
To derive this lower bound, we consider the semi-Markov accumulation process
{Q(t), t >= 0}, which is constructed as follows: during a PLAY period Q(t)
continuously increases with the rate b and during a FF period it continuously
increases with the rate b * X_F. More precisely, for t > 0, Q(t) accumulates at the
rate of the current period, with T_n denoting the cumulative duration of the
first n PLAY and FF periods. By convention T_0 = 0.
By construction of Q(t) and R(t) it is obvious that (see Figure 3) the consumption
process R is dominated by the accumulation process Q.
Observe that both processes {R(t), t >= w_1} and {Q(t), t >= w_1} would be
identical in the absence of failures. We see from (6) and the definition (5) that
each probability P(v_i >= w_i) can be bounded from below by a probability involving Q only.
Hence, cf. (4), we obtain a lower bound, (7), on the mean number of segments on time.
For the transmission scheme we described in Section 3.2, the segment arrival
times are given by w_i = t_0 + D/r_i^I, but the analysis above actually holds
for any reception schedule of segments given by a sequence {w_i, i = 1, ..., N}.
Fig. 3. Comparison of Q(t) and R(t).
In Section 5, we present results for determining P(Q(T) < x), for any T
and x. These results are actually obtained for any semi-Markov accumulation
process with a finite state-space (see Section 5.2). When S_P(n) and S_FF(n)
are exponentially distributed random variables (rvs) with respective means
1/λ_P and 1/λ_FF, we can apply the formulas in Section 5.3. First, use (16)
with the appropriate rates; then use the formulas for q_ij(x) (the density
of the distribution of Q, conditionally on the start/end states) with the rates b
and b * X_F. The probabilities P(Q(T) < x) are then obtained by numerical
integration.
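Before turning to the transform analysis, the quantities of interest can also be checked by simulation. The sketch below estimates the fraction of on-time segments E[L]/N for the S-FF model directly; the arrival times w_i use the reconstructed rates of Section 3.2 (t_0 = 0), and all function names and parameter values are ours, not the paper's.

import random

def estimate_success_probability(n_seg, D, b, x_f, a, mean_play, mean_ff,
                                 n_runs=2000):
    # Segment i is on time if playout would not reach it before its arrival w_i.
    # On a failure, playout stalls until the segment arrives and the remaining
    # duration of the current PLAY/FF period is resumed, as assumed above.
    w = [D / b] + [i * D / (a * b) for i in range(2, n_seg + 1)]
    total_on_time = 0
    for _ in range(n_runs):
        t, consumed, in_play, failures, k = w[0], 0.0, True, 0, 1
        while k < n_seg:
            dur = random.expovariate(1.0 / (mean_play if in_play else mean_ff))
            rate = b if in_play else b * x_f
            while k < n_seg and consumed + rate * dur >= k * D:
                dt = (k * D - consumed) / rate
                t, dur, consumed = t + dt, dur - dt, k * D
                if t < w[k]:          # segment k+1 not yet received: a failure
                    failures += 1
                    t = w[k]
                k += 1
            consumed, t, in_play = consumed + rate * dur, t + dur, not in_play
        total_on_time += n_seg - failures
    return total_on_time / (n_runs * n_seg)

print(estimate_success_probability(n_seg=24, D=300 * 4e6, b=4e6, x_f=2.0,
                                   a=1.8, mean_play=60.0, mean_ff=30.0))

Such a direct simulation gives the success probability up to statistical error, and is the kind of reference against which the analytical lower bound is compared in Section 6.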
5 Semi-Markov Accumulation Process
In this section, we develop a framework for evaluating the workload distribution
generated in a given time-interval by a semi-Markov accumulation process
with an arbitrary (but finite) state-space.
After defining the process (Section 5.1), we show that the Laplace transforms
of the sought distributions satisfy the linear system of equations (15). Finally,
we apply the formula to the case of a two-state continuous-time Markov process
(Section 5.3), where the Laplace transform can be inverted to obtain the
density of the distribution.
5.1 Definition
We first construct formally the accumulation process from a semi-Markov
process. Let E = {1, ..., K} be a finite state-space. Let
{S_i(n)}_n be a sequence of i.i.d. rvs, for each i in E, and let
{Z(n)}_n be a homogeneous discrete-time Markov chain on the state-space E.
The semi-Markov process {X(t), t >= 0} is defined jointly with a sequence
of jump times {T_n}_n, with T_0 a nonnegative rv.
The accumulation process Q(t) is such that while the process X(t) is in state i,
Q(t) accumulates at a constant rate r_i. Formally, {Q(t), t >= 0} is constructed
by integrating the rate of the current state over time.
This construction is illustrated in Figure 4. The upper part shows the evolution
of the discrete-time Markov chain Z(n), and of X(t). The lower part displays
Q(t) as a function of the jump times T_n, its slope changing with the accumulation rate of the current state.
5.2 Distribution of Q(t)
Let Q_{i,r}(T) denote the quantity accumulated in [0, T) given that X(0) = i
and the residual time in this state is r. In other words, the process X starts in state i with a residual
time r in this state. Similarly, denote by Q_{i,S_i}(T) the same quantity, but
given that the residual time is distributed according to the
total sojourn time distribution (i.e., as if a transition to state i had occurred
at time 0).
Depending on the problem to be solved, one may be interested in the distribution
of Q_{i,S_i}(T) or that of Q_{i,\tilde S_i}(T), where \tilde S_i is the forward recurrence time of
S_i. The latter corresponds to the case where the semi-Markov process {X(t)}
is stationary. The common procedure for computing these distributions is to
compute that of Q_{i,r}(T) for an arbitrary r, and then integrate with respect to
the proper distribution.
Fig. 4. Construction of the accumulation process.
We are therefore interested in the distribution of Q_{i,r}(T), jointly with that
of X(T), namely P(Q_{i,r}(T) <= x; X(T) = j). We shall actually compute the
Laplace-Stieltjes Transform (LST) of this joint distribution.
The computation below may be seen as a generalization of the analysis developed
by Cox and Miller in [5, Section 9.3] for alternating renewal processes (i.e.
K = 2).
First, if r >= T, then no jump occurs before time T, and since the process stays in state i,
Q_{i,r}(T) = r_i T. In that case, the transform is obtained directly, as in (10).
On the other hand, if r < T, then at least one jump occurs in the time-interval
[0, T). By conditioning on the state reached after the first jump (i.e. Z(1)),
then using the stationarity and independence of the underlying sequences, we
obtain the renewal-type relation (11).
We now compute the Laplace transform of this quantity with respect to T. With
the help of (10)-(11), we obtain the relation (12).
A relation involving only the rvs Q_{i,S_i}(T) is obtained from (12) by integrating
both sides with respect to r, considered to be distributed as S_i. Let S_i(r)
denote the distribution function of S_i and let S_i*
be its LST.
Introducing the notation (13) for the transforms of interest, we then obtain the system (14).
This is a system of linear equations from which the required Laplace transforms
can be computed. To see this better, define the matrices K (collecting the unknown
transforms), L and S (diagonal matrices built from the sojourn-time distributions and
their transforms S_i*) and P (the transition matrix of the chain Z), where
diag(a_1, ..., a_m) denotes the m x m diagonal matrix with elements a_1, ..., a_m.
Then, (14) rewrites in matrix form as (15), which determines K once the matrix I - SP is inverted.
The matrix I - SP is invertible because the spectral radius of SP is less than
one in the domain of interest. This follows by application of a standard
bound on the spectral radius ([11, Cor. 6.1.5]): the spectral radius of SP is at most the
maximal row sum of the moduli of its entries. This is less than one in the specified domain, from well-known
properties of Laplace transforms.
Once the matrix K is computed, other initial conditions of the process {X(t),
t >= 0} may be investigated.
For instance, if the residual sojourn time in state i is r, then the distribution
is obtained using (12).
If the residual sojourn time in state i is given by \tilde S_i, the forward recurrence
time of S_i (in other words, if {X(t), t >= 0} is stationary), then integrating
(12) with respect to the distribution of \tilde S_i gives, with obvious notation, the corresponding transforms.
Remark: A simple extension of this derivation shows that the accumulation
process may be generalized by replacing the constant-rate process by any
stationary process with independent increments. The formulas above hold with
the term involving r_i replaced by the exponent characteristic of the process (see [7,
Eq. (7.3') p. 419]); for instance, the Poisson process with rate r_i and the Brownian
motion with drift r_i and variance sigma_i^2 are covered in this way.
5.3 Application to a Two-State Markov Accumulation Process
In this section, we address the case of a two-state, continuous-time Markov
process, with infinitesimal generator given by the transition rates between the two states.
Let Q_{r_1,r_2}(T) denote the quantity accumulated during the interval [0, T) when the
accumulation rates in states 1 and 2 are r_1 and r_2, respectively. In distribution,
we have
Q_{r_1,r_2}(T) = r_2 T + (r_1 - r_2) Q_{1,0}(T).
Computing the distribution of Q_{r_1,r_2}(T) is therefore reduced to computing the
distribution of Q_{1,0}(T), which is the visit time in state 1 during the interval
[0, T), to which we apply the formulas of Section 5.2.
We assume that the residual time in the initial state has the same distribution
as the total sojourn time. Observe that due to the memoryless property of the
exponential distribution, S_i and its forward recurrence time
have the same distribution. The relevant
matrices are obtained by specializing the general definitions to the exponential case.
Using (15), we obtain the transforms K_{ij} in closed form.
The last step is to invert the Laplace transform K_{ij} with respect to
both of its arguments. From the definition (13), this will give the density of the distribution
of Q(T).
The inversion can be performed using general rules and tables for Laplace
transforms (see e.g. [17]). Inverting with respect to one of the two transform variables is straightforward, because
K_{ij} is a rational function of degree 1 in that variable. We obtain the densities
q_{ij}(x) of the distribution of Q(T), conditionally on the start and end states i and j.
For the inversion with respect to the remaining variable s, we use in particular standard transform
pairs involving exponentials e^{aT} and the modified Bessel function I_1,
where the inverse of the
Laplace transform g(s) is evaluated at point t, I_n(.) is the modified Bessel function of the
first kind and order n (see e.g. [17, p. 7]) and delta_a(t) is the Dirac function at
point a.
for x >= 0. We finally find explicit expressions for the densities q_{ij}(x) in terms of
exponentials and modified Bessel functions.
In order to obtain the distribution functions P(Q(T) <= x; X(T) = j),
the Laplace transforms K_{ij} divided by the transform variable should be inverted. This leads to more
involved series which shall not be reproduced here.
6 Numerical Results
We have applied the bound in (7) to a video of length 2 hours = 7200 sec.
We have varied the segment size from 200 sec to 800 sec. The number N of
segments varies inversely from 36 to 9. The playout factor for FF, X_F, is set to
a standard value for VCRs, also used in other papers. We consider two
different duration ratios between PLAY and FF periods (that is: PLAY periods last 2 times,
resp. 5 times longer than FF periods). The parameters chosen are detailed in
Table 1. We have displayed in this table the average "natural" consumption
rate of the video, given by
b_N = (E[S_P] b + E[S_FF] b X_F) / (E[S_P] + E[S_FF]).
Table 1: Parameters of the numerical experiments.
1/λ_P   1/λ_FF   A   b_N/b
In order to compare the performance of our scheme for videos of different
lengths, we have measured the probability of success E[L]/N.
The results should depend on how the natural rate b_N compares to the rate
increase factor A. If b_N/b < A, then the law of large numbers will force the
"natural" consumption curve Q(t) (and therefore R(t)) to lie below the playout
limit with large probability. Note that this effect may be long to appear if b_N/b
is close to A. If b_N/b > A, then the converse effect appears. In that case, it
also turns out that the actual curve R(t) records a large number of failures.
Another effect may kick in: the probability that a failure occurs within segment
i may depend on i. First, the time between w_1 and w_2 (= D(2/A - 1)/b) is
smaller than the typical inter-arrival time between segments w_{i+1} - w_i.
This may give a significant advance of data, and with few (large) segments,
may result in a large success probability. On the other hand, when b_N/b < A,
the first segments tend to be vulnerable to fluctuations in the consumption
rate and have a smaller success probability. But if b_N/b > A, the first segments
are more likely to be played out without failures than later ones.
The results are reported in Figure 5. The curve for 1/λ_P = 45, 1/λ_FF = 9
and A = 1.3 exhibits the poorest performance. This was expected, since b_N/b > A
in this case. Note however that the accuracy of the bound is not good for small
values of the segment length (see Table 2), and that the probability of success
is actually larger than 80%.
Fig. 5. Lower bounds on the probability of success E[L]/N as a function of the segment
duration, for the parameter sets (1/λ_P = 60, 1/λ_FF = 30, A = 1.7, 1.8, 1.9) and
(1/λ_P = 45, 1/λ_FF = 9, A = 1.3, 1.4).
The other curves exhibit a probability of success larger than 85% for 1/λ_P = 45,
1/λ_FF = 9 and A = 1.4 (which is just slightly larger than b_N/b), and larger
than 95% for the three other sets of parameters. The curves with the two largest
values of A almost coincide. The experiments show that choosing a parameter
A only slightly larger than the expected consumption rate of the user, coupled
with sufficiently large segment sizes, achieves a very reasonable success
probability.
The accuracy of the bound (7) is not good in relative terms, as demonstrated
in Table 2. In this table, the bound is compared with values obtained by
simulating a million replications of a playout of the entire video. The relative
accuracy improves when D increases; this is explained by the fact that the
law of large numbers has more effect when segments are longer.
The accuracy is however sufficient to assess the efficiency of the rate increase
technique, and may be used to optimize the parameter A, in a compromise
between the probability of success and the bandwidth requirements. Such an
optimization is outside the scope of this paper.
Table 2: Comparison of the lower bound on the success probability (B) with simulations (S);
1/λ_P = 45, 1/λ_FF = 9.
D [sec]   B        S        |  D [sec]   B        S
200       0.9602   0.9837   |  200       0.5226   0.8099
300       0.9794   0.9898   |  300       0.6132   0.8110
700       0.9989   0.9993   |  700       0.8716   0.9021
Conclusions
We have shown how, by increasing the segment transmission rates for the tailored
transmission scheme, one can provide either deterministic or probabilistic
support of user interactions. Since the FF action is the most "challenging" one
to support, we restricted our analysis to a viewing behavior where only PLAY
and FF are allowed. We first derived deterministic guarantees for satisfying
all possible FF actions. Since the deterministic guarantees were based on the
pessimistic assumption that the user watches the whole video from start to
end in FF mode, we then defined a model for the viewing behavior (S-FF
model) that consists of the user alternating between the PLAY and the FF
modes.
For the S-FF model, we derived an analytic expression for a lower bound on
the success probability. The reception of the segments is modeled as a semi-Markov
accumulation process that allows the computation of the amount of
video data received. While supporting VCR functions (and in particular FF)
requires an increase in the segment transmission rates, our results indicate
that this increase remains "moderate". The analytical results obtained for the
S-FF model are still pessimistic in the sense that a user who executes not only
PLAY and FF but also actions such as PAUSE or SF will reduce the rate
at which the video is consumed compared to the case of the S-FF model. In
future extensions of this research, we shall exploit the theoretical formalism
for accumulation processes that we have developed in this paper in order to
handle various user behaviors and other VCR functions.
--R
Providing unrestricted VCR functions in multicast video-on-demand servers
The role of multicast communication in the provision of scalable and interactive video-on-demand service
Tailored transmissions for efficient near-video-on-demand service
The Theory of Stochastic Processes.
Channel allocation under batching and VCR control in video-on-demand systems
Stochastic Processes.
Providing interactive functions through active client bu
Supplying instantaneous video-on-demand services using controlled multicast
Matrix Analysis.
The split and merge protocol for interactive video on demand.
Multicast delivery for interactive video-on-demand service
Multicast with cache (Mcache): An adaptive zero-delay Video-on-Demand service
Schaum's Outline of Theory and Problems of Laplace Transforms.
Pyramid broadcasting for video on demand service.
--TR
Matrix analysis
Channel allocation under batching and VCR control in video-on-demand systems
The Split and Merge Protocol for Interactive Video-on-Demand
Providing Interactive Functions through Active Client-Buffer Management in Partitioned Video Multicast VoD Systems
An Efficient Implementation of Interactive Video-on-Demand
Supplying Instantaneous Video-on-Demand Services Using Controlled Multicast
Multicast Delivery for Interactive Video-On-Demand Service
Providing Unrestricted VCR Functions in Multicast Video-on-Demand Servers | interactivity;semi-Markov accumulation process;stochastic bounds;video on demand |
605023 | Challenges of component-based development. | It is generally understood that building software systems with components has many advantages but the difficulties of this approach should not be ignored. System evolution, maintenance, migration and compatibilities are some of the challenges met with when developing a component-based software system. Since most systems evolve over time, components must be maintained or replaced. The evolution of requirements affects not only specific system functions and particular components but also component-based architecture on all levels. Increased complexity is a consequence of different components and systems having different life cycles. In component-based systems it is easier to replace part of system with a commercial component. This process is however not straightforward and different factors such as requirements management, marketing issues, etc., must be taken into consideration. In this paper we discuss the issues and challenges encountered when developing and using an evolving component-based software system. An industrial control system has been used as a case study. | Overview
ABB is a global electrical engineering and technology
company, serving customers in power generation,
transmission and distribution, in industrial automation
products, etc. The ABB group is divided into companies,
one of which, ABB Automation Products AB, is
responsible for development of industrial automation
products. The automation products encompass several
families of industrial process-control systems including
both software and hardware.
The main characteristics of these products are
reliability, high quality and compatibility. These features
are results of responses to the main customers
requirements: The customers require stable products,
running around the clock, year after year, which can be
easily upgraded without impact on the existing process. To
achieve this, ABB uses a component-based system
approach to design extendable and flexible systems.
The Advant Open Control System (OCS) (ABB, 2000)
is component-based to suit different industrial applications.
The range includes systems for Power Utilities, Power
Plants and Infrastructure, Pulp and Paper, Metals and
Minerals, Petroleum, Chemical and Consumer Industries,
Transportation systems, etc. An overview of the Advant
system is shown in Figure 1.
Figure 1. An overview of the conceptual architecture of the Advant open control system
(business system, information management station, operator station and process controllers).
Advant OCS performs process control and provides
business information by assembling a system of different
families of Advant products. Process information is
managed at the level of process controllers. The process
controllers are based on a real-time operating system and
execute the control loops. The Operator Station (OS) and
Information Management Station (IMS) gather and refine
product information, while the business system
provides analysis information for optimization of the entire
processes. Advant products use standard and proprietary
communication protocols to satisfy real-time requirements.
Advant OCS therefore includes information
management functions with real-time insight into all
aspects of the process controlled. Advant Information
Management has an SQL-based relational database
accessible to resident software and all connected
computers. Historical data acquisition reports, versatile
calculation packages and an application programming
interface (API) for proprietary and third party applications
are examples of the functionality provided. Advant
components have access to process, production and quality
data from any Process Control unit in a plant or in an
Intranet domain.
Designing with Reuse
Designing with reuse of existing components has many
advantages (Sommerville, 1996). The software
development time can be reduced and the reliability of the
products increased. These were important prerequisites for
the Advant OCS development.
Advant OCS products can be assembled in many
different configurations for use in various branches of
industry. Specific systems are designed with the reuse of
Advant OCS products and other external products. This
means customers get a tailor-made system that meets their
needs. External products and components can be used
together with the Advant OCS due to the openness of the
system. For example a satellite communication component,
which is used to transmit data from the offshore station to
the supervision system inland, can be integrated with the
Advant OCS.
The Advant system architecture is designed for reuse.
Different products such as Operator and Information
Management Stations are used as system components in
assembling complete systems. The two operator station
versions, Master OS and MOD OS are used in building
different types of operator applications.
Scalability
Advant OCS can be configured in a multitude of ways,
depending on the size and complexity of the process. The
initial investment can consist of stand-alone process
controllers and, optionally, local operator stations for
control and supervision of separate machines and process
sections. Subsequently, several process controllers can be
interconnected and, together with central operator and
information management stations build up a control
network. Several control networks can be interconnected to
give a complete plant network which can share centrally
located operator, information and engineering workplaces.
Openness
The system is further strengthened by the flexibility to
add special hardware and software for specific applications
such as weighing, fixed- and variable-speed motor drives,
safety systems and product quality measurements and
control in for example the paper industry. Second- and third
party administrative, information, and control can also be
easily incorporated.
Cost-effectiveness
The step-by-step expansion capability of Advant OCS
allows users to add new functionality without making
existing equipment obsolete. The system's self-configuration
capability eliminates the need for engineers
to enter or edit topology descriptions when new stations are
physically installed. New units can be added while the
system is in full operation. With Advant OCS, system
expansion is therefore easy and cost-effective.
Reusable Components
The Advant OCS products are component based to
minimize the cost of maintenance and development. Figure
2 shows the component architecture of the operator station
assembled from components.
Figure 2. The operator station is assembled from components (operator-station functional
components, object management (OMF), OS base functions, user interface system and
component library, on top of a real-time or standard operating system).
The operator station consists of a specific number of
functional components and of a set of standard Advant
components. These components use the User Interface
System (UIS) component. Object Management Facility
(OMF) is a component which handles the infrastructure and
data management. OMF is similar to CORBA (OMG,
2000) in that it provides a distributed object model with
data, operation and event services. The UxBase component
provides drivers and other specific operating system
functions. Helper classes for strings, lists, pointers, maps
and other general-purpose classes are available in the
C++_complib library component. The components are built
upon operating systems: one a standard system (such as
Unix or Windows), and the other a proprietary real-time
system.
To illustrate different aspects of component-based
development and maintenance, we shall further look at two
components:
Object Management Facility (OMF), a business type of
component with a high-level of functionality and a
complex internal structure;
- C++_complib is a basic and a very general library
component.
Object Management Facility (OMF)
OMF (Nbling et al., 1999) is object-oriented middle-ware
for industrial process automation. It encapsulates real-time
process control entities of almost every conceivable
description into objects that can be accessed from
applications running on different platforms, for example
Unix and Windows NT. Programming interfaces are
available for many languages such as C, C++, Visual Basic,
Java, Smalltalk and SQL while interfaces to the IEC 1131-3
(IEC, 1992) process control languages are under
development. OMF is also adapted to Microsoft
Component Object Model (COM) via adapters and another
component called OMF COM aware. The adapters for OPC
(OLE for Process Control) (OPC, 1998) and OLE
Automation are also implemented. Thanks to all these
software interfaces, OMF makes process and production
data available to the majority of computer programmers
and users i.e. even to those not necessarily involved in the
industrial control field. For instance, it is easy to develop
applications in Microsoft Word, Excel and Access to access
process information. OMF has been developed for
demanding real-time applications, and incorporates
features, such as real-time response, asynchronous
communications, standing queries and priority scheduling
of data transfers. On one side OMF provides industry-standard
interfaces to software applications, and on the
other, it offers interfaces to many important communication
protocols in the field including MasterNet, MOD DCN,
TCP/IP and Fieldbus Foundation. These adapters make it
possible to build homogeneous control systems out of
heterogeneous field equipment and disparate system nodes.
OMF reduces the time and cost of software
development by providing frameworks and tools for a wide
range of platforms and environments. These utilities are
well integrated into their respective surroundings, allowing
developers to retain the tools and utilities they prefer to
work with.
C++_complib
C++_complib is a class library that contains general-purpose
classes, such as containers, string management
classes, file management classes, etc. The C++_complib
library was developed when no standard libraries, such as
were available on the market. The
main purpose of this library was to improve the efficiency
and quality, and promote the uniform usage of the basic
functions.
C++_complib is not a component according to the
definition in (Szyperski, 1998), where a component is a unit
of composition deployed independently of the product.
However, in a development process C++_complib is treated
in a very similar way to binary components, with some
restrictions, such as dynamic configuration.
Experience
The Advant system is a successful system and the main
reasons for its success are its component-based architecture
giving flexibility, robustness, stability and compatibility,
and effective build and integration procedures. This type of
architecture is similar to product line architectures (Bass et
al., 1999). Some case studies (Bosch, 1999) have shown
that product-line architectures are successfully applied in
small- and medium-sized enterprises, although there exist a
number of problems and challenging issues (organization,
training, information distribution, product variants, etc.).
The Advant experience shows that applying product-line
architectures can be successful for large organizations.
However, the cost of achieving these features has been
high. To suit the requirements of an open system, new ABB
products always have to be backward compatible. It would
have been easier to develop a new system that was not required
to be compatible with the previous systems. A guarantee
that the system is backward compatible is a warranty that
an existing system will work with new products, and this
makes the system trustworthy.
Development with large components which are easy to
reuse increases the efficiency significantly as compared
with reusing a smaller component that could have been
developed in-house at the same cost as its purchase price.
Advant OCS products are examples of large components
which have been used to assemble process automation
systems.
3 Different Reuse Challenges
Component generality and efficiency
Reuse principles place high demands on reusable
components. The components must be sufficiently general
to cover the different aspects of their use. At the same time
they must be concrete and simple enough to serve a
particular requirement in an efficient way. Developing a
reusable component requires three to four times more
resources than developing a component, which serves a
particular case (Szyperski, 1998). The fact that the
requirements of the components are usually incomplete and
not well understood (Sommerville, 1996) brings an additional
level of complexity. In the case of C++_complib, the
situation was simpler, because the functional requirements
were clear. It was relatively easy to define the interface,
which was used by different components in the same way.
The situation was more complicated with complex
components, such as OMF. Although the basic concept of
component functionality was clear, the demands on the
component interface and behavior were different in
different components and products. Some components
required a high level of abstraction, others required the
interface to be on a more detailed level. These different
types of requirements have led to the creation of two levels
of components: OMF base, including all low-level
functions, and OMF framework, containing only a higher
level of functions and with more pre-defined behavior and
less flexibility. In general, requirements for generality and
efficiency at the same time lead to the implementation of
several variants of components which can be used on a
different abstraction level. In some specific cases, a
particular solution must be provided. This type of solution
is usually beyond the object-oriented mechanisms, since
such components are on the higher abstraction level.
System Evolution
Long-life products are most often affected by evolution
of different kinds:
- Evolution of system requirements, functional and non-
functional. A consequence of a continually competitive
market situation is a demand for continually improved
system performance. The systems controlling and
servicing business, industrial, and other processes
should permanently increase the efficiency of these
processes, improve the quality of the products,
minimize the production and maintenance costs etc.
- Evolution of technology related to different domains.
The advance of technology in the different fields in
which software is used requires improved software.
The improvements may require a completely new
approach to or new functions in software.
- Evolution of technology used in software products.
Evolution in computer hardware and software
technology is so fast that an organization
manufacturing long-life and complex products must
expect significant technology changes during the
product life cycle. From the reliability and risk point of
view, such organizations prefer not to use the latest
technology, but because of the demands of a highly
competitive market, are forced to adopt new
technology as it appears. The often unpredictable
changes which must be made in products cause
delivery delays and increased production costs.
- Evolution of technology used for the product
development. As in the case of products themselves,
new technology and tools used in the development
process appear frequently on the market.
Manufacturers are faced with a dilemma - to adopt the
new technology and possibly improve the development
process at the risk of short term higher costs (for
training and migration), or to continue using the
existing technology and thereby miss an opportunity to
lower development costs in the long run.
- Evolution of society. Changes in society (for example
environmental requirements, or changes in the
relations between countries - as in the EU) can have a
considerable impact on the demands on products (for
example new standards, new currency, etc.) and on the
development process (relations between employers and
employees, working hours, etc.).
- Business Changes. We face changes in government
policies, business integration processes, deregulation,
etc. These changes have an impact on the nature of
business, resulting, for example, in a preference for
short-term planning rather than long-term planning and
more stringent time-to-market requirements.
- Organizational Changes. Changes in society and
business have direct effects on business organizations.
We can see a globalization process, more abrupt
changes in business operations, and a demand for more
flexible structures and management procedures and
just-in-time deliveries of resources, services and skills.
These changes require a different, fast and flexible
approach to the development process.
All these changes have a direct or indirect impact on the
product life cycle. The ability to adapt to these changes
becomes the crucial factor in achieving business success
(Brown, 2000). In the following sections we discuss some
of these changes and their consequences in the
development process and product life cycle.
Evolution of Functional Requirements
The development of reusable components would be
easier if functional requirements did not evolve during the
time of development. As a result of new requirements for
the products, new requirements for the components will be
defined. The more reusable a component is, the more
demands are placed on it. A number of the requirements
coming from different products may be the same or very
similar, but this is not necessarily the case for all
requirements passed to the components. This means that the
number of requirements of a reusable component grows
faster than that of particular products or of a non-reusable
piece of software. The relation between component requirements
and the requirements from the products is expressed with
the following equation:

RC = RC0 + Σi ai RPi

where RC0 denotes the direct requirements of the component, RPi
the requirements of product Pi, ai the impact factor of product Pi
on the component, and RC the total number of component
requirements.
To satisfy these requirements the components must be
updated more rapidly and the new versions must be
released more frequently than the products using them.
The process of change in components is more
dynamic in the early stage of the components' lives. In that
stage the components are less general and cannot respond
to the new requirements of the products without being
changed. In later stages, their generality and adaptability
increase, and the impact of the product requirements
becomes less significant. In this period the products benefit
from combinatorial and synergy effects of component
reuse. In the last stage of their lives, the components become
out-of-date, until they finally become obsolete, for
different reasons: the introduction of new techniques, new
development and run-time platforms, new development
paradigms, new standards, etc. There is also a higher risk
that the initial component cohesion degenerates as many
changes are added, which in turn requires more effort.
This process is illustrated in Figure 3. The first graph
shows the growing number of requirements for certain
products and for a component being used by these products.
The number of requirements of a common component
grows faster in the beginning, saturates in the period [t0-
t1], and grows again when the component features become
inadequate. Some of the product requirements are satisfied
with new releases of products and components, which are
shown as steps on the second graph. The component
implements the requirements by its releases, which
normally precede the releases of the product if the
requirements originated from the product requirements.
[Figure 3 shows two graphs over time (t0, t1): the accumulated requirements, and the requirements satisfied in the releases, for Products P1 and P2 and the common Component.]
Figure 3. To satisfy the requirements, a reusable component must be modified more often in the beginning of its life.
Indeed this was the case with both components we are
analyzing here: New functions and classes were required
from C++_complib, and new adapters and protocol support
were required from OMF. The development time for these
components was significantly shorter than for products:
While new versions of a product are typically released
every six months, new versions of components are released
at least twice as often. After several years of intensive
development and improvement, the components became
more stable and required less effort for new changes. In that
period the release frequency was lowered and, in particular,
the effort required was significantly lower.
New effort for further development of components
appeared with the migration of products to different platforms
and to newer platform versions. Although the functions of
the products and components did not change significantly,
a considerable amount of work was done at the component
level.
Migration Between Different Platforms
During their several years of development, Advant
products have been ported to different platforms. The
reasons for this were the customer requirement that the
products should run on specific platforms, and the general
trend of growing popularity of certain operating systems.
Of course, at the same time, new versions and variants of
the platforms already used appeared, supporting new, better
and cheaper hardware. The Advant products have migrated
through different platforms: starting on Unix HP-UX 8.x
and continuing through new releases (HP-UX 9.x, 10.x),
they have been ported to other Unix platforms, such as
Digital Unix, and also to completely different platforms,
such as OpenVMS and the Windows NT family (NT 3.5,
NT 4.0 and Windows 2000). The products have been
developed and maintained in parallel. The challenge with
this multi-platform development was to keep the compatibility
between the different variants of the products, and to
maintain and improve them with minimal effort.
As an important part of the reuse concept was to keep
the high-level components unchanged as far as possible, it
was decided to encapsulate the differences between
operating systems in low-level components. This concept
works, however, only to some extent. The minimal activity
required for each platform is to rebuild the system for that
platform. To make it possible to rebuild the software on
every platform, the standard programming languages C and
C++ have been used. Unfortunately, different
implementations of the C++ standard in different
compilers caused problems in code interpretation and
required the rewriting of certain parts of the code. To
ensure that standard system services are available on all
platforms, the POSIX standard has been used. POSIX
worked quite well on different Unix platforms, but much
less so on Windows NT. The second level of compatibility
problems was the Graphical User Interface (GUI). The main
dilemma was whether to use exactly the same GUI on
every platform, or to use the standard "look and feel" GUI
for each platform. This question applied particularly to NT
in relation to Unix platforms. Experience has shown that it
is not possible to give a definitive answer. In some cases it
was possible to use the same GUI and the same graphical
packages, but in general, different GUIs were implemented.
The main work regarding the reuse of code on different
platforms was performed on low-level components, such as
UxBase and OMF. While UxBase provides different low-level
packages for every platform (for example different
drivers), OMF encapsulated the differences directly in the
code using conditional compilation. OMF itself is designed
in such a way that it was possible to divide the code into
two layers. One layer is specific to each operating system,
and the other layer, containing the business logic, is
implemented once for all of the supported platforms. Reuse
across different platforms was easier for C++_complib,
since the package contains general algorithms which do not
depend on a specific operating system. Some problems
appeared, however, related to the different characteristics of
compilers on different platforms.
Compatibility
One of the most important factors for successful
reusability is the compatibility between different versions
of the components. A component can be replaced easily or
added in new parts of a system if it is compatible with its
previous version. The compatibility requirements are
essential for Advant products, since smooth upgrading of
systems, running for many years, is required. Compatibility
issues are relatively simple when the changes introduced in the
products are of a maintenance and improvement nature only.
Using appropriate test plans, including regression tests,
functional compatibility can be tested to a reasonable
extent. More complicated problems occur when new
changes introduced in a reusable component eliminate the
compatibility. In such a case, additional software, which
can manage both versions, must be written.
A typical example of such an incompatible change is a
change in the communication protocol between OMF
clients and servers. All different versions of OMF must be
able to talk to each other to make the system flexible and
open. It is possible to have different combinations of
operating systems and versions of OMF, and the system still works.
This has been solved with an algorithm that ensures the
transmission of correct data format. If two OMF nodes
have the same version, they talk in their native protocol.
If an old OMF node talks with a new, the new OMF is
responsible for converting the data to the new format, this
being designated RMIR ("receiver makes it right"). If a
new OMF sends data to an older one, the older OMF cannot
convert the data since it is unaware of the new protocol. In
this case the newer OMF must send in the old protocol
format, SMIR ("sender makes it right"). This algorithm
builds on the fact that all machines know about each other and
which protocol each of them talks. However, if an
OMF-based node does not know of the other node then it
can always send in a predefined protocol referred to as
well known format. All nodes do recognize this protocol
and can translate from it. This algorithm minimizes the
number of data conversions between the nodes.
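The decision logic can be sketched as follows. This is only an illustration; the class, enum and method names (OmfNode, ProtocolChoice, chooseProtocol) are invented and do not correspond to the actual OMF implementation.

enum ProtocolChoice { NATIVE, SMIR, RMIR, WELL_KNOWN }

class OmfNode {
    final int protocolVersion;

    OmfNode(int protocolVersion) { this.protocolVersion = protocolVersion; }

    /* Format chosen by this node when it sends data to 'receiver'. */
    ProtocolChoice chooseProtocol(OmfNode receiver, boolean knowsReceiverVersion) {
        if (!knowsReceiverVersion) {
            return ProtocolChoice.WELL_KNOWN;  // unknown peer: use the predefined well known format
        }
        if (protocolVersion == receiver.protocolVersion) {
            return ProtocolChoice.NATIVE;      // same version: talk the native protocol
        }
        if (protocolVersion < receiver.protocolVersion) {
            return ProtocolChoice.RMIR;        // old sender, new receiver: the receiver converts
        }
        return ProtocolChoice.SMIR;            // new sender, old receiver: the sender converts
    }
}

Placing the conversion responsibility on the newer node in both cases means that older installations can keep talking their own formats unchanged.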
In the case of C++_complib the problems with
compatibility were somewhat different. New demands on
the same classes and functions appeared because of new
standards and technology. One example is the use of C++
templates. When the template technology became
sufficiently mature, new requirements were placed on
C++_complib: all the classes were to be re-implemented as
template classes. The reason for this was the requirement
for using basic classes in a more general and efficient way.
Another example is Unicode support in addition to ASCII-
support. These new functions were added by new member-
functions in the existing classes and by adding new classes
using the inheritance mechanism for reusing the already
existing classes. The introduction of the same functions in
different formats has led to additional effort in reusing
them. In most cases the old format has been replaced by
the new one, with the help of simple tools built just for this
purpose. In some other cases, due to improper planning
and the prioritization of time-to-market requirements, both old
and new formats have been used in the same source
modules, which has led to lower maintainability and, to
some extent, lower product quality.
Development Environment
When developing reusable components several
dimensions of the development process must be considered:
- Support for development of components on different
platforms;
- Support for development of different variants of
components for different products;
- Support for development and maintenance of different
versions of components for different product versions.
- Independent development of components and products.
To cope with these types of problems, it is not sufficient
to have appropriate product architecture and component
design. Development environment support is also essential.
The development environment must permit efficient
work in the project - editing, compiling, building,
debugging and testing. Parallel and distributed development
must also be supported, because the same components are
to be developed and maintained at the same time on
different platforms. This requires the use of a powerful
Configuration Management (CM) tool, and definition of an
advanced CM-process.
The CM process support exists on two levels. First on
the source-code level, where source-code files are under
version management and binary files are built. The second
level is the product integration phase. The built product
must contain a consistent set of component versions.
For example, Figure 4 shows an inconsistent set of
components. The product version P1-V2 uses the
component versions C1-V2 and C2-V2. At the same time
the component version C1-V2 uses the component version
C2-V1, an older version. Integrating different versions of
the same component may cause unpredictable behavior of
the product.
[Figure 4 shows product version P1-V2 using component versions C1-V2 and C2-V2, while C1-V2 itself uses the older component version C2-V1.]
Figure 4. An example of inconsistent component integration.
Another important aspect of CM in developing reusable
components is Change Management. Change management
keeps track of changes on the logical level, for example
error reports, and manages their relations with the implemented
physical changes (i.e. changes of documentation, source
code, etc.). Because change requests (for example
functional requirements or error reports) come from
different products, it is important to register information
about the source of change requests. It is also important to
relate a change request from one product to other products.
The following questions must be answered: What impact
can the implemented change have on other products? If an
error appears in one product, does it appear in other
products? Possible implications must be investigated, and if
necessary, the users of the products concerned must be
informed.
The development environment designated Software
Development Environment (SDE) (Crnkovic, 1997) is used
in developing Advant products. It is an internally-built
program package which encapsulates different tools, and
provides support for parallel development. The CM tool,
based on RCS (Tichy, 1985), provides support for all CM
disciplines, such as change management, workspace
management, build management, etc. SDE runs on different
platforms, with slightly modified functions. For example,
the build process is based on Makefiles and autoconf on
Unix platforms, while Microsoft Developer Studio with
additional Project Settings is used on Windows NT. The
main objective of SDE is to keep the source-code in one
place under version control. Different versions of
components are managed using baselines and change
requests. Change requests are also under version control,
which makes it possible to acquire information useful
for project follow-up for every change, from registration to
implementation and release (Crnkovic and Willfr, 1998).
Independent Component Development
Component development independent of the products
gives several advantages. The functions are broken down into
smaller entities that are easier to construct, develop and
maintain. Independent component development
facilitates distributed development, which is common in
large enterprises. Developing components independently
of product or other component development also introduces
a number of problems. Component and product testing
becomes more difficult. On the component level, a proper
test environment must be built, which often must include a
number of other components or perhaps even the entire
product.
Another problem is integration and configuration.
The situation shown in Figure 4 must be avoided.
For complex products it is impossible to track dependencies
between the components manually; tool support for
checking consistency must exist.
In the Advant development the components were treated as
separate products even though they were developed within the
enterprise. This approach helped when third-party
components were used, since all components were managed in a
uniform way. Every component contained a file, called the
import file, that specified all the component versions used
to build the component. When the final product was
assembled from the components, the import files were used
for integration and for checking whether a consistent set of
components had been selected. The
development environment, based on make, was set up to
use the import files and the common product structure. All
released components were stored in the product structure
for availability to others. Another structure was used during
development of a component. The component was exported
to the product structure when the development was
finished. Using this approach it was shown that the
architecture design plays a crucial role. A good architecture,
with clear and distinct relations between components,
facilitates the development process.
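A rough sketch of the consistency check based on import files is given below; all names are invented, and the real check was implemented on top of make and the common product structure.

import java.util.*;

class ImportFileChecker {
    /*
     * selected:    component name -> version chosen for the product build
     * importFiles: component name -> (imported component name -> version it was built against)
     * Returns human-readable conflicts; an empty list means the selected set is consistent.
     */
    static List<String> findConflicts(Map<String, String> selected,
                                      Map<String, Map<String, String>> importFiles) {
        List<String> conflicts = new ArrayList<>();
        for (Map.Entry<String, Map<String, String>> component : importFiles.entrySet()) {
            for (Map.Entry<String, String> dep : component.getValue().entrySet()) {
                String required = dep.getValue();
                String chosen = selected.get(dep.getKey());
                if (chosen != null && !chosen.equals(required)) {
                    conflicts.add(component.getKey() + " was built against " + dep.getKey()
                            + " " + required + " but the product selects " + chosen);
                }
            }
        }
        return conflicts;
    }
}

In the Figure 4 example, such a check would report that C1-V2 was built against C2-V1 while the product selects C2-V2.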
The whole development process is complex and
requires organized and planned support, which is essential
for efficient and successful development of reusable
components and of applications using these.
The Maintenance Process
The maintenance process is also complex, because it
must be handled on different levels: On the system level,
where customers report their problems, on the product
level, where errors detected in a specific product version
are reported, and finally on the component level, where the
fault is located. The modification of the component can
have an impact on other components and other products,
which can lead to an explosion of new versions of different
products which already exist in several versions. To
minimize this cumbersome process, ABB adopted a policy
of avoiding the generation of and supply of specific patches
to selected customers. Instead, revised products
incorporating sets of patches were generated and delivered
to all customers with maintenance contracts, to keep
customer installations consistent.
The relations between components, products and
systems must be carefully registered to make it possible to
trace errors on all levels. A systematic use of Software
Configuration Management has a crucial role in the
maintenance process.
To support the maintenance process, Advant products
and component specifications together with error reports
are stored in several classes of repositories (see Figure 5).
[Figure 5 shows the flow of error reports: customer complaints from external customers lead to direct actions and reports back to customers, product problems in released products are filed as PMRs, problems in beta releases as PPRs, and changes in development are driven by change requests (CR).]
Figure 5. Different levels of error report management
On the highest level, the repository managing customer
reports (CCRP) makes it possible for service personnel to
provide customers with prompt support. Information saved
on this level is customer and product oriented. Reports
indicating a product problem are registered in the product
maintenance report repository (PMR) where all known
problems related to products and components are filed.
Also, product structure information is stored on this level.
The product structure, showing dependencies between
products and components provides product and component
developers with assistance in relating error reports to the
source of the problem, on both product and component
level. A similar error management process is defined for
products in the beta phase i.e. not yet released. All of the
problems identified in this phase (typically by test groups)
are registered in the form of pre-release problem reports
(PPR). These problems are either solved before the product
is released, or are reclassified as product error reports and
saved in PMR. Any change applied in code or
documentation is under change control, and each change is
initiated by a Change Request. If a required change comes
from an error report, a Change Request is generated
from a PMR. When a change made in a component is tested
and verified, the action description is exported to the
correlating PMR, propagated to the products involved and
finally returned to the customer via the CCRP repository.
This procedure is not unique to component-based
development. It is a means of managing complex products
and of maintaining many products. What is specific to the
component-based approach is the mapping between
products and components and the management of error
reports on product and component level, the most difficult
part of the management. In this case the entire procedure is
localized on the PMR level, i.e. product level. On the
customer side, information with the highest priority is
related to products and customers. On the development
level, all changes registered are related primarily to
components. Information about both products and
components is stored on the development level. Error
management on this level is the most complex. An error
may be detected in a specific product version, but may also
be present in other products and other product versions.
The error may be discovered in one component, but it can
be present in different versions of that component. The
problem can be solved in one component version, but it
also may be necessary to solve it in several. The revised
component versions are subsequently integrated
into new versions of one or several products. This
multidimensional problem (many error reports, impact on
different versions of components and products, the solution
included in different components and product versions) is
only partially managed automatically, as many steps in the
process require direct human decisions (for example a
decision if a solution to a problem will or will not be
included in the next product release). Although the whole
procedure is carefully designed and rigorously followed, it
has happened on occasions that unexpected changes have
been included, and that changes intended for inclusion were
absent from new product releases. For more details of the
maintenance process see (Kajko-Mattson, 1999a;
Kajko-Mattson, 1999b).
Another important subject is the maintenance of
external components. It has been shown that external
components must be treated in the same way as internal
components. All known errors and the complete error
management process for internal and external components
are treated in a similar way. The list of known and corrected
errors in external components is important for developers,
product managers and service people. The cost of
maintaining components, even those maintained by others,
must be taken into consideration.
Integrating Standard Components
In recent years the demands of customers on systems
have changed. Customers require integration with standard
technologies and the use of standard applications in the
products they buy. This is a definite trend on the market but
there is little awareness of the possible problems involved.
An improper use of standard components can cause severe
problems, especially in distributed real-time and safety-critical
systems, with long-period guarantees. In addition to
these new requirements, time-to-market demands have
become a very important factor.
These factors and other changes in software and
hardware technology (Aoyama, 1998) have introduced a
new paradigm in the development process. The
development process is focused now on the use of standard
and de-facto standard components, outsourcing, COTS and
the production of components. At the same time, final
products are no longer closed, monolithic systems, but are
instead component-based products that can be integrated
with other products available on the market.
This new paradigm in the development process and
marketing strategy has introduced new problems and raised
new questions (McKinney, 1999):
- The development process has been changed. Developers
are now not only designers and programmers, they are
also integrators and marketing investigators. Are the
new development methods established? Are the
developers properly educated?
- What are the criteria for the selection of a component?
How can we guarantee that a standard component
fulfills the product requirements?
- What are the maintenance aspects? Who is responsible
for the maintenance? What can be expected of the
updating and upgrading of components? How can we
satisfy the compatibility and reliability requirements?
- What is the trend on the market? What can we expect to
buy not only today but also on the day we begin
delivering our product?
- When developing a component, how can we guarantee
that the "proper" standard is used? Which standard will
be valid in five, ten years?
All these questions must be considered before
beginning a component-based development project.
Josefsson (Josefsson, 1999) presents certain
recommendations to the component integrator for use as
guidelines: test the imported component in the
environment where it is to run, and limit the number of
component suppliers in practice to minimize
compatibility problems. Make sure that the supplier is
evaluated before a long-term agreement is signed.
The focus of development environment support should
be transferred from the edit-build-test cycle to the
component integration-test cycle. Configuration
management must give more consideration to the run-time
phase (Larsson and Crnkovic, 1999).
Replacing Internal Components with Standard
Components
In the middle of the eighties, ABB Advant products
were completely proprietary systems with internally
developed hardware, basic and application software. In the
beginning of the nineties, standard hardware components
and software platforms were purchased while the real-time
additions and application software were developed
internally. The system is now developed further using
components based on new, standard technologies.
During this development, further new components
became available on the market. ABB faced this issue more
than once. At one point in time, it was necessary to
abandon the existing solutions in favor of new solutions
based on existing components and technologies. To
illustrate the migration process we discuss the possibility of
replacing OMF and C++_complib with standard
components.
Experience from these examples showed that it is easier
to replace a component if the replacement process is made
in small incremental steps. Allowing the new component to
coexist with the old one makes it easier to be backward
compatible and the change will be smooth.
Replacing OMF with DCOM
Moving from a UNIX based system to a system based
on Windows NT had a serious effect on the system
architecture. Microsoft components using a new object
model were available, namely COM/DCOM (Box, 1998).
DCOM has functionality similar to that of OMF and this
became a new issue when DCOM was released. Should
ABB continue to develop its proprietary OMF or change to
a new standard component? The problem was that DCOM
did not have all the functionality of OMF and vice versa.
The domains overlap only partially.
A subscription of data with various capabilities can be
made in OMF, and this subscription functionality is not
supported by DCOM. On the other hand, DCOM can create
objects on demand, unlike OMF, where objects are created
before they are actually used. Both
technologies support object communication and in this area
it is easier to replace OMF with DCOM.
If the decision was made to continue with OMF, all the
new components that run on top of COM could not be used,
which would drastically reduce the possibilities of
integration with other, third-party components. On the other
hand, it would require considerable work to make the
current system run on top of COM. This was the dilemma
of COM vs. OMF.
To begin with, OMF was adapted to COM with an
adapter designated OMF COM aware. This functionality
helped COM developers access OMF objects and vice
versa. However, this solution to the problem using two
different object models was not optimal since it added
overhead in the communication. Nor was it possible to
match the data types one to one, which made the solution
limited. A decision was taken to build the new system on
COM technologies with proprietary extensions adding the
functions missing from COM. All communication with the
current system was to be through the OMF COM. This
solution made it easy to remove the old OMF and replace it
with COM in small steps over time. Adapters are very
useful when a new component is to be used in parallel with an
existing one (Rine et al., 1999). More adapters to other
systems such as Orbix(CORBA) and Fieldbus Foundation
were constructed. If the external systems have similar data
types it is fairly straightforward to build a framework for
adapters where the parts that take care of the proprietary
system can be reused. New systems can be accessed by
adding a server and client stub to the adapter framework.
To be able to build functional adapters between two
middleware components it is important to have the
capability to create remote calls dynamically. For instance
the Dynamic Invocation Interface (DII) in CORBA can be
used. If the middleware does not have this possibility it
might be possible to generate code automatically that takes
care of the different types of calls which are going to be
placed through the adapter to the other system.
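The adapter framework can be pictured with the following sketch. The interfaces are hypothetical and only illustrate how the reusable, proprietary-side part is separated from the per-system stubs that are added for each new external system.

/* Hypothetical stub interface, implemented once per external system (CORBA, Fieldbus, ...). */
interface ExternalSystemStub {
    /* A dynamic call: operation name and arguments are only known at run time,
       in the spirit of CORBA's Dynamic Invocation Interface (DII). */
    Object invoke(String operation, Object[] args);
}

/* Reusable proprietary-side part of the adapter framework. */
class OmfAdapter {
    private final ExternalSystemStub stub;

    OmfAdapter(ExternalSystemStub stub) { this.stub = stub; }

    Object forward(String omfOperation, Object[] omfArgs) {
        // translate the OMF request and forward it through the system-specific stub
        return stub.invoke(omfOperation, omfArgs);
    }
}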
Replacing C++_complib with STL
Switching from C++_complib to STL (Austern, 1999)
was much easier, because STL covers almost all the
C++_complib functions and provides additional
functionality. Still, much work remained to be done, since
all the code using C++_complib had to be changed to
use STL instead. The decision was taken to continue
using both components and to use STL whenever new
functionality was added. After a time the use of the old
component was reduced, and with it the internal maintenance
cost. In some cases both libraries were used in the same
components, which brought some disadvantages, especially
in the maintenance process.
Managing Evolution of Standard Components
Use of standard components implies less control over
them (Larsson and Crnkovic, 1999; Larsson and Crnkovic,
2000; Cook and Dage, 1999), especially if the components
are updated at run-time. A system of components is usually
configured only once, during build time, when known
and tested versions of components are used. Later, when
the system evolves with new versions of components, the
system itself has no mechanism to detect whether new
components have been installed. There might be a check
that the version of a replacement component is the
same as or newer than the original version. This approach
prevents the system from using old components, but it does
not guarantee its functionality when new components are
installed. Applying ideas from configuration management,
such as version and change management, in managing
components is an approach which can be used to solve
some of the problems.
A certain level of configuration control will be achieved
when it is possible to identify components with their
versions and their dependencies on other components.
Information about a system can be placed under version
control for later retrieval. This makes it possible to compare
different baselines of a system configuration. To manage
dependencies, a graphic representation of the configuration
is introduced. The graphs are then placed under version
control. This information can be used to predict which
components will be affected by a replacement or
installation of a new component.
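A minimal sketch of this idea, with invented types, is a snapshot of installed component versions and their dependencies that is placed under version control and compared between baselines:

import java.util.*;

class ConfigurationSnapshot {
    /* "name version" -> the set of "name version" strings it depends on */
    final Map<String, Set<String>> dependencies = new HashMap<>();

    void add(String component, String... dependsOn) {
        dependencies.computeIfAbsent(component, k -> new HashSet<>())
                    .addAll(Arrays.asList(dependsOn));
    }

    /* Components present in the newer baseline but not in this one (new or upgraded). */
    Set<String> newOrUpgradedIn(ConfigurationSnapshot newer) {
        Set<String> diff = new HashSet<>(newer.dependencies.keySet());
        diff.removeAll(this.dependencies.keySet());
        return diff;
    }

    /* Components that directly depend on any of the given changed components. */
    Set<String> affectedBy(Set<String> changed) {
        Set<String> affected = new HashSet<>();
        for (Map.Entry<String, Set<String>> e : dependencies.entrySet()) {
            for (String dep : e.getValue()) {
                if (changed.contains(dep)) { affected.add(e.getKey()); break; }
            }
        }
        return affected;
    }
}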
It is generally difficult to identify components during
run-time and to obtain their version information. When the
components are identified it is possible to build graphs of
dependencies, which can be represented in various ways
and placed under configuration control (Larsson, 2000).
To improve the control of external components, they
can be placed under change management to permit the
monitoring of changes and bugs. Instead of attaching
source code files to change requests, which is common in
change management, the name and version of the
component can be used to track changes. When a problem
report is analysed, the outcome can be a change request for
each component involved. Each such change request can
contain a list of all the changed source files or a description
of the patches if the component is external. Patches from
the component vendor must be stored to permit recreation
of the same configuration later. In cases where high
product quality must be assured, the enterprise
developing the products must have special, well-defined
relations with the component vendors for support and
maintenance.
5 Conclusion
We have presented the ABB Advant Control Systems
(OCS) as a successful example of the development of a
component-based system. The success of these systems on
the market has been primarily the result of appropriate
functionality and quality. Success in development,
maintenance and continued improvement of the systems
has been achieved by a careful architecture design, where
the main principle is the reuse of components. The reuse
orientation provides many advantages, but it also requires a
systematic approach to design planning, extensive
development, support for a more complex maintenance
process, and, in general, more consideration being given to
components. It is not certain that an otherwise successful
development organization can succeed in the development
of reusable components or of products based on reusable
components. The more a reusable component is developed,
the more complex the development process becomes, and the
more support is required from the organization.
Even when all these requirements are satisfied,
unpredictable extra costs can arise. One example
illustrates this: in the early stage of the ABB
Advant OCS development, insufficient consideration was
given to Windows NT, and ABB had to pay the price for
this oversight when it suddenly became clear that Windows
NT would be the next operating platform. The new product
versions on the new platform were developed by
porting the software from the old platform, but the costs
were significantly greater than they would have been if the
design had been made more independent of the first platform.
Another problem we have addressed is the question of
moving to new technologies, which requires either the
re-creation of the components or the inclusion of standard
components available on the market. In both cases it can be
difficult to keep or achieve the same functionality as the
original components had. However, the process of replacing
proprietary components with standard components available
from third parties seems inevitable, and it is therefore
important to have a proper strategy for migrating from the
old components to the new ones.
6 References
--R
New Age of Software Development: How component-based Software Engineering Changes the way of Software Development
Generic Programming and STL
Third Product Line Practice Report
Experience with Change-oriented SCM Tools
Programvarukomponenter i praktiken
Maintenance at ABB (I): Software Problem Administration Processes
Maintenance at ABB (II): Change execution processes
Applying Configuration Management Techniques to Component-Based Systems Licentiate Thesis Dissertation 2000-007
New Challenges for Configuration Management
Component Configuration Management
Impact of Commercial Off-The-Shelf (COTS) Software on the Interface Between systems and Software Engineering
Using Adapters to Reduce Interaction Complexity in reusable Component-Based Software Development
Software Engineering
--TR
RCS - a system for version control
Component software
Highly reliable upgrading of components
Product-line architectures in industry
Impact of commercial off-the-shelf (COTS) software on the interface between systems and software engineering
Using adapters to reduce interaction complexity in reusable component-based software development
Large-Scale, Component Based Development
Software Engineering
Experience with Change-Oriented SCM Tools
Change Measurements in an SCM Process
New Challenges for Configuration Management
Maintenance at ABB (I)
Maintenance at ABB (II)
--CTR
Ade Azurat, Mechanization of invasive software composition in F-logic, Proceedings of the 2007 annual Conference on International Conference on Computer Engineering and Applications, p.89-94, January 17-19, 2007, Gold Coast, Queensland, Australia
Lin Chin-Feng , Tsai Hsien-Tang , Fu Chen-Su, A logic deduction of expanded means-end chains, Journal of Information Science, v.32 n.1, p.5-16, February 2006 | architecture;commercial components;component-based development;development environment;reuse |
605725 | Data parallel language and compiler support for data intensive applications. | Processing and analyzing large volumes of data plays an increasingly important role in many domains of scientific research. High-level language and compiler support for developing applications that analyze and process such datasets has, however, been lacking so far. In this paper, we present a set of language extensions and a prototype compiler for supporting high-level object-oriented programming of data intensive reduction operations over multidimensional data. We have chosen a dialect of Java with data-parallel extensions for specifying a collection of objects, a parallel for loop, and reduction variables as our source high-level language. Our compiler analyzes parallel loops and optimizes the processing of datasets through the use of an existing run-time system, called the Active Data Repository (ADR). We show how loop fission followed by interprocedural static program slicing can be used by the compiler to extract required information for the run-time system. We present the design of a compiler/run-time interface which allows the compiler to effectively utilize the existing run-time system. A prototype compiler incorporating these techniques has been developed using the Titanium front-end from Berkeley. We have evaluated this compiler by comparing the performance of compiler generated code with hand customized ADR code for three templates, from the areas of digital microscopy and scientific simulations. Our experimental results show that the performance of compiler generated versions is, on the average, 21% lower than, and in all cases within a factor of two of, the performance of hand coded versions. | Introduction
Analysis and processing of very large multi-dimensional scientific datasets (i.e.
where data items are associated with points in a multidimensional attribute
space) is an important component of science and engineering. Examples of
these datasets include raw and processed sensor data from satellites [27], output
from hydrodynamics and chemical transport simulations [23], and archives
of medical images[1]. These datasets are also very large, for example, in medical
imaging, the size of a single digitized composite slide image at high power
from a light microscope is over 7GB (uncompressed), and a single large hospital
can process thousands of slides per day.
Applications that make use of multidimensional datasets are becoming increasingly
important and share several important characteristics. Both the input
and the output are often disk-resident. Applications may use only a subset
of all the data available in the datasets. Access to data items is described
by a range query, namely a multidimensional bounding box in the underlying
multidimensional attribute space of the dataset. Only the data items whose
associated coordinates fall within the multidimensional box are retrieved. The
processing structures of these applications also share common characteristics.
However, no high-level language support currently exists for developing applications
that process such datasets.
In this paper, we present our solution towards allowing high-level, yet efficient
programming of data intensive reduction operations on multidimensional
datasets. Our approach is to use a data parallel language to specify computations
that are to be applied to a portion of disk-resident datasets. Our solution
is based upon designing a prototype compiler using the Titanium infrastructure
which incorporates loop fission and slicing based techniques, and utilizing
an existing run-time system called Active Data Repository [8, 9, 10].
We have chosen a dialect of Java for expressing this class of computations. We
have chosen Java because the computations we target can be easily expressed
using the notion of objects and methods on objects, and a number of projects
are already underway for expressing parallel computations in Java and obtaining
good performance on scientific applications [4, 25, 36]. Our chosen dialect
of Java includes data-parallel extensions for specifying collections of objects,
a parallel for loop, and reduction variables. However, the approach and the
techniques developed are not intended to be language specific. Our overall
thesis is that a data-parallel framework will provide a convenient interface to
large multidimensional datasets resident on persistent storage.
This research was supported by NSF Grant ACR-9982087, NSF Grant CCR-
9808522, and NSF CAREER award ACI-9733520.
Conceptually, our compiler design has two major new ideas. First, we have
shown how loop fission followed by interprocedural program slicing can be
used for extracting important information from general object-oriented data-parallel
loops. This technique can be used by other compilers that use a run-time
system to optimize for locality or communication. Second, we have shown
how the compiler and the run-time system can use such information to efficiently
execute data intensive reduction computations.
Our compiler extensively uses the existing run-time system ADR for optimizing
the resource usage during execution of data intensive applications. ADR
integrates storage, retrieval and processing of multidimensional datasets on a
parallel machine. While a number of applications have been developed using
ADR's low-level API and high performance has been demonstrated [9], developing
applications in this style requires detailed knowledge of the design of
ADR and is not suitable for application programmers. In comparison, our proposed
data-parallel extensions to Java enable programming of data intensive
applications at a much higher level. It is now the responsibility of the compiler
to utilize the services of ADR for memory management, data retrieval
and scheduling of processes.
Our prototype compiler has been implemented using the Titanium infrastructure
from Berkeley [36]. We have performed experiments using three different
data intensive application templates, two of which are based upon the Virtual
Microscope application [16] and the third is based on water contamination
studies [23]. For each of these templates, we have compared the performance of
compiler generated versions with hand customized versions. Our experiments
show that the performance of compiler generated versions is, on average, 21%
lower and in all cases within a factor of two of the performance of hand coded
versions. We present an analysis of the factors behind the lower performance
of the current compiler and suggest optimizations that can be performed by
our compiler in the future.
The rest of the paper is organized as follows. In Section 2, we further describe
the characteristics of the class of data intensive applications we target.
Background information on the run-time system is provided in Section 3.
Our chosen language extensions are described in Section 4. We present our
compiler processing of the loops and slicing based analysis in Section 5. The
combined compiler and run-time processing for execution of loops is presented
in Section 6. Experimental results from our current prototype are presented in
Section 7. We compare our work with existing related research efforts in
Section 8 and conclude in Section 9.
2 Data Intensive Applications
In this section, we first describe some of the scientific domains which involve
applications that process large datasets. Then, we describe some of the common
characteristics of the applications we target.
Data intensive applications from three scientific areas are being studied currently
as part of our project.
Analysis of Microscopy Data: The Virtual Microscope [16] is an application
to support the need to interactively view and process digitized data
arising from tissue specimens. The Virtual Microscope provides a realistic
digital emulation of a high power light microscope. The raw data for such a
system can be captured by digitally scanning collections of full microscope
slides under high power. At the basic level, it can emulate the usual behavior
of a physical microscope including continuously moving the stage and changing
magnification and focus. Used in this manner, the Virtual Microscope can
support completely digital dynamic telepathology.
Water contamination studies: Environmental scientists study the water
quality of bays and estuaries using long running hydrodynamics and chemical
transport simulations [23]. The chemical transport simulation models reactions
and transport of contaminants, using the fluid velocity data generated by the
hydrodynamics simulation. The chemical transport simulation is performed
on a different spatial grid than the hydrodynamics simulation, and also often
uses significantly coarser time steps. To facilitate coupling between these two
simulations, there is a need for mapping the fluid velocity information from
the hydrodynamics grid, averaged over multiple fine-grain time steps, to the
chemical transport grid and computing smoothed fluid velocities for the points
in the chemical transport grid.
Satellite data processing: Earth scientists study the earth by processing
remotely-sensed data continuously acquired from satellite-based sensors, since
a signicant amount of Earth science research is devoted to developing correlations
between sensor radiometry and various properties of the surface of the
Earth [9]. A typical analysis processes satellite data for ten days to a year and
generates one or more composite images of the area under study. Generating a
composite image requires projection of the globe onto a two dimensional grid;
each pixel in the composite image is computed by selecting the \best" sensor
value that maps to the associated grid point.
Data intensive applications in these and related scientific areas share many
common characteristics. Access to data items is described by a range query,
namely a multidimensional bounding box in the underlying multidimensional
space of the dataset. Only the data items whose associated coordinates fall
within the multidimensional box are retrieved. The basic computation consists
of (1) mapping the coordinates of the retrieved input items to the corresponding
output items, and (2) aggregating, in some way, all the retrieved input
items mapped to the same output data items. The computation of a particular
output element is a reduction operation, i.e. the correctness of the output
usually does not depend on the order in which the input data items are aggregated.
Another common characteristic of these applications is their extremely high
storage and computational requirements. For example, ten years of global
coverage satellite data at a resolution of four kilometers for our satellite data
processing application Titan consists of over 1.4TB of data [9]. For our Virtual
Microscope application, one focal plane of a single slide requires over 7GB
(uncompressed) at high power, and a hospital such as Johns Hopkins produces
hundreds of thousands of slides per year. Similarly, the computation for one
ten day composite Titan query for the entire world takes about 100 seconds
per processor on the Maryland sixteen node IBM SP2. The application scientists
typically demand real-time responses to such queries; therefore, efficient
execution is extremely important.
3 Overview of the Run-time System
Our compiler effort targets an existing run-time infrastructure, called the Active
Data Repository (ADR) [9], that integrates storage, retrieval and processing
of multidimensional datasets on a parallel machine. We give a brief
overview of this run-time system in this section.
Processing of a data intensive data-parallel loop is carried out by ADR in two
phases: loop planning and loop execution. The objective of loop planning is to
determine a schedule to efficiently process a range query based on the amount
of available resources in the parallel machine. A loop plan specifies how parts
of the final output are computed. The loop execution service manages all the
resources in the system and carries out the loop plan generated by the loop
planning service. The primary feature of the loop execution service is its ability
to integrate data retrieval and processing for a wide variety of applications.
This is achieved by pushing processing operations into the storage manager
and allowing processing operations to access the buffer used to hold data
arriving from the disk. As a result, the system avoids one or more levels of
copying that would be needed in a layered architecture where the storage
manager and the processing belong in different layers.
A dataset in ADR is partitioned into a set of (logical) disk blocks to achieve
high bandwidth data retrieval. The size of a logical disk block is a multiple
of the size of a physical disk block on the system and is chosen as a trade-off
between reducing disk seek time and minimizing unnecessary data transfers.
A disk block consists of one or more objects, and is the unit of I/O and
communication. The processing of a loop on a processor progresses through
the following three phases: (1) Initialization - output disk blocks (possibly
replicated on all processors) are allocated space in memory and initialized,
(2) Local Reduction - input disk blocks on the local disks of each processor are
retrieved and aggregated into the output disk blocks, and (3) Global Combine
- if necessary, results computed in each processor in phase 2 are combined
across all processors to compute final results for the output disk blocks.
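Schematically, the per-processor work can be pictured as below. This is not the ADR API; the interfaces are invented to mirror the three phases just described.

import java.util.List;

interface OutputBlock {
    void allocateAndInitialize();       // phase 1
    boolean intersects(InputBlock in);  // does this input block map to this output block?
    void aggregate(InputBlock in);      // application-specific aggregation
    void globalCombine();               // phase 3: merge partial results across processors
}

interface InputBlock {
    void retrieveFromLocalDisk();
}

class LoopExecutionSketch {
    static void run(List<OutputBlock> outputs, List<InputBlock> localInputs) {
        // Phase 1: Initialization
        for (OutputBlock out : outputs) out.allocateAndInitialize();

        // Phase 2: Local Reduction over the input blocks on the local disks
        for (InputBlock in : localInputs) {
            in.retrieveFromLocalDisk();
            for (OutputBlock out : outputs) {
                if (out.intersects(in)) out.aggregate(in);
            }
        }

        // Phase 3: Global Combine (if necessary)
        for (OutputBlock out : outputs) out.globalCombine();
    }
}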
ADR run-time support has been developed as a set of modular services implemented
in C++. ADR allows customization for application specific processing
(i.e., mapping and aggregation functions), while leveraging the commonalities
between the applications to provide support for common operations such as
memory management, data retrieval, and scheduling of processing across a
parallel machine. Customization in ADR is currently achieved through class
inheritance. That is, for each of the customizable services, ADR provides a
base class with virtual functions that are expected to be implemented by derived
classes. Adding an application-specific entry into a modular service requires
the definition of a class derived from an ADR base class for that service
and providing the appropriate implementations of the virtual functions. Current
examples of data intensive applications implemented with ADR include
Titan [9], for satellite data processing, the Virtual Microscope [16], for visualization
and analysis of microscopy data, and coupling of multiple simulations
for water contamination studies [23].
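The customization style can be illustrated with a Java analogue; the actual ADR base classes are C++ classes with virtual functions, and the names below are invented.

abstract class AggregationBase {
    /* Map an input element's coordinates to the output element it contributes to. */
    abstract int[] mapToOutput(int[] inputCoords);

    /* Combine one retrieved input element into the corresponding output element. */
    abstract void aggregate(int[] outputElement, int[] inputElement);
}

class SubsampledPixelAverage extends AggregationBase {
    private final int subsamp;

    SubsampledPixelAverage(int subsamp) { this.subsamp = subsamp; }

    @Override
    int[] mapToOutput(int[] c) {
        return new int[] { c[0] / subsamp, c[1] / subsamp };
    }

    @Override
    void aggregate(int[] out, int[] in) {
        for (int i = 0; i < out.length; i++) out[i] += in[i];  // accumulate color channels
    }
}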
4 Java Extensions for Data Intensive Computing
In this section, we describe a dialect of Java that we have chosen for expressing
data intensive computations. Though we propose to use a dialect of Java as the
source language for the compiler, the techniques we will be developing will be
largely independent of Java and will also be applicable to suitable extensions
of other languages, such as C, C++, or Fortran 90.
4.1 Data-Parallel Constructs
We borrow two concepts from object-oriented parallel systems like Titanium [36],
HPC++ [5], and Concurrent Aggregates [11].
interface Reducinterface {
  /* Any object of any class implementing
     this interface is a reduction variable */
}
public class VMPixel {
  char colors[3];
  void Initialize() {
    colors[0] = 0; colors[1] = 0; colors[2] = 0;
  }
  /* Aggregation Function */
  void Accum(VMPixel Apixel, int avgf) {
    colors[0] += Apixel.colors[0]/avgf;
    colors[1] += Apixel.colors[1]/avgf;
    colors[2] += Apixel.colors[2]/avgf;
  }
}
public class VMPixelOut extends VMPixel
  implements Reducinterface;
public class VMScope {
  static int Xdimen = ... ;
  static int Ydimen = ... ;
  /* Data Declarations */
  static Point<2> lowpoint = [1, 1];
  static Point<2> hipoint = [Xdimen, Ydimen];
  static RectDomain<2> VMSlide = [lowpoint : hipoint];
  static VMPixel[2d] VScope = new VMPixel[VMSlide];
  public static void main(String[] args) {
    Point<2> lowend = [args[0], args[1]];
    Point<2> hiend = [args[2], args[3]];
    int subsamp = args[4];
    RectDomain<2> querybox = [lowend : hiend];
    RectDomain<2> Outputdomain = [[1, 1] : (hiend - lowend)/subsamp];
    VMPixelOut[2d] Output = new VMPixelOut[Outputdomain];
    foreach(p in Outputdomain) {
      Output[p].Initialize();
    }
    /* Main Computational Loop */
    foreach(p in querybox) {
      Point<2> q = (p - lowend)/subsamp;
      Output[q].Accum(VScope[p], subsamp*subsamp);
    }
  }
}
Fig. 1. Example Code
Domains and Rectdomains are collections of objects of the same type. Rectdomains
have a stricter definition, in the sense that each object belonging
to such a collection has a coordinate associated with it that belongs to a
pre-specified rectilinear section of the domain.
The foreach loop, which iterates over objects in a domain or rectdomain,
has the property that the order of iterations does not influence the result
of the associated computations. Further, the iterations can be performed in
parallel. We also extend the semantics of foreach to include the possibility
of updates to reduction variables, as we explain later.
We introduce a Java interface called Reducinterface. Any object of any class
implementing this interface acts as a reduction variable [18]. The semantics of a
reduction variable is analogous to that used in version 2.0 of High Performance
Fortran (HPF-2) [18] and in HPC++ [5]. A reduction variable has the property
that it can only be updated inside a foreach loop by a series of operations that
are associative and commutative. Furthermore, the intermediate value of the
reduction variable may not be used within the loop, except for self-updates.
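As a small, invented example in this dialect, a class whose objects simply accumulate a sum could be declared as follows and then updated inside a foreach loop, while its intermediate value may not be read there.

public class PixelSum implements Reducinterface {
  int value = 0;
  /* The only updates allowed inside a foreach loop are associative,
     commutative accumulations such as this one. */
  void add(int v) { value += v; }
}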
4.2 Example Code
Figure
1 outlines an example code with our chosen extensions. This code
shows the essential computation in the virtual microscope application [16].
A large digital image is stored in disks. This image can be thought of as a
two dimensional array or collection of objects. Each element in this collection
denotes a pixel in the image. Each pixel is comprised of three characters, which
denote the color at that point in the image. The interactive user supplies two
important pieces of information. The first is a bounding box within this two
dimensional space, which specifies the area within the original image that the user
is interested in scanning. We assume that the bounding box is rectangular,
and can be specified by providing the x and y coordinates of two points. The
first 4 arguments provided by the user are integers and, together, they specify
the points lowend and hiend. The second piece of information provided by the user
is the subsampling factor, an integer denoted by subsamp. The subsampling
factor tells the granularity at which the user is interested in viewing the image.
A subsampling factor of 1 means that all pixels of the original image must be
displayed. A subsampling factor of n means that n^2 pixels are averaged to
compute each output pixel.
The computation in this kernel is very simple. First, a querybox is created
using the specified points lowend and hiend. Each pixel in the original image
which falls within the querybox is read and then used to increment the value
of the corresponding output pixel.
There are several advantages associated with specifying the analysis and processing
over multidimensional datasets in this fashion. The programs can specify
the computations assuming a single processor and flat memory. It also assumes
that the data is available in arrays of object references, and is not in
persistent storage. It is the responsibility of the compiler and run-time system
to locate individual elements of the arrays from disks. Also, it is the responsibility
of the compiler to invoke the run-time system for optimizing resource
usage.
4.3 Restrictions on the Loops
The primary goal of our compiler will be to analyze and optimize (by performing
both compile-time transformations and generating code for the ADR run-time
system) foreach loops that satisfy certain properties. We assume standard semantics
of parallel for loops and reductions in languages like High Performance
Fortran (HPF) [18] and HPC++ [5]. Further, we require that no Java
threads be spawned within such loop nests, and no memory locations read or
written to inside the loop nests may be touched by another concurrent thread.
Our compiler will also assume that no Java exceptions are raised in the loop
nests and the iterations of the loop can be reordered without changing any
of the language semantics. One potential way of enabling this can be to use
bound checking optimizations [25].
5 Compiler Analysis
In this section, we first describe how the compiler processes the given data-parallel data intensive loop into a canonical form. We then describe how interprocedural program slicing can be used for extracting a number of functions which are passed to the run-time system.
5.1 Initial Processing of the Loop
Consider any data-parallel loop in our dialect of Java, as presented in Section 4. The memory locations modified in this loop are only the elements of collections of objects, or temporary variables whose values are not used in other iterations of the loop or in any subsequent computations. The memory locations accessed in this loop are either elements of collections or values which may be replicated on all processors before the start of the execution of the loop.
For the purpose of our discussion, collections of objects whose elements are
modified in the loop are referred to as left hand side or lhs collections, and
the collections whose elements are only read in the loop are considered as right
hand side or rhs collections.
The functions used to access elements of collections of objects in the loop are
referred to as subscript functions.
Definition 1 Consider any two lhs collections or any two rhs collections. These two collections are called congruent iff:
1. The subscript functions used to access these two collections in the loop are identical.
2. The layout and partitioning of these two collections are identical. By identical layout we mean that elements with identical indices are put together in the same disk block for both collections. By identical partitioning we mean that the disk blocks containing elements with identical indices from these collections reside on the same processor.
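For illustration only (the collection names below are our own and do not come from the original code), two accesses such as the following would be congruent, since they use the same subscript function, whereas an access through a shifted subscript would violate the first condition of the definition:

    foreach (p in box) {
        // A and B are accessed with the identical subscript function p, so they are
        // congruent provided that their layout and partitioning are also identical
        A[p].update(B[p]);
        // an access such as C[p + offset] uses a different subscript function,
        // so C would not be congruent with A and B
    }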
Consider any loop. If multiple distinct subscript functions are used to access
rhs collections and lhs collections and these subscript functions are not
known at compile-time, tiling output and managing disk accesses while maintaining
high reuse and locality is going to be a very difficult task for the
run-time system. In particular, the current implementation of ADR does not
support such cases. Therefore, we perform loop fission to divide the original
loop into a set of loops, such that all lhs collections in any new loop are
congruent and all rhs collections are congruent. We now describe how such
loop fission is performed.
Initially, we focus on lhs collections which are updated in different statements of the same loop. We perform loop fission, so that all lhs collections accessed in any new loop are congruent. Since we are focusing on loops with no loop-carried dependencies, performing loop fission is straightforward. An example of such a transformation is shown in Figure 2, part (a).
We now focus on such a new loop in which all lhs collections are congruent, but not all rhs collections may be congruent. For any two rhs accesses in a loop that are not congruent, there are three possibilities:
1. These two collections are used for calculating values of elements of different lhs collections. In this case, loop fission can be performed trivially.
2. These two collections Y and Z are used for calculating values of elements of the same lhs collection. Such a lhs collection X is, however, computed as follows:

    X(f(i)) = X(f(i)) op_i Y(g(i))
    X(f(i)) = X(f(i)) op_j Z(h(i)),

such that op_i = op_j. In such a case, loop fission can be performed, so that the element X(f(i)) is updated using the operation op_i with the values of Y(g(i)) and Z(h(i)) in different loops. An example of such a transformation is shown in Figure 2, part (b).
3. These two collections Y and Z are used for calculating values of the elements of the same lhs collection and, unlike the case above, the operations used are not identical; that is, X(f(i)) is updated with Y(g(i)) using one operation and with Z(h(i)) using a different operation. In this case, we need to introduce a temporary collection of objects to copy the collection Z. Then, the collection Y and the temporary collection can be accessed using the same subscript function. An example of such a transformation is shown in Figure 2, part (c).
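The following sketch, using made-up collection names and update methods, suggests how transformations (b) and (c) might look; it is only illustrative and does not reproduce the actual code of Figure 2.

    // Case (b): identical update operations, split into two loops
    foreach (p in box) { X[f(p)].add(Y[g(p)]); }
    foreach (p in box) { X[f(p)].add(Z[h(p)]); }

    // Case (c): different operations; copy Z into a temporary collection T that is
    // accessed with the same subscript function as Y, then update X in one loop
    foreach (p in box) { T[g(p)] = Z[h(p)]; }
    foreach (p in box) { X[f(p)].add(Y[g(p)]); X[f(p)].scale(T[g(p)]); }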
After such a series of loop fission transformations, the original loop is replaced by a series of loops. The property of each loop is that all lhs collections are accessed with the same subscript function and all rhs collections are also accessed with the same subscript function. However, the subscript function for accessing the lhs collections may be different from the one used to access the rhs collections.
[The code listing of Figure 2 is not reproduced here; parts (a), (b), and (c) show foreach (p in box) loops split according to the three cases described above.]
Fig. 2. Examples of Loop Fission Transformations
    foreach (r in R) {
        O_1[S_L(r)] = O_1[S_L(r)] op_1 A_1(I_1[S_R(r)], ..., I_n[S_R(r)]);
        ...
        O_m[S_L(r)] = O_m[S_L(r)] op_m A_m(I_1[S_R(r)], ..., I_n[S_R(r)]);
    }
Fig. 3. A Loop In Canonical Form
5.1.1 Discussion
Our strategy of performing loop fission so that all lhs collections are accessed with the same subscript function and all rhs collections are accessed with the same subscript function is clearly not the best suited for all classes of applications. Particularly, for stencil computations, it may result in several accesses to each disk block. However, for the class of data intensive reductions we have focused on, this strategy works extremely well and simplifies the later loop execution. In the future, we will incorporate some of the techniques from parallel database join operations for loop execution, which will alleviate the need for performing loop fission in all cases.
5.1.2 Terminology
After loop fission, we focus on one individual loop at a time. We introduce some notation about this loop which is used for presenting our solution. The terminology presented here is illustrated by the example loop in Figure 3.
The domain over which the iterator iterates is denoted by R. Let there be n rhs collections of objects read in this loop, which are denoted by I_1, ..., I_n. Similarly, let the m lhs collections written in the loop be denoted by O_1, ..., O_m. Further, we denote the subscript function used for accessing the right hand side collections by S_R and the subscript function used for accessing the left hand side collections by S_L.
[The code listings of Figure 4 are not reproduced here.]
Fig. 4. Slice for Subscript Function (left) and for Aggregation Function (right)
Given a point r in the range for the loop, the elements S_L(r) of the output collections are updated using one or more of the values I_1[S_R(r)], ..., I_n[S_R(r)] and other scalar values in the program. We denote by A_i the function used for creating the value which is used later for updating the element of the output collection O_i. The operator used for performing this update is op_i.
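To make the notation concrete, the virtual microscope kernel can be read in this canonical form roughly as follows; the specific expressions are our own reading of the example and are only illustrative.

    // R      : the querybox over which the iterator r ranges
    // S_R    : the identity function (the input image is indexed directly by r)
    // S_L(r) : (r - lowend) / subsamp, the output pixel receiving input pixel r
    // A_1    : the value contributed by VScope[r] to the output pixel
    // op_1   : accumulation (averaging) into the output pixel
    foreach (r in querybox) {
        Output[(r - lowend) / subsamp].Accum(VScope[r], subsamp * subsamp);
    }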
5.2 Slicing Based Interprocedural Analysis
We are primarily concerned with extracting three sets of functions: the range function R, the subscript functions S_R and S_L, and the aggregation functions A_1, ..., A_m. Similar information is often extracted by various data-parallel Fortran compilers. One important difference is that we are working with an object-oriented language (Java), which is significantly more difficult to analyze. This is mainly because the object-oriented programming methodology frequently leads to small procedures and frequent procedure calls. As a result, analysis across multiple procedures may be required in order to extract the range, subscript and aggregation functions.
We use the technique of interprocedural program slicing for extracting these
three sets of functions. Initially, we give background information on program
slicing and give references to show that program slicing can be performed
across procedure boundaries, and in the presence of language features like
polymorphism, aliases, and exceptions.
5.2.1 Background: Program Slicing
The basic definition of a program slice is as follows. Given a slicing criterion
(s; x), where s is a program point in the program and x is a variable,
the program slice is a subset of statements in the program such that these
statements, when executed on any input, will produce the same value of the
variable x at the program point s as the original program.
The basic idea behind any algorithm for computing program slices is as follows.
Starting from the statement p in the program, we trace any statements on
which p is data or control dependent and add them to the slice. The same is
repeated for any statement which has already been included in the slice, until no more statements can be added to the slice.
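As a small, purely illustrative example (not taken from the paper), consider slicing the following fragment with respect to the value of the variable x at the final statement; the statements retained by the slice are marked in the comments.

    static int example(int a, int b) {
        int x = 0;              // in the slice
        if (a > 0) {            // in the slice: controls the update of x
            x = a * 2;          // in the slice
        }
        int y = b + 1;          // not in the slice: y never influences x
        System.out.println(y);  // not in the slice
        return x;               // slicing criterion: the value of x at this point
    }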
[The compiler-generated code of Figure 5 is not reproduced in full here; the left hand side of the figure shows the generated subscript function, and the right hand side shows the generated aggregation function Accumulate(ADR Box current block, ADR Box current tile, ADR Box querybox).]
Fig. 5. Compiler Generated Subscript and Aggregation Functions
Slicing has been very frequently used in software development environments, for debugging, program analysis, merging two different versions of the code, and software maintenance and testing. A number of techniques have been presented for accurate program slicing across procedure boundaries [32]. Since object-oriented languages have been frequently used for developing production level software, significant attention has been paid towards developing slicing techniques in the presence of object-oriented features like object references, polymorphism, and more recently, Java features like threads and exceptions. Harrold et al. and Tonnela et al. have particularly focused on slicing in the presence of polymorphism, object references, and exceptions [17, 28]. Slicing in the presence of aliases and reference types has also been addressed [3].
5.2.2 Extracting Range Function
We need to determine the rhs and lhs collections of objects for this loop. We also need to provide the range function R.
The rhs and lhs collections of objects can be computed easily by inspecting the assignment statements inside the loop and in any functions called inside the loop. Any collection which is modified in the loop is considered a lhs collection, and any other collection touched in the loop is considered a rhs collection.
For computing the domain, we inspect the foreach loop and look at the domain
over which the loop iterates. Then, we compute a slice of the program using
the entry of the loop as the program point and the domain as the variable.
5.2.3 Extracting Subscript Functions
The subscript functions S_R and S_L are particularly important for the run-time system, as they determine the size of the lhs collections written in the loop and the rhs disk blocks from each collection that contribute to the lhs collections.
The function S_L can be extracted using slicing as follows. Consider any statement in the loop which modifies any lhs collection. We focus on the variable or expression used to access an element in the lhs collection. The slicing criterion we choose is the value of this variable or expression at the beginning of the statement where the lhs collection is modified.
The function S_R can be extracted similarly. Consider any statement in the loop which reads from any rhs collection. The slicing criterion we use is the value of the expression used to access the collection at the beginning of such a statement.
Typically, the value of the iterator will be included in such slices. Suppose the iterator is p. After first encountering p in the slice, we do not follow data dependencies for p any further. Instead, the functions returned by the slice use such an iterator as the input parameter.
For the virtual microscope template presented in Figure 1, the slice computed for the subscript function S_L is shown at the left hand side of Figure 4 and the code generated by the compiler is shown on the left hand side of Figure 5. In the original source code, the rhs collection is accessed with just the iterator; therefore, the subscript function S_R is the identity function. The generated function receives the coordinates of an element in the rhs collection as a parameter (iterpt) from the run-time system and returns the coordinates of the corresponding lhs element. Titanium multidimensional points are supported by ADR as a class named ADR Pt. Also, in practice, the command line parameters passed to the program are extracted and stored in a data structure, so that the run-time system does not need to explicitly read the args array.
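Since the listing of Figure 5 is not reproduced here, the following sketch only suggests what such a generated S_L function might look like; the accessor methods on the ADR Pt class (written here as ADR_Pt) and the way lowend and subsamp are obtained are assumptions on our part.

    // Hypothetical sketch of the generated lhs subscript function S_L
    ADR_Pt subscriptL(ADR_Pt iterpt) {
        ADR_Pt outpoint = new ADR_Pt(2);
        // lowend and subsamp were obtained by the slice over the command-line arguments
        outpoint.set(0, (iterpt.get(0) - lowend.get(0)) / subsamp);
        outpoint.set(1, (iterpt.get(1) - lowend.get(1)) / subsamp);
        return outpoint;
    }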
5.2.4 Extracting Aggregation Functions
For extracting the aggregation function A_i, we look at the statement in the loop where the lhs collection O_i is modified. The slicing criterion we choose is the value of the element from the collection which is modified in this statement, at the beginning of this statement.
The virtual microscope template presented in Figure 1 has only one aggregation
function. The slice for this aggregation function is shown in Figure 4 and
the actual code generated by the compiler is shown in Figure 5. The function
Accum accessed in this code is obviously part of the slice, but is not shown here.
The generated function iterates over the elements of a disk block and applies the aggregation functions on each element, if that element intersects with the range of the loop and the current tile. The function takes as parameters current block (the disk block being processed), current tile (the portion of the lhs collection which is currently allocated in memory), and querybox, which is the iteration range for the loop. Titanium rectangular domains are supported by the run-time system as ADR Box. Further details of this aggregation function are explained after presenting the combined compiler/run-time loop processing.
6 Combined Compiler and Run-time Processing
In this section we explain how the compiler and run-time system can work
jointly towards performing data intensive computations.
6.1 Initial Processing of the Input
The system stores information about how each of the rhs collections of objects I_i is stored across the disks. Note that after we apply loop fission, all rhs collections accessed in the same loop have identical layout and partitioning. The compiler generates appropriate ADR functions to analyze the meta-data about the collections I_i, the range function R, and the subscript function S_R, and compute the list of disk blocks of I_i that are accessed in the loop. The domain of each rhs collection accessed in the loop is S_R(R). Note that if a disk block is included in this list, it is not necessary that all elements in this disk block are accessed during the loop. However, for the initial planning phase, we focus on the list of disk blocks.
We assume a model of parallel data intensive computation in which a set of disks is associated with each node of the parallel machine. This is consistent with systems like the IBM SP and clusters of workstations. Let the set {P_1, ..., P_p} denote the list of processors in the system. Then, the information computed by the run-time system after analyzing the range function, the input subscript function and the meta-data about each of the collections of objects I_i is the set of disk blocks B_ij. For a given input collection I_i and a processor P_j, B_ij is the set of disk blocks b that contain data for the collection I_i, are resident on a disk connected to processor P_j, and intersect with S_R(R).
Further, for each disk block b_ijk belonging to the set B_ij, we compute the information D(b_ijk), which denotes the subset of the domain S_R(R) which is resident on the disk block b_ijk. Clearly, the union of the domains covered by all selected disk blocks will cover the entire area of interest, or in formal terms, the union of D(b_ijk) over all selected disk blocks contains S_R(R).
6.2 Work Partitioning
One of the issues in processing any loop in parallel is work or iteration partitioning, i.e., deciding which iterations are executed on which processor.
The work distribution policy we use is that each iteration is performed on the owner of the element read in that iteration. This policy is the opposite of the owner computes policy [19] which has been commonly used in distributed memory compilers, in which the owner of the lhs element works on the iteration. The rationale behind our approach is that the system will not have to communicate blocks of the rhs collections. Instead, only replicated elements of the lhs collections need to be communicated to complete the computation. Note that the assumptions we have placed on the nature of the loops require replacing an initial loop by a sequence of canonical loops, which may also increase the net communication between processors. However, we do not consider it to be a problem for the set of applications we target.
6.3 Allocating Output Buffers and Strip Mining
The distribution of the rhs collections is decided before performing the processing, and we have decided to partition the iterations accordingly. We now need to allocate buffers to accumulate the local contributions to the final lhs objects. We use run-time analysis to determine the elements of the output collections which are updated by the iterations performed on a given processor. This run-time analysis is similar to the one performed by run-time libraries for executing irregular applications on distributed memory machines. Any element which is updated by more than one processor is initially replicated on all processors by which it is updated. Several different strategies for the allocation of buffers have been studied in the context of the run-time system [9]. Selecting among these different strategies for the compiler generated code is a topic for future research.
The memory requirements of the replicated output space are typically higher than the available memory on each processor. Therefore, we need to divide the replicated output buffer into chunks that can be allocated in the main memory of each processor. This is the same issue as strip mining or tiling used for improving cache performance. We have so far used only a very simple strip mining strategy. We query the run-time system to determine the available memory that can be allocated on a given processor. Then, we divide the lhs space into blocks of that size. Formally, we divide the lhs domain S_L(R) into a set of smaller domains (called strips) {S_1, S_2, ...}. Since each of the lhs collections of objects in the loop is accessed through the same subscript function, the same strip mining is done for each of them.
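As a simple illustration of the strip mining strategy just described (the run-time query and the splitting helper shown here are hypothetical names, not the actual ADR interface):

    // Divide the replicated lhs domain into strips that fit in memory
    long availableBytes   = runtimeSystem.queryAvailableMemory();   // memory per processor
    long bytesPerElement  = numLhsCollections * lhsElementSize;     // one element of each lhs collection
    long elementsPerStrip = availableBytes / bytesPerElement;
    // split the lhs domain S_L(R) into strips of at most elementsPerStrip elements;
    // every lhs collection uses the same strips, since they share the subscript function
    java.util.List<RectDomain> strips = splitDomain(lhsDomain, elementsPerStrip);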
In performing the computations on each processor, we will iterate over the set of strips, allocate that strip for each of the output collections, and compute the local contributions to each strip, before allocating the next strip. To facilitate this, we compute the set of rhs disk blocks that will contribute to each strip of the lhs.
6.4 Mapping Input to the Output
We use the subscript functions S_R and S_L for computing the set of rhs disk blocks that will contribute to each strip of the lhs, as indicated above. To do this, we apply the function S_L(S_R^{-1}(.)) to each D(b_ijk) to obtain the corresponding domain in the lhs region. The output domains that each disk block can contribute towards are denoted as OD(b_ijk). If D(b_ijk) is a rectangular domain and if the subscript functions are monotonic, OD(b_ijk) will be a rectangular domain and can easily be computed by applying the subscript function to the two extreme corners. If this is not the case, the subscript function needs to be applied to each element of D(b_ijk) and the resulting OD(b_ijk) will just be a domain and not a rectangular domain. Formally, we compute the sets L_jl, for each processor j and each output strip l, such that

    L_jl = { b_ijk in B_ij | OD(b_ijk) intersects S_l }.
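When D(b_ijk) is rectangular and the subscript functions are monotonic, the corner-mapping computation described above might be sketched as follows (the helper names and the RectDomain constructor used here are assumptions):

    // Map a rectangular input domain D(b_ijk) to the output domain OD(b_ijk)
    // under a monotonic subscript function, using only the two extreme corners
    RectDomain mapOutput(RectDomain d) {
        ADR_Pt lo = subscriptL(d.lowCorner());
        ADR_Pt hi = subscriptL(d.highCorner());
        return new RectDomain(lo, hi);
    }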
6.5 Actual Execution
The computation of the sets L_jl marks the end of the loop planning phase of the run-time system. Using this information, the actual computations are now performed on each processor. The structure of the computation is shown in Figure 6. In practice, the computation associated with each rhs disk block and the retrieval of disk blocks are overlapped by using asynchronous I/O operations.
We now explain the aggregation function generated by the compiler for the
virtual microscope template presented in Figure 1, shown in Figure 5 on the
right hand side. The accumulation function output by the compiler captures
the Foreach element part of the loop execution model shown in Figure 6.

    For each output strip S_l:
        Execute on each processor P_j:
            Allocate and initialize strip S_l for O_1, ..., O_m
            Foreach disk block b_ijk in L_jl:
                Read block b_ijk from the disks
                Foreach element e of D(b_ijk):
                    If the output point intersects with S_l:
                        Evaluate the functions A_1, ..., A_m
            Global reduction to finalize the values for S_l

Fig. 6. Loop Execution on Each Processor

The
run-time system computes the sets L_jl as explained previously and invokes the aggregation function in a loop that iterates over each disk block in such a set. The current compiler generated code computes the rectangular domain D(b_ijk) in each invocation of the aggregation function, by doing an intersection of the current block and the query block. The resulting rectangular domain is denoted by box.
The aggregation function iterates over the elements of box. The conditional if project() achieves two goals. First, it applies the subscript functions to determine the lhs element outputpt corresponding to the rhs element inputpt. Second, it checks if outputpt belongs to current tile. The actual aggregation is applied only if outputpt belongs to current tile. This test represents a significant source of inefficiency in the compiler generated code. If the tile or strip being currently processed represents a rectangular rhs region and the subscript functions are monotonic, then the intersection of the box and the tile can be performed before the inner loop. This is in fact done in the hand customization of ADR for the virtual microscope [16]. Performing such an optimization automatically is a topic for future research and beyond the scope of our current work.
7 Current Implementation and Experimental Results
In this section, we describe some of the features of the current compiler and
then present experimental results comparing the performance of compiler generated
customization for three codes with hand customized versions.
Our compiler has been implemented using the publicly available Titanium
infrastructure from Berkeley [36]. Our current compiler and run-time system
only implement a subset of the analysis and processing techniques described
in this paper. Two key limitations are as follows. We can only process codes
in which the rhs subscript function is the identity function. It also requires
that the domain over which the loop iterates is a rectangular domain and
all subscript functions are monotonic. The Titanium language is an explicitly parallel dialect of Java for numerical computing. We could use the Titanium front-end without performing any modifications. The Titanium language includes Point, RectDomain, and the foreach loop, which we required for our purposes. The concept of Reducinterface is not part of the Titanium language, but no modifications to the parser were required for this purpose. Titanium also includes a large number of additional directives which we did not require, and has significantly different semantics for foreach loops.
We have used three templates for our experiments.
VMScope1 is identical to the code presented in Figure 1. It models a virtual
microscope, which provides a realistic digital emulation of a microscope,
by allowing the users to specify a rectangular window and a subsampling
factor. The version VMScope1 averages the colors of neighboring pixels to
create a pixel in the output image.
VMScope2 is similar to VMScope1, except for one important difference. Instead of taking the average of the pixels, it only picks every subsamp-th element along each dimension to create the final output image. Thus, only memory accesses and copying are involved in this template; no computations are performed.
Bess models computations associated with water contamination studies over bays and estuaries. The computation performed in this application determines the transport of contaminants, and accesses large fluid velocity datasets generated by a previous simulation.
[Figure 7, "Virtual Microscope (averaging)": execution time versus the number of processors (1, 2, 4, 8) for the Compiled and Original (hand coded) versions.]
Fig. 7. Comparison of Compiler and Hand Generated Versions for VMScope1
These three templates represent data intensive applications from two important
domains, digital microscopy and scientific simulations. The computations
and data accesses associated with these computations are relatively simple and
can be handled by our current prototype compiler. Moreover, we had access to
hand coded ADR customization for each of these three templates. This allowed
us to compare the performance of compiler generated versions against hand
coded versions whose performance had been reported in previously published
work [16, 23].
Our experiments were performed using the ADR run-time system ported on a
cluster of dual processor 400 MHz Intel Pentium nodes connected by gigabit
ethernet. Each node has 256 MB of main memory and several GB of internal disk.
Experiments were performed using 1, 2, 4 and 8 nodes of the cluster. ADR run-time
system's current design assumes a shared nothing architecture and does
not exploit multiple CPUs on the same node. Therefore, only one processor
on each node was used for our experiments.
The results comparing the performance of compiler generated and hand customized
VMScope1 are shown in Figure 7. A microscope image of 19,760 × 15,360 pixels was used. Since each pixel in this application takes 3 bytes, a total of 910 MB is required for storage of such an image. A query with a bounding box of size 10,000 × 10,000 and a subsampling factor of 8 was used.
The time taken by the compiler generated version ranged from 73 seconds on
1 processor to 13 seconds on 8 processors. The speedup on 2, 4, and 8 processors
was 1.86, 3.32, and 5.46, respectively. The time taken by the hand coded
version ranged from 68 seconds on 1 processor to 8.3 seconds on 8 processors.
The speedup on 2, 4, and 8 processors was 2.03, 4.09, and 8.2, respectively.
Since the code is largely I/O and memory bound, slightly better than linear
speedup is possible. The performance of compiler generated code was lower
by 7%, 10%, 25%, and 38% on 1, 2, 4, and 8 processors respectively.
From this data, we see that the performance of the compiler generated code is very close to the hand coded one in the 1 processor case, but is substantially lower in the 8 processor case. We carefully compared the compiler generated and hand coded versions to understand these performance factors. The two codes use different tiling strategies for the lhs collections. In the hand coded version, an irregular strategy is used which ensures that each input disk block maps entirely into a single tile. In the compiler version, a simple regular tiling is used, in which each input disk block can map to multiple tiles. As shown in Figure 5, the compiler generated code performs an additional check in each iteration, to determine if the lhs element intersects with the tile currently being processed. In comparison, the tiling strategy used for the hand coded version ensures that this check always returns true, and therefore it does not need to be inserted in the code. But, because of the irregular tiling strategy, an irregular mapping is required between the bounding box associated with each disk block and the actual coordinates on the allocated output tile. This mapping needs to be carried out after each rhs disk block is read into memory. The time required for performing such a mapping is proportional to the number of rhs disk blocks processed by each processor for each tile. Since the output dataset is actually quite small in our experiments, the number of rhs disk blocks processed by each processor per tile decreases as we go to larger configurations. As a result, the time required for this extra processing reduces. In comparison, the percentage overhead associated with the extra checks in each iteration performed by the compiler generated version remains unchanged. This difference explains why the compiler generated code is slower than the hand coded one, and why the difference in performance increases as we go to a larger number of processors.
[Figure 8, "Virtual Microscope (subsampling)": execution time versus the number of processors for the Compiled and Original (hand coded) versions.]
Fig. 8. Comparison of Compiler and Hand Generated Versions for VMScope2
The results comparing the performance of compiler generated and hand coded
VMScope2 are shown in Figure 8. This code was executed on the same dataset
and query as used for VMScope1. The time taken by the compiler generated
version ranged from 44 seconds on 1 processor to 9 seconds on 8 processors.
The hand coded version took 47 seconds on 1 processor and nearly 5 seconds
on 8 processors. The speedup of the compiler generated version was 2.03, 3.31,
and 4.88 on 2, 4, and 8 processors respectively. The speedup of the hand coded
version was 2.38, 4.98, 10.0 on 2, 4, and 8 processors respectively.
A number of important observations can be made. First, though the same
query is executed for VMScope1 and VMScope2 templates, the execution times
are lower for VMScope2. This is clearly because no computations are performed
in VMScope2. However, a factor of less than two difference in execution times
shows that both the codes are memory and I/O bound and even in VMScope1,
the computation time does not dominate. The speedups for hand coded version
of VMScope2 are higher. This again is clearly because this code is I/O and
memory bound.
The performance of the compiler generated version was better by 6% on 1
processor, and was lower by 10% on 2 processors, 29% on 4 processors, and
48% on 8 processors. This difference in performance is again because of the difference in the tiling strategies used, as explained previously. Since this template does not perform any computations, the difference in the conditionals and the extra processing for each disk block has a more significant effect on the overall
performance. In the 1 processor case, the additional processing required for
each disk block becomes so high that the compiler generated version is slightly
faster. Note that the hand coded version was developed for optimized execution
on parallel systems and therefore is not highly tuned for sequential case.
For the 8 processor case, the extra cost of conditional in each iteration becomes
dominant for the compiler generated version. Therefore, the compiler
generated version is almost a factor of 2 slower than the hand coded one.
[Figure 9, "Bays and Estuaries Simulation System": execution time versus the number of processors for the Compiled and Original (hand coded) versions.]
Fig. 9. Comparison of Compiler and Hand Generated Versions for Bess
The results comparing performance of compiler generated and hand coded
version for Bess are shown in Figure 9. The dataset comprises a grid with 2113 elements and 46,080 time-steps. Each time-step has 4 4-byte floating point numbers per grid element, denoting simulated hydrodynamics parameters previously computed. Therefore, the memory requirements of the dataset
are 1.6 GB. The Bess template we experimented with performed weighted
averaging of each of the 4 values for each column, over a specied number of
time-steps. The number of time-steps used for our experiments was 30,000.
The execution times for the compiler generated version ranged from 131 seconds
on 1 processor to 11 seconds on 8 processors. The speedup on 2, 4, and
8 processors was 1.98, 5.53, and 11.75 respectively. The execution times for
the hand coded version ranged from 97 seconds on 1 processor to 9 seconds
on 8 processors. The speedup on 2, 4, and 8 processors was 1.8, 5.4, and 10.7
respectively. The compiler generated version was slower by 25%, 19%, 24%, and 19% on 1, 2, 4, and 8 processors, respectively.
We now discuss the factors behind the difference in performance of the compiler
generated and hand coded Bess versions. As with both the VMScope versions,
the compiler generated Bess performs checks for intersecting with the tile for
each pixel. The output for this application is very small, and as a result, the
hand coded version explicitly assumes a single output tile. The compiler generated
version cannot make this assumption and still inserts the conditionals.
The amount of computation associated with each iteration is much higher for
this application. Therefore, the percentage overhead of the extra test is not
as large as for the VMScope templates. The second important difference between
the compiler generated and hand coded versions is how averaging is done. In
the compiler generated code, each value to be added is rst divided by the
total number of values which are being added. In comparison, the hand coded
version performs the summation of all values rst, and then performs a single
division. The percentage overhead of this is independent of the number of processors
used. We believe that the second factor is the dominant reason for the
difference in performance of the two versions. This also explains why the percentage difference in performance remains unchanged as the number of processors
is increased. The performance of compiler generated code can be improved by
performing the standard strength reduction optimization. However, the compiler
needs to perform this optimization interprocedurally, which is a topic for
future work.
As an average over these three templates and the 1, 2, 4, and 8 processor configurations, the compiler generated versions are 21% slower than the hand coded ones. Considering the high programming effort involved in managing and optimizing
disk accesses and computations on a parallel machine, we believe that a 21%
slow-down from automatically generated code will be more than acceptable
to the application developers. It should also be noted that the previous work
in the area of out-of-core and data intensive compilation has focused only
on evaluating the effectiveness of optimizations, and not on any comparisons
against hand coded versions.
Our analysis of the performance differences between compiler generated and hand
coded versions has pointed us to a number of directions for future research.
First, we need to consider more sophisticated tiling strategies to avoid large
performance penalties associated with performing extra tests during loop execution. Second, we need to consider more advanced optimizations like interprocedural
code motion and interprocedural strength reduction to improve the
performance of compiler generated code.
8 Related Work
Our work on providing high-level support for data intensive computing can
be considered as developing an out-of-core Java compiler. Compiler optimizations
for improving I/O accesses have been considered by several projects. The
PASSION project at Northwestern University has considered several different
optimizations for improving locality in out-of-core applications [6, 20]. Some
of these optimizations have also been implemented as part of the Fortran D
compilation system's support for out-of-core applications [29]. Mowry et al.
have shown how a compiler can generate prefetching hints for improving the
performance of a virtual memory system [26]. These projects have concentrated
on relatively simple stencil computations written in Fortran. Besides
the use of an object-oriented language, our work is significantly different in the class of applications we focus on. Our techniques for the execution of loops
are particularly targeted towards reduction operations, whereas previous work
has concentrated on stencil computations. Our slicing based information extraction
for the runtime system allows us to handle applications which require
complex data distributions across processors and disks and for which only
limited information about access patterns may be known at compile-time.
Many researchers have developed aggressive optimization techniques for Java,
targeted at parallel and scientific computations. javar and javab are compilation
systems targeting parallel computing using Java [4]. Data-parallel
extensions to Java have been considered by at least two other projects: Titanium
[36] and HP Java [7]. Loop transformations and techniques for removing
redundant array bounds checking have been developed [12, 25]. Our effort is
also unique in considering persistent storage, complex distributions of data on
processors and disks, and the use of a sophisticated runtime system for optimizing
resources. Other object-oriented data-parallel compilation projects
have also not considered data residing on persistent storage [5, 11, 30].
Program slicing has been actively used for many software engineering applications
like program based testing, regression testing, debugging and software
maintenance over the last two decades [34]. In the area of parallel compi-
lation, slicing has been used for communication optimizations by Pugh and
Rosser [31] and for transforming multiple levels of indirection by Das and
Saltz [15]. We are not aware of any previous work on using program slicing
for extracting information for the runtime system.
Several research projects have focused on parallelizing irregular applications,
such as computational fluid dynamics codes on irregular meshes and sparse
matrix computations. This research has demonstrated that by developing run-time
libraries and compiler analysis that can place these runtime calls, such irregular codes can be compiled for efficient execution [2, 21, 24, 35]. Our project is related to these efforts in the sense that our compiler also heavily uses a runtime system. However, our project is also significantly different. The language we need to handle can have aliases and object references, the applications involve disk accesses and persistent storage, and the runtime system we need to interface to works very differently.
Several runtime support libraries and file systems have been developed to support efficient I/O in a parallel environment [13, 14, 22, 33]. They also usually provide a collective I/O interface, in which all processing nodes cooperate to make a single large I/O request. Our work is different in two important ways. First, we are supporting a much higher level of programming by involving a compiler. Second, our target runtime system, ADR, also differs from these systems in several ways. The computation is an integral part of the ADR framework. With the collective I/O interfaces provided by many parallel I/O systems, data processing usually cannot begin until the entire collective I/O operation completes. Also, data placement algorithms optimized for range queries are integrated as part of the ADR framework.
9 Conclusions
In this paper we have addressed the problem of expressing data intensive computations in a high-level language and then compiling such codes to efficiently manage data retrieval and processing on a parallel machine. We have developed data-parallel extensions to Java for expressing this important class of applications. Using our extensions, the programmers can specify the computations assuming that there is a single processor and flat memory.
Conceptually, our compiler design has two major new ideas. First, we have shown how loop fission followed by interprocedural program slicing can be used for extracting important information from general object-oriented data-parallel loops. This technique can be used by other compilers that use a run-time system to optimize for locality or communication. Second, we have shown how the compiler and run-time system can use such information to efficiently execute data intensive reduction computations. This technique for processing such loops is independent of the source language.
These techniques have been implemented in a prototype compiler built using the Titanium front-end. We have used three templates, from the areas of digital microscopy and scientific simulations, for evaluating the performance of this compiler. We have compared the performance of compiler generated code with the performance of codes developed by customizing the run-time system ADR manually. Our experiments have shown that the performance of compiler generated codes is, on the average, 21% slower than the hand coded ones, and in all cases within a factor of 2. We believe that these results establish that our approach can be very effective. Considering the high programming effort involved in managing and optimizing disk accesses and computation on a parallel machine, we believe that a 21% slow-down from automatically generated code will be more than acceptable to the application developers. It should also be noted that the previous work in the area of out-of-core and data intensive compilation has focused only on evaluating the effect of optimizations, and not on any comparisons against hand coded versions. Further, we believe that by considering more sophisticated tiling strategies and other optimizations like interprocedural code motion and strength reduction, the performance of the compiler generated codes can be further improved.
Acknowledgments
We are grateful to Chialin Chang, Anurag Acharya, Tahsin Kurc, Alan Sussman
and other members of the ADR team for developing the run-time system,
developing hand customized versions of applications, helping us with the ex-
periments, and for many fruitful discussions we had with them during the
course of this work.
--R
Angelo De- marzo
Interprocedural compilation of irregular applications for distributed memory machines.
A prototype Java restructuring compiler.
Distributed pC
A model and compilation strategy for out-of-core data parallel programs
A customizable parallel database for multi-dimensional data
Infrastructure for building parallel database systems for multi-dimensional data
Alan Sussman
Concurrent aggregates (CA).
The Vesta parallel
Input/Output characteristics of Scalable Parallel Applications.
Paul Havlak
The Virtual Microscope.
High Performance Fortran Forum.
Compiling Fortran D for MIMD distributed-memory machines
Improving the performance of out-of-core computations
Compiling global name-space parallel loops for distributed execution
Coupling multiple simulations via a high performance customizable database system.
Exploiting spatial regularity in irregular iterative applications.
Automatic compiler-inserted i/o prefetching for out-of-core applications
NASA Goddard Distributed Active Archive Center (DAAC).
Compiler support for out-of-core arrays on parallel machines
object-oriented languages
Iteration space slicing and its application to communication optimization.
Speeding up slicing.
A survey of program slicing techniques.
--TR
Dynamic slicing in the presence of unconstrained pointers
Compiling Fortran D for MIMD distributed-memory machines
object-oriented languages
Speeding up slicing
A model and compilation strategy for out-of-core data parallel programs
Interprocedural compilation of irregular applications for distributed memory machines
Input/output characteristics of scalable parallel applications
Index array flattening through program transformation
The Vesta parallel file system
Automatic compiler-inserted I/O prefetching for out-of-core applications
Flow insensitive C++ pointers and polymorphism analysis and its application to slicing
Iteration space slicing and its application to communication optimization
Reuse-driven interprocedural slicing
Concurrent aggregates (CA)
Passion
Distributed Memory Compiler Design For Sparse Problems
Compiling Global Name-Space Parallel Loops for Distributed Execution
Titan
Improving the Performance of Out-of-Core Computations
Infrastructure for Building Parallel Database Systems for Multi-Dimensional Data
Exploiting spatial regularity in irregular iterative applications
Compiler support for out-of-core arrays on parallel machines | data intensive applications;data parallel language;run-time support;compiler techniques |
605734 | Parallel two level block ILU Preconditioning techniques for solving large sparse linear systems. | We discuss issues related to domain decomposition and multilevel preconditioning techniques which are often employed for solving large sparse linear systems in parallel computations. We implement a parallel preconditioner for solving general sparse linear systems based on a two level block ILU factorization strategy. We give some new data structures and strategies to construct a local coefficient matrix and a local Schur complement matrix on each processor. The preconditioner constructed is fast and robust for solving certain large sparse matrices. Numerical experiments show that our domain based two level block ILU preconditioners are more robust and more efficient than some published ILU preconditioners based on Schur complement techniques for parallel sparse matrix solutions. | Introduction
High performance computing techniques, including parallel and distributing computa-
tions, have undergone a gradual maturation process in the past two decades and are now
moving from experimental laboratory studies into many engineering and scientific appli-
cations. Although shared memory parallel computers are relatively easy to program, the
most commonly used architecture in parallel computing practices is that of distributed
Technical Report No. 305-00, Department of Computer Science, University of Kentucky, Lexington,
KY, 2000. This research was supported in part by the U.S. National Science Foundation under grants CCR-
9902022 and CCR-9988165, in part by the University of Kentucky Center for Computational Sciences and
the University of Kentucky College of Engineering.
y E-mail: cshen@cs.uky.edu.
z E-mail: jzhang@cs.uky.edu. URL: http://www.cs.uky.edu/-jzhang.
memory computers, using MPI or PVM for message passing [17, 20]. Even on shared
memory parallel computers, the use of MPI for code portability has made distributed
programming style prevalent. As a result, developing efficient numerical linear algebra
algorithms specifically aiming at high performance computers becomes a challenging issue
[9, 10].
In many numerical simulation and modeling problems, the most CPU consuming part
of the computations is to solve some large sparse linear systems. It is now accepted that,
for solving very large sparse linear systems, iterative methods are becoming the method
of choice, due to their more favorable memory and computational costs, comparing to the
direct solution methods based on Gaussian elimination. One drawback of many iterative
methods is their lack of robustness, i.e., an iterative method may not yield an acceptable
solution for a given problem. A common strategy to enhance the robustness of iterative
methods is to exploit preconditioning techniques. However, most robust preconditioners
are derived from certain types of incomplete LU factorizations of the coefficient matrix and
their efficient implementations on parallel computers are a nontrivial challenge.
A recent trend in parallel preconditioning techniques for general sparse linear systems
is to use ideas from domain decomposition concepts in which a processor is assigned
a certain number of rows of the linear system to be solved. For discussions related to this
point of view and comparisons of different domain decomposition strategies, see [3, 19, 34]
and the references therein. A simple parallel preconditioner can be derived using some
simple parallel iterative methods. Commonly used parallel preconditioners in engineering
computations are point or block Jacobi preconditioners [4, 36]. These preconditioners are
easy to implement, but are not very efficient, in the sense that the number of preconditioned
iterations required to solve realistic problems is still large [35]. A more sophisticated
approach to parallel preconditioning is to use domain decomposition and Schur complement
strategies for constructing parallel preconditioners [34]. Preconditioners constructed
from this approach may be scalable, i.e., the number of preconditioned iterations does
not increase rapidly as the number of processors increases. Some techniques in this class
include various distributed Schur complement methods for solving general sparse linear
systems developed in [2, 5, 28, 27].
For sparse matrices arising from (finite difference) discretized partial differential equations
(PDEs), a level set technique can usually be employed to extract inherent parallelism
from the discretization schemes. If an ILU(0) factorization is performed, then the forward
and backward triangular solves associated with the preconditioning can be parallelized
within each level set. This approach seems most suitable for implementations on shared
memory machines with a small number of processors [11]. For many realistic problems with
unstructured meshes, the parallelism extracted from the level set strategy is inadequate.
Furthermore, ILU(0) preconditioner may not be accurate enough and the subsequent preconditioned
iterations may converge slowly or may not converge at all. Thus, higher
accuracy preconditioners have been advocated by a few authors for increased robustness
[8, 21, 37, 45, 24, 30]. However, higher accuracy preconditioners usually means that more
fill-in entries are kept in the preconditioners and the couplings among the nodes are increased
as well [24]. The increased couplings reduce inherent parallelism and new ordering
techniques must be employed to extract parallelism from higher accuracy preconditioners.
In addition to standard domain decomposition concepts, preconditioning techniques
designed specifically to target parallel computers include sparse approximate inverse and
multilevel treatments [1, 7, 14, 39, 40]. Although claimed as inherently parallel precon-
ditioners, efficient sparse approximate inverse techniques that can be run respectfully on
distributed parallel computers are scarce [9]. Recently, a class of high accuracy preconditioners
that combine the inherent parallelism of domain decomposition, the robustness of
ILU factorization, and the scalability potential of multigrid method have been developed
[30, 31]. The multilevel block ILU preconditioners (BILUM and BILUTM) have been
tested to show promising convergence rate and scalability for solving certain problems.
The construction of these preconditioners are based on block independent set ordering
and recursive block ILU factorization with Schur complements. Although this class of
preconditioners contain obvious parallelism within each level, their parallel implementations
have not yet been reported.
In this study, we mainly address the issue of implementing the multilevel block ILU
preconditioners in a distributed environment using distributed sparse matrix template [26].
The BILUTM preconditioner of Saad and Zhang [31] is modified to be implemented as a
two level block ILU preconditioner on distributed memory parallel architectures (PBILU2).
We used Saad's PSPARSLIB library 1 with MPI as basic communication routines. Our
PBILU2 preconditioner is compared with one of the most favorable Schur complement
based preconditioners of [27] in a few numerical experiments.
This article is organized as follows. In Section 2 some background on block independent
set ordering and the BILUTM preconditioner is given. In Section 3, we outline
the distributed representations of general sparse linear systems. In Section 4, we discuss
the construction of a preconditioner (PBILU2) based on two level block ILU factorization.
Numerical experiments with a comparison of two Schur complement based preconditioners
for solving various distributed linear systems are presented in Section 5 to demonstrate
the merits of our two level block ILU preconditioner. Concluding remarks and comments
on future work are given in Section 6.
2 Block Independent Set and BILUTM
Most distributed sparse matrix solvers rely on classical domain decomposition concept to
partition the adjacency graph of the coefficient matrix. There are a few graph partitioning
1 The PSPARSLIB library is available online from http://www.cs.umn.edu/Research/arpa/p sparslib/psp-abs.html.
algorithms and software packages available [16, 18, 22]. Techniques to extract parallelism
from incomplete LU factorizations, such as BILUM and BILUTM, usually rely on the
fact that many rows of a sparse matrix can be eliminated simultaneously at a given stage
of Gaussian elimination. A set consisting of such rows is called an independent set [13].
For large scale matrix computations, the degree of parallelism extracted from traditional
(point) independent set ordering is inadequate and the concept of block independent set
is proposed [30]. Thus a block independent set is a set of groups (blocks) of unknowns
such that there is no coupling between unknowns of any two different groups (blocks) [30].
Various heuristic strategies for finding point independent sets may be extended to find a
block independent set with different properties [30]. A simple and usually efficient strategy
is the so-called greedy algorithm, which groups the nearest nodes together. Considering a
general sparse linear system of the form
where A is an unstructured real-valued matrix of order n. The greedy algorithm (or other
graph partitioners) is used to find a block independent set from the adjacency graph of
the matrix A. Initially, the candidate nodes for a block include all nodes corresponding
to each row of the matrix A. Given a block size k, the greedy algorithm starts from the
first node, groups the nearest k neighboring nodes, and drops the other nodes which are
linked to any of the grouped k nodes into the vertex cover set. Here the vertex cover set is
a set of nodes that have at least one link to at least one node of at least one block of the
block independent set. The process can be repeated for a few times until all the candidate
nodes have gone either into one of the independent blocks or into the vertex cover set. (If
the number of remaining candidate nodes is less than k, all of them are put in the vertex
cover set, and the meaning of the vertex cover set is then generalized to cover this case.)
For detailed algorithm descriptions, see [30]. We remark that it is not necessary that all
independent blocks have the same number of nodes [33]. They are chosen to have the
same cardinality for the sake of load balance in parallel computations and for the sake of
easy programming.
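The following Java-style sketch of the greedy blocking strategy described above is only illustrative; the adjacency-list representation and all helper names are our own assumptions and do not reflect the actual implementation in [30].

    // Greedy search for a block independent set with uniform block size k
    // (uses java.util.*; adj.get(i) lists the nodes coupled with node i in the graph of A)
    static List<int[]> greedyBlocks(int n, List<List<Integer>> adj, int k, List<Integer> cover) {
        boolean[] assigned = new boolean[n];        // placed in a block or in the vertex cover
        List<int[]> blocks = new ArrayList<>();
        for (int start = 0; start < n; start++) {
            if (assigned[start]) continue;
            List<Integer> block = new ArrayList<>();
            Deque<Integer> frontier = new ArrayDeque<>();
            frontier.add(start);
            // group the nearest unassigned nodes around 'start' until the block has k nodes
            while (!frontier.isEmpty() && block.size() < k) {
                int v = frontier.poll();
                if (assigned[v]) continue;
                assigned[v] = true;
                block.add(v);
                for (int w : adj.get(v)) if (!assigned[w]) frontier.add(w);
            }
            if (block.size() < k) { cover.addAll(block); continue; }   // too few candidates left
            // nodes linked to the new block but not inside it are dropped into the vertex cover
            for (int v : block)
                for (int w : adj.get(v))
                    if (!assigned[w]) { assigned[w] = true; cover.add(w); }
            blocks.add(block.stream().mapToInt(Integer::intValue).toArray());
        }
        return blocks;
    }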
In parallel implementations, a graph partitioner, similar to the greedy algorithm just
described, is first invoked to partition the adjacency graph of A. Based on the resulting
partitioning, the matrix A and the corresponding right hand side and the unknown vectors
b and x are distributed to the individual processors.
Suppose a block independent set with a uniform block size k has been found and the
matrix A is symmetrically permuted into a two by two block matrix of the form

    P A P^T = ( B  F )
              ( E  C ),                                             (2)

where P is a permutation matrix and B = diag(B_1, B_2, ..., B_s) is a block diagonal matrix of dimension ks, where s is the number of uniform blocks of size k. The blocks B_1, ..., B_s are usually dense if k is small, but they are sparse if k is large. In the implementation of BILUM, an exact inverse technique is used to compute B^{-1} by inverting each small block B_i independently. As is noted in [33], such a direct inversion strategy usually produces dense inverse matrices even if the original blocks are highly sparse with large size. There have been several sparsification strategies proposed to maintain the sparsity of B^{-1} [33]. In addition, sparse approximate inverse based multilevel block
ILU preconditioners have been proposed in [43]. In this article, we employ an ILU factorization
strategy to compute a sparse incomplete LU factorization of B. The approach is
similar to the one used for BILUTM [31]. The construction of BILUTM preconditioner
is based on a restricted ILU factorization of (2) with a dual dropping strategy (ILUT)
[31]. This multilevel block ILU preconditioner (BILUTM) not only retains the robustness
and flexibility of ILUT, but also is more powerful than ILUT for solving some difficult
problems and offers inherent parallelism that can be exploited on parallel or distributed
architectures.
3 Distributed Sparse Linear System and SLU Preconditioner
A distributed sparse linear system is a collection of sets of equations that are assigned to
different processors. The parallel solution of a sparse linear system begins with partitioning
the adjacency graph of the coefficient matrix A. Based on the resulting partitioning, the
data is distributed to processors such that pairs of equations-unknowns are assigned to
the same processor. A type of distributed matrix data structure based on subdomain
decomposition concepts has been proposed in [29, 26], also see [28]. Based on these
concepts, after the matrix A is assigned to each processor, the unknowns in each processor
are divided into three types: (1) interior unknowns that are coupled only with local
equations; (2) local interface unknowns that are coupled with both nonlocal (external)
and local equations; and (3) external interface unknowns that belong to other subdomains
and are coupled with local equations. The submatrix assigned to a certain processor, say,
processor i, is split into two parts: the local matrix A i , which acts on the local variables,
and an interface matrix X i , which acts on the external variables. Accordingly, the local
equations in a given processor can be written as
$$A_i x_i + X_i y_{i,ext} = b_i.$$
The local matrix is reordered in such a way that the interface points are listed last after
the interior points. Then we have a local system written in a block format
$$\begin{pmatrix} B_i & F_i \\ E_i & C_i \end{pmatrix} \begin{pmatrix} u_i \\ y_i \end{pmatrix}
+ \begin{pmatrix} 0 \\ \sum_{j \in N_i} E_{ij} y_j \end{pmatrix}
= \begin{pmatrix} f_i \\ g_i \end{pmatrix}, \qquad (3)$$
where $N_i$ is the set of indices of the subdomains that are neighbors to the reference subdomain i.
It is exactly the set of processors that the reference processor needs to communicate with
to receive information. The term $E_{ij} y_j$ is the part of the product $X_i y_{i,ext}$ which reflects the
contribution to the local equations from the neighboring subdomain j. The sum of these
contributions is the result of multiplying $X_i$ by the external interface unknowns, i.e.,
$$X_i y_{i,ext} = \sum_{j \in N_i} E_{ij} y_j.$$
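As an illustration of this splitting, the sketch below carries out the corresponding distributed matrix-vector product in a serial simulation; the dictionary-based data layout and the function name are illustrative stand-ins for the PSPARSLIB data structures, not the actual library interface.

```python
import numpy as np

def distributed_matvec(subdomains, x_parts):
    """Serial simulation of the distributed product y = A x.

    subdomains: list of dicts, one per 'processor', with
        'A_loc'  : local matrix acting on the local unknowns,
        'X_loc'  : interface matrix acting on external interface unknowns,
        'ext_idx': list of (neighbor, local-index) pairs identifying which
                   entries of the neighbors' vectors are the external unknowns.
    x_parts: list of local solution vectors, one per subdomain.
    """
    y_parts = []
    for dom, x_loc in zip(subdomains, x_parts):
        # contribution of the purely local couplings
        y_loc = dom['A_loc'] @ x_loc
        # gather external interface unknowns from the neighboring subdomains
        y_ext = np.array([x_parts[j][idx] for (j, idx) in dom['ext_idx']])
        if y_ext.size:
            y_loc += dom['X_loc'] @ y_ext   # sum of the E_ij y_j contributions
        y_parts.append(y_loc)
    return y_parts
```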
The preconditioners which are built upon this distributed data structure for the original
matrix will not form an approximation to the global Schur complement explicitly. Some of
such domain decomposition based preconditioners are exploited in [27]. The simplest one
is the additive Schwarz procedure, which is a form of block Jacobi (BJ) iteration, where
the blocks refer to the submatrices $A_i$ associated with the individual subdomains, i.e., the
preconditioning step amounts to solving $A_i \delta_i = r_i$ independently in each subdomain.
Even though it can be constructed easily, block Jacobi preconditioning is not robust and
is inefficient compared to other Schur complement type preconditioners. One of the best
among these Schur complement based preconditioners is SLU which is the distributed
approximate Schur LU preconditioner [27]. The preconditioning to the global matrix A is
defined in terms of a block LU factorization which involves a solve with the global Schur
complement system at each preconditioning step. Incomplete LU factorization is used in
SLU to approximate the local Schur complements. Numerical results reported in [27] show
that this Schur (I)LU preconditioner demonstrates superior scalability performance over
block Jacobi preconditioner and is more efficient than the latter in terms of parallel run
time.
4 A Class of Two Level Block Preconditioning Techniques
PBILU2 is a two level block ILU preconditioner based on the BILUTM techniques described
in [31]. As we noted before, BILUTM offers good parallelism and robustness
due to the large size of its block independent sets. The graph partitioner in BILUTM is the
greedy algorithm for finding a block independent set [30, 31].
4.1 Distributed matrix based on block independent set
In our implementation, the block size of the block independent set must be given before
the search algorithm starts. The choice of the block size k is based on the problem size
and the density of the coefficient matrix A. The choice of k may also depend upon the
number of available processors. Assume that a block independent set with a uniform block
size k has been found, and the coefficient matrix A is permuted into a block form as in (2).
The "small" independent blocks are then divided into several groups according to the
number of available processors. For the sake of load balance on each processor, each group
holds approximately the same number of independent blocks. (The numbers of independent
blocks in different groups may differ at most by 1.) At the same time, the global vector
of unknowns x is split into two subvectors u and y. The right hand side vector b is also
conformally split into subvectors f and g. Such a reordering leads to a block system
$$\begin{pmatrix} B & F \\ E & C \end{pmatrix} \begin{pmatrix} u \\ y \end{pmatrix}
= \begin{pmatrix} f \\ g \end{pmatrix}, \quad
B = \mathrm{diag}(B_1, \ldots, B_m), \quad
F = \begin{pmatrix} F_1 \\ \vdots \\ F_m \end{pmatrix}, \quad
E = \begin{pmatrix} E_1 \\ \vdots \\ E_m \end{pmatrix}, \quad
C = \begin{pmatrix} C_1 \\ \vdots \\ C_m \end{pmatrix}, \qquad (4)$$
where m is the number of processors used in the computation. Each block diagonal
submatrix $B_i$ contains several independent blocks. Note that the submatrix
$F_i$ has the same number of rows as the block submatrix $B_i$. The submatrices E and C
are also divided into m parts according to the load balance criterion in order to have
approximately the same amount of work in each processor. $E_i$ and $C_i$ also have the same
number of rows. Those submatrices are assigned to the same processor i. $u_i$
and y i are the local part of the unknown vector, and f i and g i are the local part of the
right hand side vectors. They are partitioned and assigned to a certain processor i at
the same time when the matrix is distributed. When this processor-data assignment is
done, each processor holds several rows of the equations. The local system of equations
in processor i can be written as
$$B_i u_i + F_i y = f_i, \qquad E_i u + C_i y = g_i, \qquad (5)$$
where $u = (u_1^T, \ldots, u_m^T)^T$ is the part of the unknown vector on which the submatrices
$E_i$ act, and y is the other part of the unknown vector, on which $F_i$ and $C_i$ act.
(Only $B_i$ acts on the completely local vector $u_i$.) We take $u_i$ and $y_i$
as the local unknowns, but they are not completely interior vectors. So preconditioners
based on this type of block independent set ordering domain decomposition are different
from the straightforward domain decomposition based on the rowwise strip partitioning
(3) used in [27]. An obvious difference between the partitionings (3) and (5) is that in (5),
the action of $F_i$ is not completely local, while it is local in (3). However, since the nature
of the submatrices is different in the two decomposition strategies, it is
not easy to say which one is better at this stage.
4.2 Derivation of Schur complement techniques
A key idea in domain decomposition techniques is to develop preconditioners for the
global system (1) by exploiting methods that approximately solve the Schur complement
system in parallel. A parallel construction of the PBILU2 preconditioner based on block
independent set domain decomposition for computing an approximation to the global
Schur complement will be described. For deriving the global Schur complement, other
parts of the coefficient matrix A need to be partitioned and sent to a certain processor.
We rewrite the reordered coefficient matrix in the system (2) as
$$\begin{pmatrix} B & F \\ E & C \end{pmatrix}
= \begin{pmatrix} B_1 & & & F_1 \\ & \ddots & & \vdots \\ & & B_m & F_m \\ M_1 & \cdots & M_m & C \end{pmatrix}.$$
Thus there are two ways to partition the submatrix E: one is to partition E by rows and
the other is by columns. That is,
$$E = \begin{pmatrix} E_1 \\ \vdots \\ E_m \end{pmatrix}
= \begin{pmatrix} M_1, & \ldots, & M_m \end{pmatrix}. \qquad (6)$$
The submatrices $M_i$, which will also be assigned to the processor i, have
the same number of columns as the block diagonal submatrix $B_i$ and the
same number of rows as the submatrix C.
Remark 4.1 Here we clarify a potential source of confusion about the two representations of the
submatrix E. The row partitioning of E in (4) is used for representing the "local" matrix
A i in the form of (5), which is different from the local matrix A i in (3) and will be kept
throughout the computational process. The column partitioning of E in (6) is just for the
convenience of computing the Schur complement matrix in parallel. The column partitioning
of E is not kept after the construction of the Schur complement matrix. In most cases,
the submatrix E is small and highly sparse, if B is large.
Consider a block LU factorization of (2) in the form of
$$\begin{pmatrix} B & F \\ E & C \end{pmatrix}
= \begin{pmatrix} I & 0 \\ E B^{-1} & I \end{pmatrix}
\begin{pmatrix} B & F \\ 0 & S \end{pmatrix}, \qquad (7)$$
where S is the global Schur complement:
$$S = C - E B^{-1} F. \qquad (8)$$
Now, suppose we can invert B by some means; then we can rewrite Equation (8) as
$$S = C - \sum_{i=1}^{m} M_i B_i^{-1} F_i. \qquad (9)$$
Each processor can compute one component $M_i B_i^{-1} F_i$ of the sum in (9) independently,
and then partitions the rows of the submatrix $M_i B_i^{-1} F_i$ into m parts; each part of the
rows must be conformal to the corresponding submatrix $C_1, \ldots, C_m$. They are then scattered
to the other processors. (There is a global communication needed for the scattering.) Finally,
the local part of the Schur complement matrix can be constructed independently in each
processor. The simplest implementation for this approach to constructing the distributed
Schur complement matrix in an incomplete LU factorization is to use a parallel block
restricted IKJ version of Gaussian elimination, similar to the sequential algorithm used
in BILUTM [31]. This method can decrease communications among processors and offers
flexibility in controlling the amount of fill-in during the ILU factorization.
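The sketch below mimics this construction serially and with exact (rather than incomplete) factorizations, so that the data flow is explicit: local products M_i B_i^{-1} F_i, row-wise splitting, scattering, and summation into local Schur pieces. All names are illustrative, and the real code replaces the exact solves by the restricted ILU elimination described in the next subsection.

```python
import numpy as np

def assemble_distributed_schur(parts, row_splits):
    """Serial simulation of the distributed Schur complement assembly.

    parts: list of dicts with the pieces held by each 'processor' i:
        'B': block diagonal part B_i, 'F': F_i,
        'M': column piece M_i of E,  'C': row piece C_i of C
        (the rows of C_i are indexed by row_splits[i]).
    row_splits: list of index arrays; row_splits[j] gives the global Schur
        rows owned by processor j.
    Returns the list of local Schur complement rows S_j = C_j - sum_i (...).
    """
    m = len(parts)
    # each processor starts from its own rows of C
    local_S = [parts[j]['C'].copy() for j in range(m)]
    for i in range(m):
        # local product M_i B_i^{-1} F_i (exact solve here, restricted ILU in practice)
        G = parts[i]['M'] @ np.linalg.solve(parts[i]['B'], parts[i]['F'])
        # split G by rows conformally with C_1, ..., C_m and "scatter" the pieces
        for j in range(m):
            local_S[j] -= G[row_splits[j], :]
    return local_S
```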
4.3 Parallel restricted Gaussian elimination
BILUTM is a high accuracy preconditioner based on incomplete LU factorization. It utilizes
a dual dropping strategy of ILUT to control the computational and storage (memory)
costs [24]. Its implementation is based on the restricted IKJ version of Gaussian elimi-
nation, discussed in detail in [31]. In the remaining part of this subsection, we outline a
parallel implementation of the restricted IKJ version of Gaussian elimination, using the
distributed data structure discussed in the previous subsection.
On the ith processor, a local submatrix is formed based on those submatrices assigned
to this processor, and an ILU factorization of this local matrix will be performed. 2 This
local matrix in processor i looks like
$$A_i = \begin{pmatrix} B_i & F_i \\ M_i & \bar{C}_i \end{pmatrix}. \qquad (10)$$
(2: In PBILU2, the "local" matrix $A_i$ means that it is stored in the local processor i. It does not
necessarily mean that $A_i$ acts only on interior unknowns.)
Note that the submatrix $\bar{C}_i$ has the same size as the submatrix C in Equation (7). In
the submatrix $\bar{C}_i$, only the elements corresponding to the nonzero entries of the submatrix
$C_i$ may be nonzero; the other elements are zero. Recalling the permuted matrix in (2)
and in the left hand side of (7), if we let the submatrices $C_j = 0$ for $j \neq i$, the
submatrix $\bar{C}_i$ is then obtained.
We perform a restricted Gaussian elimination on the local matrix (10). This is a
slightly different elimination procedure. First we perform an (I)LU factorization (Gaussian
elimination) of the upper part of the local matrix, i.e., of the submatrix $(B_i \;\; F_i)$. We then
continue the Gaussian elimination on the lower part $(M_i \;\; \bar{C}_i)$, but the elimination is only
performed with respect to the nonzero (and the accepted fill-in) entries of the submatrix $M_i$.
The entries in $\bar{C}_i$ are modified accordingly. When performing these operations on the
lower part, the upper part of the matrix is only accessed, but not modified; see Figure 1.
Figure 1: Illustration of the restricted IKJ version of Gaussian elimination. The regions of the
local matrix are marked as processed, accessed but not modified, or not accessed; the submatrices
are as in Equation (10).
After this is done, three kinds of submatrices are formed, which will be used in later
iterations:
1. The upper part of the matrix after the upper part Gaussian elimination is $(U_{B_i} \;\; L_{B_i}^{-1} F_i)$,
so we have $L_{B_i}^{-1} (B_i \;\; F_i) \approx (U_{B_i} \;\; L_{B_i}^{-1} F_i)$.
2. The (I)LU factorization of the upper part also factors the
block diagonal submatrix $B_i$, so that $B_i \approx L_{B_i} U_{B_i}$. Thus we have
$B_i^{-1} \approx U_{B_i}^{-1} L_{B_i}^{-1}$.
We can extract the submatrices $L_{B_i}$ and $U_{B_i}$ from the upper part of the factored
local matrix for later use.
3. In the restricted factorization of the lower part of $A_i$, we obtain a new reduced
submatrix, which is represented by $\tilde{C}_i$ and will form a piece of the global Schur
complement matrix.
In fact, this submatrix $\tilde{C}_i$ is
$$\tilde{C}_i = \bar{C}_i - M_i B_i^{-1} F_i. \qquad (11)$$
Note that $B_i^{-1} F_i = U_{B_i}^{-1} L_{B_i}^{-1} F_i$, and the factor matrix $L_{B_i}^{-1} F_i$ is already available after
the factorization of the upper part of $A_i$. So $M_i B_i^{-1} F_i$ can be computed in processor
i by first solving for an auxiliary matrix $Q_i$ in $U_{B_i} Q_i = L_{B_i}^{-1} F_i$, followed by a matrix-matrix
multiplication $M_i Q_i$. However, this part of the computation is done implicitly
in the restricted IKJ Gaussian elimination process, in the sense that all computations in
constructing a piece of the Schur complement matrix S, $\tilde{C}_i$ in processor i, are done by a
restricted ILU factorization of the lower part of the local matrix. In other words, $\tilde{C}_i$ is
formed without an explicit linear system solve or matrix-matrix multiplication. For the detailed
computational procedure, see [31].
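A dense, drop-free sketch of this restricted elimination is given below. It is only meant to expose the access pattern (the upper part is factored first, then the lower part is eliminated against it without modifying it), whereas the actual BILUTM/PBILU2 kernels operate on sparse rows with the ILUT dual dropping rule.

```python
import numpy as np

def restricted_ikj_schur(B, F, M, Cbar):
    """Dense sketch of the restricted IKJ elimination forming Cbar - M B^{-1} F.

    The upper part (B F) is LU-factored in place; the lower part (M Cbar) is then
    eliminated against the factored upper part, which is accessed but not modified.
    Dropping (the ILUT part) is omitted, so the result equals the exact Schur piece.
    """
    nB = B.shape[0]
    U = np.hstack([B.copy(), F.copy()])          # upper part, factored in place (IKJ order)
    for i in range(1, nB):
        for k in range(i):
            U[i, k] /= U[k, k]                   # multiplier stored in place of L
            U[i, k + 1:] -= U[i, k] * U[k, k + 1:]
    low = np.hstack([M.copy(), Cbar.copy()])     # lower part (M Cbar)
    for i in range(low.shape[0]):
        for k in range(nB):                      # eliminate against the upper pivots
            if low[i, k] == 0.0:
                continue                         # restriction: skip zero entries of M
            low[i, k] /= U[k, k]
            low[i, k + 1:] -= low[i, k] * U[k, k + 1:]
    return low[:, nB:]                           # the reduced submatrix C~
```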
Considering Equation (9) for the Schur complement computation, it can be rewritten in
the form
$$S = \sum_{i=1}^{m} \big( \bar{C}_i - M_i B_i^{-1} F_i \big).$$
This computation can be done in parallel, thanks to the block diagonal structure of B. If
the Gaussian elimination is an exact factorization, the global Schur complement matrix
can be formed by summing all these submatrices $\tilde{C}_i$ together. That is,
$$S = \sum_{i=1}^{m} \tilde{C}_i. \qquad (12)$$
Each submatrix $\tilde{C}_i$ is divided by rows into m parts (using the same partitioning as for
the original submatrix C), and the corresponding parts are scattered to the relevant processors.
After receiving and summing all of those parts of the submatrices which have been scattered
from different processors, the "local" Schur complement matrix $\tilde{S}_i$ is formed. Here
"local" means the rows of the global Schur complement that are held in a given processor.
Note that the matrices $\tilde{S}_i$ together constitute the approximate global Schur complement $\tilde{S}$.
Remark 4.2 The restricted IKJ Gaussian elimination yields a block (I)LU factorization
of the local matrix (10) in the form of
$$\begin{pmatrix} B_i & F_i \\ M_i & \bar{C}_i \end{pmatrix}
\approx \begin{pmatrix} L_{B_i} & 0 \\ M_i U_{B_i}^{-1} & I_i \end{pmatrix}
\begin{pmatrix} U_{B_i} & L_{B_i}^{-1} F_i \\ 0 & \tilde{C}_i \end{pmatrix}.$$
However, the off-diagonal factor submatrices $L_{B_i}^{-1} F_i$ and $M_i U_{B_i}^{-1}$ are no longer needed in later computations
and are discarded. This strategy saves considerable storage space and is different from the
current implementation of SLU in the PSPARSLIB library [25, 29, 27].
4.4 Induced global preconditioner
It is possible to develop preconditioners for the global system (1) by exploiting methods
that approximately solve the reduced system (8). These techniques are based on reordering
the global system into a two by two block form (2). Consider the block LU factorization
in Equation (7). This block factored matrix can be preconditioned by an approximate
LU factorization such as
$$L U = \begin{pmatrix} I & 0 \\ E B^{-1} & I \end{pmatrix}
\begin{pmatrix} B & F \\ 0 & \tilde{S} \end{pmatrix},$$
where $\tilde{S}$ is an approximation to the global Schur complement matrix S, formed in (12).
Therefore, a global preconditioning operation induced by a Schur complement solve is
equivalent to solving
$$L U \begin{pmatrix} u \\ y \end{pmatrix} = \begin{pmatrix} f \\ g \end{pmatrix}$$
by a forward solve with L and a backward substitution with U. The computational
procedure would consist of the following three steps (with $\tilde{g}$ being used as an auxiliary vector):
1. Compute the Schur complement right hand side $\tilde{g} = g - E B^{-1} f$.
2. Approximately solve the reduced system $\tilde{S}\, y = \tilde{g}$.
3. Back substitution for the u variables, i.e., solve $B u = f - F y$.
Each of these steps can be computed in parallel in each processor with some communications
and boundary information exchange among the processors.
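Assuming factorizations of the diagonal blocks B_i and a solver for the approximate Schur complement matrix are available, the induced preconditioning step can be sketched as follows in a serial, dense form; the function and variable names are illustrative only.

```python
import numpy as np

def block_diag_solve(B_blocks, v):
    """Solve diag(B_1, ..., B_m) x = v block by block (done in parallel in PBILU2)."""
    out, start = np.empty_like(v), 0
    for B in B_blocks:
        n = B.shape[0]
        out[start:start + n] = np.linalg.solve(B, v[start:start + n])
        start += n
    return out

def apply_induced_preconditioner(B_blocks, F, E, schur_solve, r_f, r_g):
    """Apply the induced global preconditioner to a residual (r_f, r_g).

    B_blocks: list of the diagonal blocks B_i (dense here; ILU-factored in practice).
    F, E: coupling blocks of the reordered system (2).
    schur_solve: callable returning an (approximate) solution of S~ y = rhs.
    Returns the preconditioned correction (u, y).
    """
    # step 1: Schur complement right hand side  g~ = r_g - E B^{-1} r_f
    g_tilde = r_g - E @ block_diag_solve(B_blocks, r_f)
    # step 2: approximately solve the reduced system  S~ y = g~
    y = schur_solve(g_tilde)
    # step 3: back substitution for the u variables:  B u = r_f - F y
    u = block_diag_solve(B_blocks, r_f - F @ y)
    return u, y
```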
As our matrix partitioning approach is different from the one used in [27], some communication
among the processors is needed while computing the global Schur complement
right hand side $\tilde{g}$ in each processor. It is easy to see that
$$\tilde{g} = \begin{pmatrix} \tilde{g}_1 \\ \vdots \\ \tilde{g}_m \end{pmatrix}
= \begin{pmatrix} g_1 \\ \vdots \\ g_m \end{pmatrix} - \sum_{i=1}^{m} M_i B_i^{-1} f_i.$$
So each of the local Schur complement right hand sides can be computed in this way:
$$\tilde{g}_j = g_j - \sum_{i=1}^{m} \big( M_i B_i^{-1} f_i \big)_j, \qquad j = 1, \ldots, m,$$
where $(\,\cdot\,)_j$ denotes the rows conformal to $C_j$.
We rewrite the approximate reduced (Schur complement) system $\tilde{S} y = \tilde{g}$ in the distributed form
$$\tilde{S}_i y_i + \sum_{j \in N_i} X_{ij} y_j = \tilde{g}_i, \qquad i = 1, \ldots, m, \qquad (13)$$
where the submatrix $X_{ij}$ is a boundary matrix which acts on the external variables $y_j$.
There are numerous ways to solve this reduced system. One option considered in [27]
starts by replacing (13) by an approximate system of the form
$$\bar{S}_i y_i + \sum_{j \in N_i} X_{ij} y_j = \tilde{g}_i,$$
in which $\bar{S}_i$ is the local approximation to the local Schur complement matrix $\tilde{S}_i$.
This
formulation can be viewed as a block Jacobi preconditioned version of the Schur complement
system (13). The above system is then solved by an iterative accelerator such as
GMRES which requires a solve with $\bar{S}_i$ at each step. In our current implementation, an
ILUT factorization of $\tilde{S}_i$ is performed for the purpose of the block Jacobi preconditioning.
The third step in the Schur complement preconditioning can be performed without
any problem. Since B is block diagonal, the solution of $B u = f - F y$ can be computed
in parallel at each iteration step. In each processor i, we have $B_i u_i = f_i - F_i y$, and we
actually solve $L_{B_i} U_{B_i} u_i = f_i - F_i y$, as the factors $L_{B_i}$ and $U_{B_i}$ are available. Here we need
to exchange boundary information among the processors, since not all components of y
required by $F_i$ are in processor i.
5 Numerical Experiments
In numerical experiments, we compared the performance of the previously described
PBILU2 preconditioner and the distributed Schur complement LU (SLU) preconditioner of
[27] for solving a few sparse matrices from discretized two dimensional convection diffusion
problems, and from application problems in computational fluid dynamics.
The computations were carried out on a 32 processor (200 MHz) subcomplex of a (64
processor) HP Exemplar 2200 (X-Class) supercomputer at the University of Kentucky. It
has 8 super nodes interconnected by a high speed and low latency network. Each super
node has 8 processors attached to it. This supercomputer has a total of 16 GB shared
memory and a theoretical operation speed at 51 GFlops. We used the MPI library for
interprocessor communications. The other (major) parts of the code are mainly written in
Fortran 77 programming language, with a few C routines for handling dynamic allocations
of memory. Many of the communication subroutines and the SLU preconditioner code were
taken from the PSPARSLIB library [25].
In all tables containing numerical results, "n" denotes the dimension of the matrix;
"nnz" represents the number of nonzeros in the sparse matrix; "np" is the number of
processors used; "iter" is the number of preconditioned FGMRES iterations (outer it-
erations); "F-time" is the CPU time in seconds for the preconditioned solution process
with FGMRES; "P-time" is the total CPU time in seconds for solving the given sparse
matrix, starting from the initial distribution of matrix data to each processor from the
master processor (processor 0). P-time does not include the graph partitioning time and
initial permutation time associated with the partitioning, which were done sequentially in
processor 0. Thus, P-time includes matrix distribution, local reordering, preconditioner
construction, and iteration process time (F-time). "S-ratio" stands for the sparsity ratio,
which is the ratio between the number of nonzeros in the preconditioner and the number
of nonzeros in the original matrix A. k is the block size used in PBILU2, "p" is the number
of nonzeros allowed in each of the L and U factors of the ILU factorizations, and τ is the drop
tolerance. p and τ have the same meaning as those used in Saad's ILUT [24].
Both preconditioners use a flexible variant of restarted GMRES (FGMRES) [23] to
solve the original linear system since this accelerator permits a change in the preconditioning
operation at each step, which is our current case, since we used an iterative process
for approximately solving the Schur complement matrix in each outer FGMRES iteration.
The size of the Krylov subspace was set to 50. The linear systems were formed by assuming
that the exact solution is a vector of all ones. The initial guess was a random vector
with components in (0, 1). Convergence was declared when the 2-norm residual of
the approximate solution was reduced by 6 orders of magnitude. We used an inner-outer
iteration process. The maximum number of outer preconditioned FGMRES iterations was
500. The inner iteration to solve the Schur complement system used GMRES(5) (without
restart) with a block Jacobi type preconditioner. The inner iteration was stopped when
the 2-norm residual of the inner iteration was reduced by a factor of 100, or when the number
of inner iterations exceeded 5.
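The following sketch reproduces this inner-outer setup with SciPy building blocks. Note that SciPy provides restarted GMRES rather than the flexible variant (FGMRES) used here, so this is only an approximation of the actual solver configuration, and the parameter values simply mirror those quoted above.

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

def solve_inner_outer(A, b, precond_apply, n):
    """Outer restarted GMRES(50) iteration with a preconditioner-embedded inner solve.

    precond_apply(r) is expected to implement the three-step Schur complement
    preconditioning, itself using GMRES(5) on the reduced system with a block
    Jacobi preconditioner and a loose stopping test (residual reduction by a
    factor of 100 or at most 5 inner steps).
    """
    M = LinearOperator((n, n), matvec=precond_apply)
    x0 = np.random.rand(n)            # random initial guess with components in (0, 1)
    # convergence target: 6 orders of magnitude residual reduction
    # (passed via rtol/tol depending on the SciPy version)
    x, info = gmres(A, b, x0=x0, M=M, restart=50, maxiter=500)
    return x, info
```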
5.1 5-POINT and 9-POINT matrices
We first compared the parallel performance of different preconditioners for solving some
5-POINT and 9-POINT matrices. The 5-POINT and 9-POINT matrices were generated
by discretizing a convection diffusion equation of the form
$$u_{xx} + u_{yy} + \mathrm{Re}\,\big( p(x, y)\, u_x + q(x, y)\, u_y \big) = f(x, y)$$
on a two dimensional unit square. Here Re is the so-called Reynolds number. The convection
coefficients p(x, y) and q(x, y) were chosen to involve the factor exp(-xy). The right
hand side function was not used, since we generated artificial right hand sides for the sparse
linear systems as stated above.
central difference discretization scheme. The 9-POINT matrices were generated using a
fourth order compact difference scheme [15]. These two types of matrices have been used
to test BILUM and other ILU type preconditioners in [30, 31, 41].
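For reference, a 5-point central-difference matrix of this type can be generated as in the sketch below; the particular convection coefficients are placeholders, since the exact p and q used in the experiments are not reproduced here.

```python
import numpy as np
import scipy.sparse as sp

def five_point_matrix(nx, Re, p=lambda x, y: np.exp(-x * y),
                      q=lambda x, y: np.exp(-x * y)):
    """Standard central-difference discretization of
    u_xx + u_yy + Re*(p*u_x + q*u_y) on the unit square with an nx x nx grid."""
    h = 1.0 / (nx + 1)
    n = nx * nx
    A = sp.lil_matrix((n, n))
    for j in range(nx):
        for i in range(nx):
            x, y = (i + 1) * h, (j + 1) * h
            row = j * nx + i
            cp, cq = Re * p(x, y) * h / 2.0, Re * q(x, y) * h / 2.0
            A[row, row] = -4.0
            if i > 0:      A[row, row - 1]  = 1.0 - cp   # west neighbor
            if i < nx - 1: A[row, row + 1]  = 1.0 + cp   # east neighbor
            if j > 0:      A[row, row - nx] = 1.0 - cq   # south neighbor
            if j < nx - 1: A[row, row + nx] = 1.0 + cq   # north neighbor
    return A.tocsr() / h**2
```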
Most comparison results for parallel iterative solvers report CPU timing results and
iteration numbers. However, in general, it is difficult to make a fair comparison for two
different preconditioning algorithms without listing the resource costs to achieve the given
results. Since the accuracy of the preconditioners is usually influenced by the fill-in entries
kept, the memory (storage) cost of a preconditioner is an important indicator of the
efficiency of a preconditioner. Preconditioners that use more memory space are, in general,
faster than those that use less memory space. A good preconditioner should not use too
much memory space and still achieve fast convergence. To this end, we report in this paper
the number of preconditioned iterations, the parallel CPU time for the preconditioned
solution process, the parallel CPU time for the entire computational process, and the
sparsity ratio.
We first chose Re = 0 and for the 5-POINT matrix. The block size was
chosen as 200, and the dropping parameters were chosen as . For
SLU, we used one level overlapping among the subdomains, as suggested in [27]. The test
results are listed in Table 1. We found that our PBILU2 preconditioner is faster than the
SLU preconditioner of [27] for solving this problem. PBILU2 takes a smaller number of
iterations to converge than SLU did. The convergence rates of both PBILU2 and SLU
are not strongly affected by the number of processors employed, which indicates a good
scalability with respect to the parallel system for these two preconditioners. Moreover,
PBILU2 took much less parallel CPU time than SLU and needed only about a half of the
memory space consumed by SLU to solve this matrix. (See Remark 4.2 for an explanation
on the difference in storage space for PBILU2 and SLU. 3 )
We also tested the same matrix with a smaller value of In this case, we
report two test cases with SLU: one level overlapping of subdomains and nonoverlapping
of subdomains. The test results are listed in Table 2. In our experiments, we found
3 The sparsity ratios for PBILU2 and SLU were measured for all storage spaces used for
storing the preconditioner. It may be the case that some of these storage spaces for SLU
could be released. However, the sparsity ratios for SLU reported in this article were based
on the SLU code distributed in the PSPARSLIB library version 3.0 and was downloaded from
http://www.cs.umn.edu/Research/arpa/p sparslib/psp-abs.html in November 1999.
Table 1: 5-POINT matrix. One level overlapping for SLU.
Preconditioner np iter F-time P-time S-ratio
PBILU2
SLU 34 34.22 54.37 12.02
PBILU2
Table 2: 5-POINT matrix. One level overlapping (nonoverlapping results in brackets) for SLU.
Preconditioner np iter F-time P-time S-ratio
SLU 44 (50) 31.47 51.59 9.15 (9.25)
that overlapping or nonoverlapping of the subdomains does not make much difference in terms
of parallel run time. (Only parallel CPU timings for the overlapping cases are reported
in Table 2.) This observation is in agreement with that made in [27]. However, the
overlapping version of SLU converged faster than the nonoverlapping version. Ironically,
the nonoverlapping version has a slightly larger sparsity ratio. This is because the storage
space for the preconditioner is primarily determined by the dual dropping parameters p
and τ. The overlapping makes the local submatrix larger, and thus reduces the sparsity
ratio, which is measured relative to the number of nonzeros of the coefficient matrix. We remark
that PBILU2 is again seen to converge faster and to take less parallel run time than SLU,
overlapped or nonoverlapped, to solve this 5-POINT matrix using the given parameters.
Since the costs and performance of overlapping and nonoverlapping SLU are very
close, we only report results with overlapping version of SLU in the remaining numerical
tests.
Comparing the results of Tables 1 and 2, we see that a higher accuracy PBILU2
preconditioner (using larger p) performed better than a lower accuracy PBILU2 in terms
of iteration counts and parallel run time. The higher accuracy one, of course, takes more
memory space to store.
The SLU preconditioner with overlapping has been tested in [27]. It was compared
with some other preconditioners such as BJ (block Jacobi), SI ("pure" Schur complement
iteration) and SAPINV (distributed approximate block LU factorization with sparse approximate
etc. The numerical experiments in [27] showed that SLU retains its
superior performance over BJ and SI preconditioners and can be comparable with Schur
complement preconditioning (with local Schur complements inverted by SAPINV). How-
ever, our parallel PBILU2 preconditioner is shown to be more efficient than SLU.
Here we explain the communication cost of PBILU2 with some experimental data
corresponding to the numerical results in Table 2. For example, in one of these runs the total
parallel computation time (P-time) is 18.03 seconds for PBILU2, while the communication time
for constructing the Schur complement matrix is only 1.45 seconds. So the communication
for constructing PBILU2 in this case only costs about 8.04% of the total parallel computation
time. For np = 4, the total parallel computation time (P-time) is 114.25 seconds and the
communication time is 0.72 second. The communication time for constructing PBILU2 is then
only about 0.63% of the total parallel computation time. So the cost of communication
in constructing the PBILU2 preconditioner is not high.
We also used larger n and varied Re to generate some larger 5-POINT and 9-POINT
matrices. The comparison results are given in Tables 3 and 4. These results are comparable
with the results listed in Tables 1 and 2. However, the parallel run time (P-time) for SLU in
Tables
3 and 4 increased dramatically (more than tripled) when the number of processors
increased from 24 to 32.
The results for another 5-POINT matrix are given in Table 5. Once again, we
see that PBILU2 performed much better than SLU. Furthermore, the scalability of SLU
degrades for this test problem. The number of SLU iterations is 13 when 4 processors were
used, and it increased to 22 when more processors were used. For our PBILU2 preconditioner,
the number of iterations is almost constant at 12 when the number of processors is increased
from 4 to 32. The very large P-time results for SLU, especially for the larger numbers of
processors, suggest that the distribution of a large amount of data on this parallel computer
using the partitioning strategy in SLU may present some problems.
Another set of tests was run for solving a 9-POINT matrix.
The parallel iteration times (F-time) with respect to different numbers of
processors for both PBILU2 and SLU are plotted in Figure 2. Once again, PBILU2 solved
this 9-POINT matrix faster than SLU did. In Figure 3, the numbers of preconditioned
FGMRES iterations of PBILU2 and SLU are compared with respect to the number of
Table 3: 9-POINT matrix, τ = 10^{-4}. One level overlapping for SLU.
Preconditioner np iter F-time P-time S-ratio
SLU 19 16.76 46.66 4.50
PBILU2
SLU 19 24.31 151.24 4.51
Table 4: 5-POINT matrix, τ = 10^{-4}. One level overlapping for SLU.
Preconditioner np iter F-time P-time S-ratio
Table 5: 5-POINT matrix. One level overlapping for SLU.
Preconditioner np iter F-time P-time S-ratio k
SLU 19 41.80 101.68 7.11
SLU 22 35.22 1630.28 7.10
Figure 2: Comparison of parallel iteration time (F-time) versus the number of processors for the
PBILU2 and SLU preconditioners for solving a 9-POINT matrix (dashed line: SLU; solid line:
PBILU2).
processors employed, to solve the same 9-POINT matrix. Figure 3 indicates that the
convergence rate of PBILU2 improved as the number of processor increased, but the
convergence rate of SLU deteriorated as the number of processors increased.
We summarize the comparison results in this subsection with the 5-POINT and
9-POINT matrices from the finite difference discretized convection diffusion problems.
From the above tests, it can be seen that PBILU2 needs less than half of the storage space
required for SLU, when the parameters are chosen comparably. With more storage space
consumed by SLU, PBILU2 still outperformed SLU with a faster convergence rate and
less parallel run time. Meanwhile, we can see that as the number of processors increases,
the parallel CPU time decreases while the number of iterations is not significantly affected for
PBILU2.
5.2 FIDAP matrices
This set of test matrices were extracted from the test problems provided in the FIDAP
package [12]. 4 As many of these matrices have small or zero diagonals, they are difficult
to solve with standard ILU preconditioners [42]. We tested more than 31 FIDAP matrices
for both preconditioners. We found that PBILU2 can solve more than twice as many
FIDAP matrices as SLU does. In our tests, PBILU2 solved 20 FIDAP matrices and SLU
solved 9. These tests show that our parallel two level block ILU preconditioner is more
4 These matrices are available online from the MatrixMarket of the National Institute of Standards and
Technology at http://math.nist.gov/MatrixMarket.
Figure 3: Comparison of the numbers of preconditioned FGMRES iterations versus the number
of processors for the PBILU2 and SLU preconditioners for solving a 9-POINT matrix (dashed
line: SLU; solid line: PBILU2).
robust than the SLU preconditioner. Our approach has also shown its merits in terms of
smaller construction and parallel solution costs, smaller memory cost, and a smaller number
of iterations, compared with the SLU preconditioner. For the sake of brevity, we only list
results for three representative large test matrices in Tables 6 and 7, and in Figure 4.
Note that "-" in Table 6 means that the preconditioned iterative method did not
converge or that the number of iterations exceeded 500. We varied the fill-in (p) and
drop tolerance (τ) parameters in Table 6 for both preconditioners and adjusted the size
of the block independent set for the PBILU2 approach. PBILU2 is clearly shown to be more
robust than SLU for solving this FIDAP matrix.
FIDAP035 is a larger matrix than FIDAPM29. In this test, we have also adjusted
the fill-in and drop tolerance parameters (p, τ) from (50, ...) for both
SLU and PBILU2. The test results for PBILU2 with convergence are reported in Table 7.
Note that very small τ values are required for the ILU factorizations. It seems difficult for
SLU to converge for this test problem, for the parameter pairs listed in Table 7 as well as for the
other parameter pairs tested. So no SLU results are listed in Table 7.
Figure
4 shows the parallel iteration time (F-time) with respect to the number of
processors for PBILU2 to solve the FIDAP019 matrix. We see that the parallel iteration
time decreased as the number of processors increased, which demonstrates a good
speedup for solving an unstructured general sparse matrix.
Even for the FIDAP matrices for which both PBILU2 and SLU converge, PBILU2
usually shows superior performance over SLU in terms of the number of iterations and the
Table 6: FIDAPM29 matrix.
Preconditioner np p τ iter F-time P-time S-ratio
Table 7: FIDAP035 matrix.
Preconditioner np p τ k iter F-time P-time S-ratio
28 1.69 4.13 3.96
Figure 4: Parallel iteration time (F-time) of PBILU2 versus the number of processors for the
FIDAP019 matrix (solid line: PBILU2). Parameters used for PBILU2 were ... 900; the sparsity
ratio was approximately 3.45.
Table 8: Flat10a matrix.
Preconditioner np k iter F-time P-time S-ratio
Table 9: Flat30a matrix.
Preconditioner np k iter S-ratio
sparsity ratio.
5.3 Flat matrices
The Flat matrices are from fully coupled mixed finite element discretization of three dimensional
Navier-Stokes equations [4, 44] 5 . Flat10a means that the matrix is from the
first Newton step of the nonlinear iterations, with 10 elements in each of the x and y
coordinate directions, and 1 element in the z coordinate direction. There is only one element
in the z coordinate direction because of the limitation on the computer memory
used to generate these matrices. The same explanation holds for the Flat30a matrix,
which uses 30 elements in each of the x and y coordinate directions. These matrices were
generated to keep the variable structural couplings in the Navier-Stokes equations, so they
may have "nonzero" entries that actually have a "numerical" zero value. Note that these
two matrices are actually symmetric, since they are from the first Newton step where the
velocity vector is set to be zero. However, this symmetry information is not utilized in
our computation.
We see from Tables 8 and 9 that PBILU2 was able to solve these two CFD matrices
with small τ values. SLU had difficulty converging for these two matrices. The small
sparsity ratios reflect our previous remark that the two Flat matrices have many numerical
zero entries, which are ignored in the thresholding based ILU factorization, but are counted
towards the sparsity ratio calculations.
Matrices from fully coupled mixed finite element discretizations of Navier-Stokes
equations are notoriously difficult to solve with preconditioned iterative methods [6, 44].
Standard ILU type preconditioners tend to fail or produce unstable factorizations, unless
the variables are ordered properly [44]. The suitable orderings are not difficult to implement
in sequential environments [6, 44]. It seems, however, a nontrivial task to perform
analogous orderings in a parallel environment. (5: The Flat matrices are available from the
second author.)
6 Concluding Remarks and Future Work
We have implemented a parallel two level block ILU preconditioner based on Schur
complement preconditioning. We discussed the details of the distribution of "small" independent
blocks to form a subdomain in each processor. We gave a computational procedure
for constructing a distributed Schur complement matrix in parallel. We compared our parallel
preconditioner, PBILU2, with a scalable parallel two level Schur LU preconditioner
published recently. Numerical experiments show that PBILU2 demonstrates good
scalability in solving large sparse linear systems on parallel computers. We also found that
PBILU2 is faster and computationally more efficient than SLU in most of our test cases.
PBILU2 is also efficient in terms of memory consumption, since it uses less memory space
than SLU while achieving a better convergence rate.
The FIDAP and Flat matrices tested in Sections 5.2 and 5.3 have small or zero
main diagonal entries. The poor convergence performance of both PBILU2 and SLU is
mainly due to the instability associated with ILU factorizations of these matrices. Diagonal
thresholding strategies [32, 38] can be employed in PBILU2 to exclude the rows with small
diagonals from the submatrix B, so that its ILU factorization will be stable. The parallel
implementation of diagonally thresholded PBILU2 will be investigated in our future study.
We plan to extend our parallel two level block ILU preconditioner to truly parallel
multilevel block ILU preconditioners in our future research. We also plan to test our
parallel preconditioners on other emerging high performance computing platforms, such
as PC clusters.
--R
An MPI implementation of the SPAI preconditioner on the T3E.
A parallel non-overlapping domain- decomposition algorithm for compressible fluid flow problems on triangulated do- mains
A comparison of some domain decomposition and ILU preconditioned iterative methods for nonsymmetric elliptic problems.
Parallel finite element solution of three-dimensional Rayleigh- ' Benard-Marangoni flows
ParPre: a parallel preconditioners package reference manual for version 2.0.
Preconditioned conjugate gradient methods for the incompressible Navier-Stokes equations
A priori sparsity patterns for parallel sparse approximate inverse precondi- tioners
Towards a cost effective ILU preconditioner with high level fill.
Numerical Linear Algebra for High-Performance Computers
Developments and trends in the parallel solution of linear systems.
Parallelization of the ILU(0) preconditioner for CFD problems on shared-memory computers
FIDAP: Examples Manual
Computer Solution of Large Sparse Positive Definite Systems.
Parallel preconditioning and approximate inverse on the Connection machines.
A single cell high order scheme for the convection-diffusion equation with variable coefficients
The Chaco User's Guide
Scalable Parallel Computing.
Parallel multilevel k-way partitioning scheme for irregular graphs
A comparison of domain decomposition techniques for elliptic partial differential equations and their parallel implementation.
Introduction to Parallel Computing.
Direct Methods for Sparse Matrices.
Partitioning sparse matrices with eigenvectors of graphs.
A flexible inner-outer preconditioned GMRES algorithm
ILUT: a dual threshold incomplete LU preconditioner.
Parallel sparse matrix library (P SPARSLIB): The iterative solvers module.
Iterative Methods for Sparse Linear Systems.
Distributed Schur complement techniques for general sparse linear systems.
Domain decomposition and multi-level type techniques for general sparse linear systems
Design of an iterative solution module for a parallel sparse matrix library (P SPARSLIB).
BILUM: block versions of multielimination and multilevel ILU preconditioner for general sparse linear systems.
BILUTM: a domain-based multilevel block ILUT preconditioner for general sparse matrices
Diagonal threshold techniques in robust multi-level ILU preconditioners for general sparse linear systems
Enhanced multilevel block ILU preconditioning strategies for general sparse linear systems.
Domain Decomposition: Parallel Multilevel Methods for Elliptic Partial Differential Equations.
High performance preconditioning.
Parallel computation of incompressible flows in materials processing: numerical experiments in diagonal preconditioning.
Application of sparse matrix solvers as effective preconditioners.
A multilevel dual reordering strategy for robust incomplete LU factorization of indefinite matrices.
A parallelizable preconditioner based on a factored sparse approximate inverse technique.
A sparse approximate inverse for parallel preconditioning of sparse matrices.
Preconditioned iterative methods and finite difference schemes for convection-diffusion
Preconditioned Krylov subspace methods for solving nonsymmetric matrices from CFD applications.
Sparse approximate inverse and multilevel block ILU preconditioning techniques for general sparse matrices.
Performance study on incomplete LU preconditioners for solving linear systems from fully coupled mixed finite element discretization of 3D Navier-Stokes equations
Use of iterative refinement in the solution of sparse linear systems.
--TR
A comparison of domain decomposition techniques for elliptic partial differential equations and their parallel implementation
High performance preconditioning
Application of sparse matrix solvers as effective preconditioners
Partitioning sparse matrices with eigenvectors of graphs
Introduction to parallel computing
A flexible inner-outer preconditioned GMRES algorithm
Towards a cost-effective ILU preconditioner with high-level fill
Domain decomposition
Parallel computation of incompressible flows in materials processing
Parallel Multilevel series <i>k</i>-Way Partitioning Scheme for Irregular Graphs
Developments and trends in the parallel solution of linear systems
Preconditioned iterative methods and finite difference schemes for convection-diffusion
Distributed Schur Complement Techniques for General Sparse Linear Systems
A Priori Sparsity Patterns for Parallel Sparse Approximate Inverse Preconditioners
Sparse approximate inverse and multilevel block ILU preconditioning techniques for general sparse matrices
Enhanced multi-level block ILU preconditioning strategies for general sparse linear systems
Scalable Parallel Computing
Numerical Linear Algebra for High Performance Computers
Computer Solution of Large Sparse Positive Definite
A Multilevel Dual Reordering Strategy for Robust Incomplete LU Factorization of Indefinite Matrices
Iterative Methods for Sparse Linear Systems
--CTR
Chi Shen , Jun Zhang, A fully parallel block independent set algorithm for distributed sparse matrices, Parallel Computing, v.29 n.11-12, p.1685-1699, November/December
Jun Zhang , Tong Xiao, A multilevel block incomplete Cholesky preconditioner for solving normal equations in linear least squares problems, The Korean Journal of Computational & Applied Mathematics, v.11 n.1-2, p.59-80, January
Chi Shen , Jun Zhang , Kai Wang, Distributed block independent set algorithms and parallel multilevel ILU preconditioners, Journal of Parallel and Distributed Computing, v.65 n.3, p.331-346, March 2005 | schur complement techniques;parallel preconditioning;BILUTM;sparse matrices;domain decomposition |
606070 | Global optimization approach to unequal sphere packing problems in 3D. | The problem of the unequal sphere packing in a 3-dimensional polytope is analyzed. Given a set of unequal spheres and a polytope, the double goal is to assemble the spheres in such a way that (i) they do not overlap with each other and (ii) the sum of the volumes of the spheres packed in the polytope is maximized. This optimization has an application in automated radiosurgical treatment planning and can be formulated as a nonconvex optimization problem with quadratic constraints and a linear objective function. On the basis of the special structures associated with this problem, we propose a variety of algorithms which improve markedly the existing simplicial branch-and-bound algorithm for the general nonconvex quadratic program. Further, heuristic algorithms are incorporated to strengthen the efficiency of the algorithm. The computational study demonstrates that the proposed algorithm can obtain successfully the optimization up to a limiting size. | Introduction
The optimization of the packing of unequal spheres in a 3-dimensional polytope is ana-
lyzed. Given a set of unequal spheres and a polytope, the objective is to assemble them
in such a way that (1) the spheres do not overlap with each other and (2) the sum of the
volumes of the packed spheres is maximized. We note that the conventional 2-dimensional
and 3-dimensional packing problems (also called the bin-packing problem), which have
been extensively studied [2, 7, 11], are fundamentally di#erent from the problem considered
in the present work.
The unequal sphere packing problem has important applications in automated radio-
surgical treatment planning [12, 14]. Stereotactic radiosurgery is an advanced medical
technology for treating brain and sinus tumors. It uses the Gamma knife to deliver a set
of extremely high dose ionizing radiations, called "shots", to the target tumor area [13]. In
good approximation, these shots can be considered as solid spheres. For large or irregular
target regions, multiple shots are used to cover di#erent parts of the tumor. However,
this procedure usually results in (1) large dose inhomogeneities, due to the overlap of the
di#erent shots, and (2) the delivery of large amount of dose to normal tissue arising from
enlargement of the treated region when two or more shots overlap.
Optimizing the number, the position, and the individual sizes of the shots can significantly
reduce both the inhomogeneities and the dose to normal tissue while simultaneously
achieving the required coverage. Unfortunately, since the treatment planning process is
tedious, the quality of the protocol depends heavily on the experience of the users. There-
fore, an automated planning process is desired. To achieve this goal, Wang and Wu et
al. [12, 14] mathematically formulated this planning problem as the packing of spheres
into a 3D region with a packing density greater than a certain given level. This packing
problem was proved to be NP-complete and an approximate algorithm was proposed [12].
In this work we formulate the question as a nonconvex quadratic optimization and present
solution methods based on the branch-and-bound technique.
Let K be the number of different radii of the spheres in the given set and $r_k$
$(k = 1, \ldots, K)$ the corresponding radii. There are L available spheres for each radius. Therefore,
the total number of the spheres in the set is KL. Here, we use a single value of L for
simplicity of the presentation. However, this model can be easily modified for the case
where different numbers $L_k$ of spheres are available for different radii $r_k$ and the total
number of spheres in the given set is $\sum_{k=1}^{K} L_k$.
Let the polytope be given by $\{ (x, y, z) \in R^3 : a_m^T (x, y, z)^T \le b_m, \ m = 1, \ldots, M \}$ and let
L be the maximum number of the spheres to be packed.
We designate variables $(x_i, y_i, z_i)$ $(i = 1, \ldots, L)$ as the location of sphere i in a packing. For
each sphere in the packing, a radius has to be assigned. The variables $t_{ik}$ $(i = 1, \ldots, L,\ k =
1, \ldots, K)$ are used to handle this task:
$$t_{ik} = \begin{cases} 1, & \text{if sphere } i \text{ has radius } r_k, \\ 0, & \text{otherwise.} \end{cases}$$
With these preliminaries, the optimization can be formulated as follows.

(P1) $\max \ \dfrac{4\pi}{3} \sum_{i=1}^{L} \sum_{k=1}^{K} r_k^3 \, t_{ik}$
s.t.
$(x_i - x_l)^2 + (y_i - y_l)^2 + (z_i - z_l)^2 \ \ge \ \big( \sum_{k=1}^{K} r_k t_{ik} + \sum_{k=1}^{K} r_k t_{lk} \big)^2$, for all $i \ne l$, (1)
$\dfrac{|b_m - a_m^T (x_i, y_i, z_i)^T|}{\|a_m\|_2} \ \ge \ \sum_{k=1}^{K} r_k t_{ik}$, for each i, m, (2)
$a_m^T (x_i, y_i, z_i)^T \ \le \ b_m$, for each i, m, (3)
$\sum_{k=1}^{K} t_{ik} \ \le \ 1$, for each i, (4)
$t_{ik} \in \{0, 1\}$, for each i, k. (5)

Constraints (1) and (3) respectively ensure that no two spheres overlap with each other
and that each sphere is centered within the polytope. Constraints (4) and (5) guarantee
that at most one radius is chosen for each sphere, i.e., if $t_{ik} = 1$ then the sphere i is
packed with radius $r_k$, and if $t_{ik} = 0$ then the sphere i with radius $r_k$ is not packed.
Together with (3), (4) and (5), constraints (2) state that the distance between the center
of a sphere and the boundary of the polytope is at least as large as the radius of that
sphere. Hence, (2) and (3) force all the spheres to be packed inside the polytope.
By (3), constraint (2) can be rewritten as
$$b_m - a_m^T (x_i, y_i, z_i)^T \ \ge \ \|a_m\|_2 \sum_{k=1}^{K} r_k t_{ik}, \quad \text{for each } i, m. \qquad (6)$$
Since the right-hand side of (6) is nonnegative, (6) implies (3). Moreover, the binary 0-1
variables $t_{ik}$ in (5) can be replaced by the inequalities $t_{ik}(t_{ik} - 1) \ge 0$ and $t_{ik} \ge 0$.
Note that $t_{ik} \le 1$ is implied by the constraint (4). These steps allow restatement of
Problem (P1) as follows.

(P2) $\max \ \sum_{i=1}^{L} \sum_{k=1}^{K} r_k^3 \, t_{ik}$
s.t.
$\big( \sum_{k=1}^{K} r_k t_{ik} + \sum_{k=1}^{K} r_k t_{lk} \big)^2 - (x_i - x_l)^2 - (y_i - y_l)^2 - (z_i - z_l)^2 \ \le \ 0$, for all $i \ne l$, (7)
$t_{ik} - t_{ik}^2 \ \le \ 0$, for each i, k, (8)
$a_m^T (x_i, y_i, z_i)^T + \|a_m\|_2 \sum_{k=1}^{K} r_k t_{ik} \ \le \ b_m$, for each i, m, (9)
$\sum_{k=1}^{K} t_{ik} \ \le \ 1$, for each i, (10)
$t_{ik} \ \ge \ 0$, for each i, k. (11)

Note that the constant $4\pi/3$ in the original objective function is omitted here. The numbers
of the variables and the quadratic constraints are (3+K)L and L(L-1)+LK, respectively.
The quadratic function in each constraint (7) is neither convex nor concave.
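To make the constraint system concrete, the following sketch checks a candidate packing against the constraints of (P2); the data layout (arrays of centers, a 0/1 assignment matrix t, and polytope rows A, b) is chosen purely for illustration.

```python
import numpy as np

def is_feasible_packing(centers, t, radii, A, b, tol=1e-9):
    """Check the constraints of (P2) for a candidate packing.

    centers: (L, 3) array of sphere centers (x_i, y_i, z_i).
    t:       (L, K) 0/1 assignment matrix (row sums at most 1).
    radii:   (K,) array of available radii r_k.
    A, b:    polytope data, one row a_m^T and bound b_m per facet.
    """
    L = centers.shape[0]
    r = t @ radii                                   # radius assigned to each sphere
    # (8), (10), (11): 0/1 values, at most one radius per sphere
    if np.any(t.sum(axis=1) > 1 + tol) or not np.all((t == 0) | (t == 1)):
        return False
    # (9): a_m^T c_i + ||a_m||_2 * r_i <= b_m  (sphere i inside facet m)
    norms = np.linalg.norm(A, axis=1)
    if np.any(centers @ A.T + np.outer(r, norms) > b + tol):
        return False
    # (7): pairwise non-overlap
    for i in range(L):
        for l in range(i + 1, L):
            dist2 = np.sum((centers[i] - centers[l]) ** 2)
            if dist2 + tol < (r[i] + r[l]) ** 2:
                return False
    return True
```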
Several algorithms [1, 9] have been developed for solving the general
nonconvex quadratic program (NQP for short). An NQP can be transformed into a
semidefinite programming problem (SDP) with an additional rank-one constraint. Dropping
the rank-one constraint, one obtains an SDP relaxation problem, which is the tightest
relaxation among all others [3]. Hence, (P2) can be solved by using a branch-and-bound
type method based on the SDP relaxation. However, due to the relative slowness and the
instability of SDP software, this conceptual algorithm is impractical even for problems
of small size, since it requires many iterations of the SDP relaxation.
Commonly, methods of solution for the NQP class are designed through linear programming
(LP) relaxation, an approach known as the reformulation-linearization technique
[10]. Under the assumption that additional box constraints for the variables are
new nonlinear constraints are generated from each pair of the box constraints in
order to construct a relaxed convex area of the original nonconvex feasible region. The
nonlinear terms in each nonlinear constraint are accordingly replaced by new variables.
The result of this recasting is a linear program. Based on this technique of linearization,
Al-khayyal et al. [1] proposed a rectangular branch-and-bound algorithm to solve a class
of quadratically constrained programs. Raber [9] proposed another branch-and-bound
algorithm for the same problem based on the use of simplices as the partition elements
and the use of an underestimate a#ne function, whose value at each vertex of the simplex
agrees with that of the corresponding nonconvex quadratic function. This work [9] demonstrated
that the simplicial algorithm often has a better performance over the rectangular
algorithm with respect to the computational time. Interestingly, Raber [9] mentioned
that both the simplicial and the rectangular algorithms exhibit poor performance for a
packing problem :
giving the experimental details. It is obvious that their packing problem is similar to
ours, but has much simpler structures for the constraints.
In this paper, we examine a customization of Raber's simplicial branch-and-bound
algorithm [9] tailored to our problem. It is well known that the underlying simplicial sub-division
is a key factor influencing the quality of the relaxation, therefore, the e#ciency of
the branch-and-bound algorithm. The investigation of the structure of the optimization
suggests (i) an e#cient simplicial subdivision and (ii) di#erent underestimations of the
nonconvex functions. Based on these observations, three variants of the algorithm have
been constructed. The discrete nature of the packing enables a heuristic design for obtaining
good feasible solutions, an outcome which leads to savings on both computational
time and memory size.
The remainder of this paper is organized as follows. In Section 2 we derive the LP
relaxation of the problem with respect to the simplicial subdivision. The simplicial branch-
and-bound algorithm is presented in Section 3. Section 4 gives the heuristic algorithms
and Section 5 presents three variations of the previous algorithm based on the use of
special structures. Section 6 reports the computational results of the proposed branch-
and-bound algorithm. The conclusions of the work are presented in Section 7.
2 Linear Programming Relaxation
The construction of the LP relaxation of Problem (P2) is the same as the one developed
in [9]. For the sake of a complete description, we outline this procedure below.
the transpose of a vector a. First, we write Problem (P2) in the following form.
d
d ik # R n , and c # R n is the coe#cient vector of the objective
d
correspond to the constraints (7) and (8),
respectively. Av # b represents all linear constraints of (9), (10) and (11). Furthermore,
the matrix Q il can be specified as follows.
il O
O
where O is a matrix having zero for all entries with appropriate size,
.
.
. 1 . -1
. 1 . -1 .
corresponds to the coe#cients of in (7) for i and l, and
. r 2
. r 1 r K r 2 r K . r 2
. r 2
. r 1 r K r 2 r K . r 2
K .
corresponds to those of (t 11 , ., t 1K , ., t L1 , ., t LK ) T in (7) for i and k. Similarly,
R 3L-3L and
d ik # R KL-KL can be written as follows:
O
. O. 0
. O
d
m be the number of the linear constraints in (P2). Denote the polytope defined by
these linear constraints as
m-n and b # R -
.
To construct the LP relaxation problem, we need to represent the matrix Q il (resp.
by the sum of a positive semidefinite matrix C il (resp.
negative semidefinite
matrix D il (resp.
# D ik ). Usually, a spectrum decomposition achieves this goal.
However, we do not need to perform such a task, since the matrices Q il and
possess
special structures that give the decomposition immediately. It is readily seen that the
decompositions
O
il O
O O
and
satisfy the desired property.
Now we consider how to construct our linear programming relaxation problem. Let
be an n-simplex (U #S #), where v i are its vertices. Then
Let W S # R n-n be a matrix which consists of columns (v
each point v in S can be represented as
Through substitution of v in (P2') by (14), we obtain the following equivalent problem.
s.t.
d
By replacing Q il and
with (12) and (13), the quadratic term of the left-hand side
of each constraint of (15) and (16) is divided into a convex and a concave functions
by replacing Q il and
with (12) and (13), respectively. The relaxation of the above
problem is constructed by ignoring the convex part and replacing the concave part with a
linear underestimate function. For such an underestimation, we use the convex envelope
of a concave function f with respect to the simplex S, which is an a#ne function whose
value at each vertex of S coincides with that of f . More precisely, for the quadratic
constraints (15) we have
where
(v
is the convex envelope of (W S #) T D il W S # with respect to S. In a similar fashion, for the
quadratic constraints (17), we have
d
d
where
(v
is the convex envelope of (W S #) T
# D ik W S # with respect to S.
Obviously, an upper bound of the objective function of (P3) can be obtained by solving
the following LP relaxation problem.
d
3 The Simplicial Branch-and-Bound Algorithm
The simplicial branch-and-bound algorithm is presented in this section. As mentioned
above, we use the algorithm in [9] as our prototype. However, two heuristics designed for
obtaining feasible solutions are embedded.
The branching operation is carried out by dividing the current simplex S into two
simplices. Let v i # and v j # be two vertices of S satisfying
where #a# 2 denotes the 2-norm of a vector a. Define
The simplex S is split into two simplices
and
The splitting of the simplices has the property that for each nested sequence {S q } of
simplices,
For further details, see Horst [4, 5]. The
resulting algorithm is presented below.
Branch and Bound Algorithm :
Step 1. Start Heuristic-1 to calculate a possible feasible solution v f and the objective
function value f(v f ). If successful, set
Step 1.1. Let Construct a simplex S 0 which contains the polytope U . Set
Solve the problem (LPR) S0
. If the problem is infeasible, then the
original problem has no solution, stop. Otherwise, let the optimal solution be
and the optimal value be -(S 0 ). Set is a feasible solution of
(P2), stop. Otherwise, start Heuristic-2 with v 0 to calculate a feasible solution v # 0
and the value f(v # 0 ). If LB < f(v # 0 ), then set
Step 1.2. If (UB - LB)/UB #, then stop.
k and S 2
k according to (25) and (28). Set
k }. For
Step 2.1. Solve the problem (LPR) S j
. If it is infeasible, set
let the optimal solution be
k and the optimal value be -(S j
Step 2.2. If v j
k is a feasible solution of (P2), set
Step 2.3. If v j
k is not feasible, then run Heuristic-2 with v j
k to calculate a feasible solution
(v j
and the value f((v j
Otherwise, select a simplex -
-(S). Set S
2.
The details of Heuristic-1 and Heuristic-2 will be given in Section 4. The parameter # gives
the tolerance of the solution obtained by the algorithm. We call the solution obtained
from the above algorithm an #-optimal solution. The convergence of the algorithm is
guaranteed as follows.
Theorem 1 ([6, 8, 9]) If the algorithm generates an infinite sequence {v k }, then every
accumulation point v # of this sequence is an #-optimal solution of Problem (1).
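The control flow of the algorithm can be summarized by the skeleton below; solve_lpr, is_feasible, objective, heuristic2 and split stand for the LP relaxation solver, the feasibility test, the objective of (P2), Heuristic-2 and the subdivision (25)-(28), all of which are assumed to be supplied, and Heuristic-1 (the initial feasible packing) is omitted for brevity.

```python
def branch_and_bound(S0, solve_lpr, is_feasible, objective, heuristic2, split, eps=1e-2):
    """Skeleton of the simplicial branch-and-bound loop (Steps 1 and 2 above)."""
    res = solve_lpr(S0)
    if res is None:
        return None                          # LP relaxation infeasible: no solution
    best, best_val = None, float('-inf')
    active = [(res[0], S0, res[1])]          # (upper bound, simplex, relaxation solution)
    while active:
        active.sort(key=lambda item: item[0])
        ub, S, v = active.pop()              # simplex with the largest upper bound
        if best is not None and ub - best_val <= eps * abs(ub):
            break                            # epsilon-optimal solution found
        for Sj in split(S):                  # split S into two simplices
            sub = solve_lpr(Sj)
            if sub is None:
                continue                     # relaxation infeasible: prune this simplex
            val_j, v_j = sub
            cand = v_j if is_feasible(v_j) else heuristic2(v_j)
            if cand is not None and objective(cand) > best_val:
                best, best_val = cand, objective(cand)   # improved lower bound
            if val_j > best_val:             # keep simplices that may still improve
                active.append((val_j, Sj, v_j))
    return best
```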
The algorithms Heuristic-1 and Heuristic-2 are described in this section. Recall that
Heuristic-1 finds a feasible solution at the beginning of the branch-and-bound algorithm
and Heuristic-2 generates a feasible solution from an infeasible solution obtained from a
relaxation subproblem.
First, we give the algorithm of Heuristic-1. The basic idea is to place as many spheres
as possible having relatively large radii in the polytope. Let C be a list of the given
set of KL candidate spheres, which are ordered such that r 1 # r KL . Consider a
3-D triangle defined by four constraints arbitrarily chosen from those of the polytope P .
Designate
triangles. Fixing a P i , the algorithm starts
by picking a sphere from the top of C. Then it checks whether the sphere can be located
at one of the four corners of the triangle P i . This procedure is continued for the rest of
the spheres on the list until four spheres are placed or the search of the spheres in the list
is exhausted. All the spheres packed have to satisfy (1)that they touch exactly three sides
of the triangle, (2) that no mutual intersection occurs between each pair of the spheres
(see
Figure
1), and (3) that they satisfy all other constrains on P . After obtaining the
initial packing, the algorithm attempts to insert more spheres between each pair of the
spheres in P i without violating the packing condition (see Figure 2).
Phase I
Figure
1: Figure 2:
Step 1. Determine the center of the sphere l so that it is tangent to three planes corresponding
to the three constraints of P s . If the sphere l satisfies the constraints of
polytope P and does not overlap with the spheres 1, ., k - 1, then pack the sphere
l and set
Step 2. If (i)k = L or (ii) l = KL and k > 0, go to Step 5.
Step 3. If stop. Otherwise set l=1 and
go to Step 1.
Step 4. Set l to Step 1.
Phase II
Step 5. If Otherwise, for each pair of the packed spheres i, j, set l = 1,
repeat Steps 6 and 7.
Step 6. If l > KL stop. Otherwise set
Step 7. Locate the center of the sphere l at M . If it satisfies the constraints P and does
not overlap with the spheres 1, ., k, then pack the sphere l and set
go to Step 6.
By the termination of the algorithm if k > 0 a feasible packing is obtained.
Next, we describe Heuristic-2. Let be the solution
of a relaxation subproblem. Suppose that it is not feasible. Then either t # ik is not integral
or some spheres overlap with each other if the radius is decided by t # each i,
i.e., use r k as the radius of the sphere i (see Figure 3). In the following, we fix the
centers of these spheres and determine their radii so that they do not mutually overlap
(see
Figure
4). Note that for a triplet
means
that no sphere is placed there. Let r 1 > - > r K .
Figure
3: Figure 4:
Heuristic-2
Step 1. Set
Step 2. If the sphere l centered at satisfies the polytope constraints
and does not overlap with spheres 1, ., l - 1, then set t #
(the sphere l centered at
l , z #
l ) has radius r k ), go to Step 4.
Step 3. If k < K, set 2. Otherwise, set t # lk
(no sphere is centered at
l , z #
l )).
Step 4. If l < L, set l 2. Otherwise, stop.
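Heuristic-2 admits a very compact implementation: with the centers fixed, each sphere is given the largest radius that keeps it inside the polytope and non-overlapping with the spheres already accepted. The sketch below assumes the radii are sorted in decreasing order and reuses the illustrative polytope data A, b introduced earlier.

```python
import numpy as np

def heuristic2(centers, radii, A, b):
    """Assign radii to fixed centers as in Heuristic-2 (radii sorted decreasingly).

    Returns an (L, K) 0/1 assignment matrix t; a zero row means no sphere is
    placed at that center.
    """
    L, K = centers.shape[0], len(radii)
    norms = np.linalg.norm(A, axis=1)
    t = np.zeros((L, K), dtype=int)
    accepted = []                                  # (center index, radius) of packed spheres
    for l in range(L):
        for k in range(K):
            r = radii[k]
            inside = np.all(A @ centers[l] + r * norms <= b)
            no_overlap = all(np.sum((centers[l] - centers[j]) ** 2) >= (r + rj) ** 2
                             for j, rj in accepted)
            if inside and no_overlap:
                t[l, k] = 1
                accepted.append((l, r))
                break                              # largest feasible radius found
    return t
```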
5 The Improvement of the Algorithm
In this section we discuss the special structures of the optimization and present the improvements
on the previous algorithm based on these structures. First, we describe how
to construct the initial simplex.
5.1 Generating the Initial Simplex
To start the branch-and-bound algorithm, we need an initial simplex S 0 which contains
the polytope U . It is generated by the following algorithm.
Generate Initial Simplex:
Step 1. Choose a nondegenerate vertex v_0 of the polytope U.
Step 2. Construct a matrix A in R^{n x n} from the n constraints binding at v_0.
Step 3. Compute the points v_1, ..., v_n described in Remark 2 below.
Step 4. Set S_0 = conv{v_0, v_1, ..., v_n}.
Remark 1: The n constraints which determine the nondegenerate vertex v_0 can be selected
as follows. For each i we choose three constraints from (9) and all constraints of (11). It
is not difficult to see that the n coefficient vectors of these constraints are
linearly independent. Setting these linear inequality constraints to equality yields a system of
linear equations, (30)-(31), which determines a unique solution; this solution is taken as v_0.
Remark 2: The vertex v_0 has KL zero entries, which are determined by (31). Furthermore,
if one equation in (30) is exchanged for another binding constraint, the resulting system
again determines a unique solution in which the t-coordinates are unchanged. Repeating this process for all equations in (30) yields
3L solutions, which we index as v_1, ..., v_3L. All the other solutions, generated by replacing
the equations in (31) in the same way, are denoted by v_3L+1, ..., v_n.
5.2 Splitting a Simplex in a Different Way
It is well known that in the simplicial branch-and-bound paradigm both the computational
time and the usage of the memory grow extremely fast as the dimension of the polytope
U increases.
When a simplex S is split into two simplices according to (25)-(28), the length of the edge
[v_i*, v_j*] chosen for the split is the longest among all pairs of the vertices of the simplex S. If the
vertex v_0 is not replaced by the vertex v_M, then the matrix W_{S_1} is the same as W_S except
for the column v_i* - v_0, which is replaced by the column v_M - v_0. Similarly, all columns
in the matrix W_{S_2} are the same as those of W_S except for the column v_j* - v_0, which is replaced
by v_M - v_0. Recall that the entries corresponding to the t_ik-coordinates are zeros in the matrices of
(12). Hence, the coefficients of λ_j in φ_Sil(λ) of (22) depend only on the first 3L coordinates of the
vertices v_0, v_1, ..., v_n. If the new vertex v_M and the vertex it replaces have the same
values for the first 3L coordinates, then φ_Sil(λ) remains unchanged in the constraints
(22) of the subproblems with respect to the simplices S_j (j = 1, 2), and the quality of the
relaxation would not be significantly improved. Such a splitting leads to the computation
of subproblems which provide neither good lower nor useful upper bounds. To avoid this
situation, we choose v_i*, v_j* such that the distance between the first 3L coordinates of
v_i* and v_j* is the maximum among all pairs of vertices. Consequently, the coordinates corresponding
to t_ik are not considered when selecting the edge to split.
One negative effect of this choice is that the quality of the relaxation of the quadratic constraints
is not necessarily improved. However, since the corresponding linear constraint
is always considered, this defect is not expected to be serious. Accordingly, the number
of subproblems can be decreased over the whole branch-and-bound process.
There is another potential difficulty with the convergence of the algorithm which must
be addressed. Let δ̄(S_q) = max{ distance between the first 3L coordinates of v_i and v_j : v_i, v_j
are vertices of S_q }. Since we divide the simplex so as to reduce the largest such value
over the first 3L coordinates of the vertices, it is
possible that a nested sequence {S_q} of simplices with δ̄(S_q) tending to zero
does not satisfy (29). In this case, we are not guaranteed that the accumulation point of
an infinite sequence obtained by the algorithm will be an optimal solution.
5.3 Another Decomposition of the Matrix Q il
As shown in Section 2, to construct the underestimation of the quadratic function on the
left-hand side of the quadratic constraint (7), we use the decomposition (12). Since t_ik is a
0-1 variable, t_ik^2 = t_ik, and replacing t_ik^2 by t_ik in (7) results in a different matrix
Q'_il = diag(Q'^t_il, O), whose nonzero block Q'^t_il is built from the products r_j r_k of the
candidate radii.
A spectral decomposition of the matrix Q'_il provides positive semidefinite and negative
semidefinite matrices which are different from those given in (12). Following a procedure
similar to that in Section 2, we have Q'_il = C'_il + D'_il,
where C'_il and D'_il are positive semidefinite and negative semidefinite, respectively. For
example, if only two magnitudes of radii r_1 and r_2 are considered, then the blocks C'^t_il and D'^t_il
in R^{KL x KL} can be written out explicitly in terms of r_1 and r_2.
It can be shown that the eigenvalues of the matrix C'^t_il (resp. D'^t_il) are nonnegative (resp.
nonpositive). Hence, the two matrices have the desired properties. Consequently, another
LP relaxation can be constructed based on this new spectral decomposition of Q'_il. The part of the
matrix acting on the (x, y, z)-coordinates
remains the same here, so there is no change in its decomposition.
5.4 Another Form of the Relaxation
In this section we focus on a different form of relaxation of problem (P2). Let us omit
the quadratic constraints (8), which correspond to the 0-1 condition on t_ik.
Furthermore, we relax the condition that two spheres cannot overlap with each other:
here the distance between the centers of two spheres only needs to be greater than a small positive
value ε̄. More precisely, consider the problem (P4) obtained from (P2) by these two modifications.
Remark. One can take the smallest value among all radii given in the set as the magnitude
of ε̄.
Since only the variables (x, y, z) appear quadratically in (P4), the convex
envelope of the concave function -(x_i - x_l)^2 - (y_i - y_l)^2 - (z_i - z_l)^2 can be determined
by the first 3L coordinates of the vertices of the corresponding simplex, as given above.
This implies that it is sufficient to construct simplices in the 3L-dimensional space. Let
S'_0 = [v'_0, v'_1, ..., v'_3L] be the initial simplex in the 3L-dimensional space that contains the
polytope P. The ith entry of v'_j is identical to that of v_j for i = 1, ..., 3L. Let Q'_il and A' be defined
similarly to Q_il and A, respectively, and let A'_x and A'_t be the submatrices of A' which are
corresponding to the (x, y, z)-variables and the t-variables. With this notation
we obtain a quadratic program over the 3L-dimensional simplex. Since the quadratic part is given by a
negative semidefinite matrix, the convex envelope of the quadratic term over the simplex is the affine
function that agrees with it at the vertices v'_0, ..., v'_3L.
Therefore, we obtain the relaxation of (P4) as follows.
Since the constraints (8) are ignored and the constraints (7) are relaxed as (42) in the
above problem, the quality of this relaxation may be inferior to the previous methods.
However, the dimension of the simplices kept in the memory is only 3L. Therefore, the
total memory used may be far smaller than that required in the other methods.
6 The Computational Study
In this section we discuss details of the implementation of the algorithm and report the
experimental results.
Reoptimization: Suppose that S is the simplex at a branching node. The simplex S
is divided into S_1 and S_2 by (25) to (28). If the vertex v_0 is not replaced by v_M, then
only the columns v_i* - v_0 of W_{S_1} and v_j* - v_0 of W_{S_2} are replaced by v_M - v_0,
respectively. All the other columns remain unchanged. Therefore, in this case, when
two new subproblems are generated by splitting S into S_1 and S_2, we can use the
information of the optimal solution of the LP relaxation problem associated with S. By
the construction of the LP relaxation problems with respect to S_1 and S_2, only those
coefficients corresponding to λ_i* and λ_j* in the constraints (22) and (23) are changed, due
to the substitution of the column v_M - v_0 in W_{S_1} and W_{S_2}, respectively. Consequently,
by changing these coefficients, we can use the reoptimization technique to solve the LP
relaxation problem starting from the optimal solution corresponding to the simplex S.
Test Problems: The test problems are generated as follows. First, construct a simplex
with vertices (0, 0, 10), (10, 0, 0), (0, 10, 0) and (10, 10, 10). Calculate the maximum
inscribed sphere of the simplex (the radius is 2.88675). Then randomly generate
m points on the surface of the sphere. Construct tangent planes {(x, y, z) | a_i x + b_i y + c_i z = d_i}
passing through each of those m points, respectively. Let
H_i = {(x, y, z) | a_i x + b_i y + c_i z <= d_i} be the halfspace containing the inscribed sphere.
The intersection of the H_i with the simplex is the 3-dimensional polytope in which the sphere
packing problem is considered. Hence, the total number of linear constraints of the
polytope is m + 4. In our test, m was set to 4. The different pairs of sphere radii
that we used are shown in Tables 1-2.
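The generator can be reproduced along the following lines; the incenter formula (area-weighted average of the vertices) and the numerical tolerance are our own choices for the sketch, not taken from the paper.

import numpy as np

def tetra_insphere(V):
    # Incenter and inradius of a tetrahedron with vertex array V (4 x 3);
    # face i is the face opposite vertex i.
    faces = [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]
    areas = np.array([0.5 * np.linalg.norm(np.cross(V[b] - V[a], V[c] - V[a]))
                      for a, b, c in faces])
    center = (areas[:, None] * V).sum(axis=0) / areas.sum()
    volume = abs(np.linalg.det(V[1:] - V[0])) / 6.0
    return center, 3.0 * volume / areas.sum()

def random_polytope(m=4, seed=0):
    # Tetrahedron constraints plus m random tangent planes of its insphere.
    rng = np.random.default_rng(seed)
    V = np.array([[0, 0, 10], [10, 0, 0], [0, 10, 0], [10, 10, 10]], dtype=float)
    c, r = tetra_insphere(V)                      # r is about 2.88675, as quoted above
    A, b = [], []
    for face in [(1, 2, 3), (0, 2, 3), (0, 1, 3), (0, 1, 2)]:
        p0, p1, p2 = V[list(face)]
        n = np.cross(p1 - p0, p2 - p0)
        if n @ c > n @ p0:                        # orient so the incenter satisfies n.x <= n.p0
            n = -n
        A.append(n); b.append(n @ p0)
    for _ in range(m):
        u = rng.normal(size=3); u /= np.linalg.norm(u)
        p = c + r * u                             # tangency point; u is the outward normal
        A.append(u); b.append(u @ p)              # halfspace u.x <= u.p contains the sphere
    return np.array(A), np.array(b)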
Table 1: Combinations of radii
C_r   radius
6     1.00  0.50
Table 2: Combinations of radii
C_r   radius
The computational experiments were conducted on a DEC Alpha 21164 Workstation
(600MHz). We used CPLEX 6.5.1 as an LP solver for the relaxation problems. The
limits of the memory and the computational time were set at 512MB and 3600 seconds,
respectively. Besides the prototype algorithm given in Section 3, three variations were
also implemented. They were (1) Algorithm NSS, which uses the simplex splitting given
in Section 5.2, (2) Algorithm NMD, which uses the matrix decomposition given in Section
5.3, and (3) Algorithm XYZ, which uses the simplex in the XYZ space given in Section
5.4. For each algorithm, we tested 5 instances for each L and each pair C_r of radii
from Table 1. Since the other three algorithms reached either the limit of memory
or the limit of computational time for most of the larger instances, only the results of
Algorithm XYZ are presented for those cases. The data shown in Tables 4-7 and Table 10 represent the
average of the results. The legends used in the tables are given in Table 3.
From Tables 4-7 we observe that the computational time grows drastically as the
number of spheres packed is increased, since the dimension of the problem is (3 + K)L.
This outcome is consistent with the observation in [1, 9] that the computational time
Table
3: Legends used in the tables
legend meaning
L the maximum number of the spheres packed
C r the combination of the radii
#.LP the number of linear programs solved
#.T the number of cases terminated due to the limit of CPU time (3600 seconds)
#.E the number of cases terminated with an ε-optimal solution
#.M the number of cases terminated due to the limit of memory (512 MB)
Time CPU time (seconds)
(UB-LB)/UB the ratio of (UB - LB) to UB, where UB and LB are the
upper and lower bounds of the objective function value, respectively
Algorithm ORG the prototype algorithm given in Section 3
Algorithm NSS the algorithm using the simplex splitting given in Section 5.2
Algorithm NMD the algorithm using the spectral decomposition given in Section 5.3
Algorithm XYZ the algorithm using the simplex in the XYZ space in Section 5.4
increases exponentially as the dimension increases. There is no big difference between
Algorithm ORG and Algorithm NMD, in the sense of their gross behaviors. Algorithm
NSS solves fewer LPs than the two previously mentioned algorithms do, therefore, it needs
less time. This leads to the solution of a greater number of instances without violating
the limits on either memory or time. Since the simplex splitting is based on the length of
the first 3L coordinates, it avoids solving unnecessary subproblems.
Among all the algorithms, Algorithm XYZ shows the best performance. It solves all instances
except for one combination of radii, and it even successfully obtained
solutions for about half of the instances at the largest value of L tested. Two reasons for this behavior
are considered. First, since the dimension of simplices is only 3L, much less memory is
needed to keep the information on the simplices. Second, since the simplices involve only
the coordinates of variables (x, y, z), the simplex splitting has the favorable characteristic
of Algorithm NSS that fewer LPs are needed to be solved.
It should be noted that (i) the dimension of the instances solved in the previous
papers [1, 9] ranges up to 16, and (ii) the computational time and memory demand of the
simplicial branch-and-bound algorithm usually increase exponentially with the dimension
of the problem. Therefore, a modest increase in the dimension could put the solution out
of reach. In contrast, for the sphere packing problem discussed above, the dimension of
the instances solved by Algorithm XYZ extended considerably beyond this range.
Hence, we conclude that Algorithm XYZ gains efficiency, since it takes advantage of the
intrinsic structure of the problem.
Next, let us focus on Algorithm ORG and Algorithm XYZ and consider the influence
of the polytope P on their performance. Five instances of the polytope were generated
randomly, all with the same number of constraints. For a fixed L and C_r, the results are
shown in Tables 8-9. It is observed that even if L and C_r are identical, the comparative
behaviors of the algorithms are very sensitive to the shape of the polytope. This behavior
arises since (i) the quality of solutions generated by the heuristics depends on the shape
of the polytope and (ii) the initial simplex S_0 depends on the polytope. For a skinny
polytope, the volume of S_0 \ P would be large, which means that the algorithms
waste considerable effort solving LPs defined on this zone before reaching the optimal
solution.
Table 10 shows the results when three values of radii are considered (K = 3). Note
that (i) the value of K affects the number of quadratic constraints in (8), and (ii)
increasing the number of quadratic constraints increases the difficulty of the problem.
Comparing this case with the results for two radii (Table 7), we observed that there is no
big change in the number of LPs solved. The algorithm is less sensitive to the value of K
than to the size of L. This occurs since the numbers of both the quadratic constraints
(7) and (8) grow with increasing L.
Finally, all of the algorithms exhibited reduced performance for the combinations C_r
in which the differences between the radii are large.
Conclusions
In this paper, we considered the optimization of unequal sphere packing and demonstrated
the improvements over the existing simplicial branch-and-bound algorithm through advantageous
use of the intrinsic structure of the problem. Specifically, the computational
study showed that the improved Algorithm XYZ could solve instances of much larger
size. We observed that optimal solutions were found for many instances when the algorithm
reached the limitations of either computational time or memory. This signals that
Table 4: Results of Algorithm ORG (columns: Time, #.E, #.T, #.M)
the algorithm spends a large amount of effort verifying the optimality of the solution.
In turn, this behavior indicates that (1) developing an improved method of relaxation is
necessary for solving problems of larger size, (2) when the algorithm is terminated early,
the solution obtained can still be of high quality, and (3) a better heuristic method, which
could start from a feasible solution obtained within the branch-and-bound algorithm, would
be very important for obtaining approximate solutions to larger problems.
Table 5: Results of Algorithm NMD (columns: Time, #.E, #.T, #.M)
Table 6: Results of Algorithm NSS
Table 7: Results of Algorithm XYZ (columns: Time, #.E, #.T, #.M)
Table 8: Results of Algorithm ORG (column: Time)
Table 9: Results of Algorithm XYZ (column: Time)
Table 10: Results of Algorithm XYZ (columns: Time, #.E, #.T, #.M)
--R
A relaxation method for nonconvex quadratically constrained quadratic programs
Semidefinite programming relaxations for nonconvex quadratic programs
On generalized bisection of n-simplices
Handbook of Global Optimization
Global Optimization - Deterministic Approaches
SIAM Journal on Computing
Global Optimization in Action
A simplicial branch-and-bound method for solving nonconvex all-quadratic programs
A new reformulation-linearization technique for bilinear programming problems
A strip-packing algorithm with absolute performance bound 2
Packing of unequal spheres and automated radiosurgical treatment planning
Physics and dosimetry of the gamma knife
--TR
On three-dimensional packing
On generalized bisection of n-simplices
A Strip-Packing Algorithm with Absolute Performance Bound 2
A Simplicial Branch-and-Bound Method for Solving Nonconvex All-Quadratic Programs | LP relaxation;nonconvex quadratic programming;simplicial branch-and-bound algorithm;heuristic algorithms;unequal sphere packing problem |
606457 | Computing iceberg concept lattices with TITANIC. | We introduce the notion of iceberg concept lattices and show their use in knowledge discovery in databases. Iceberg lattices are a conceptual clustering method, which is well suited for analyzing very large databases. They also serve as a condensed representation of frequent itemsets, as starting point for computing bases of association rules, and as a visualization method for association rules. Iceberg concept lattices are based on the theory of Formal Concept Analysis, a mathematical theory with applications in data analysis, information retrieval, and knowledge discovery. We present a new algorithm called TITANIC for computing (iceberg) concept lattices. It is based on data mining techniques with a level-wise approach. In fact, TITANIC can be used for a more general problem: Computing arbitrary closure systems when the closure operator comes along with a so-called weight function. The use of weight functions for computing closure systems has not been discussed in the literature up to now. Applications providing such a weight function include association rule mining, functional dependencies in databases, conceptual clustering, and ontology engineering. The algorithm is experimentally evaluated and compared with Ganter's Next-Closure algorithm. The evaluation shows an important gain in efficiency, especially for weakly correlated data. | Introduction
Since its introduction, Association Rule Mining [1] has become one of the core data mining tasks, and has attracted
tremendous interest among data mining researchers and practitioners. It has an elegantly simple problem statement,
that is, to find the set of all subsets of items (called itemsets) that frequently occur in many database records or
transactions, and to extract the rules telling us how a subset of items influences the presence of another subset.
The prototypical application of associations is in market basket analysis, where the items represent products and
the records the point-of-sale data at large grocery or department stores. These kinds of databases are generally sparse,
i.e., the longest frequent itemsets are relatively short. However, there are many real-life datasets that are very dense, i.e.,
they contain very long frequent itemsets.
It is widely recognized that the set of association rules can rapidly grow to be unwieldy, especially as we lower
the frequency requirements. The larger the set of frequent itemsets, the larger the number of rules presented to the user,
many of which are redundant. This is true even for sparse datasets, but for dense datasets it is simply not feasible to
mine all possible frequent itemsets, let alone to generate rules between itemsets. In such datasets one typically finds an
exponential number of frequent itemsets. For example, finding long itemsets of length 30 or 40 is not uncommon [2].
In this paper we show that it is not necessary to mine all frequent itemsets to guarantee that all non-redundant
association rules will be found. We show that it is sufficient to consider only the closed frequent itemsets (to be defined
later). Further, all non-redundant rules are found by only considering rules among the closed frequent itemsets. The
set of closed frequent itemsets is a lot smaller than the set of all frequent itemsets, in some cases by 3 or more orders
of magnitude. Thus even in dense domains we can guarantee completeness, i.e., all non-redundant association rules
can be found.
The main computation intensive step in this process is to identify the closed frequent itemsets. It is not possible
to generate this set using Apriori-like [1] bottom-up search methods that examine all subsets of a frequent itemset.
Neither is it possible to mine these sets using algorithms for mining maximal frequent patterns like MaxMiner [2]
or Pincer-Search [9], since to find the closed itemsets all subsets of the maximal frequent itemsets would have to be
examined.
We introduce CHARM, an efficient algorithm for enumerating the set of all closed frequent itemsets. CHARM is
unique in that it simultaneously explores both the itemset space and transaction space, unlike all previous association
mining methods which only exploit the itemset search space. Furthermore, CHARM avoids enumerating all possible
subsets of a closed itemset when enumerating the closed frequent sets.
The exploration of both the itemset and transaction space allows CHARM to use a novel search method that skips
many levels to quickly identify the closed frequent itemsets, instead of having to enumerate many non-closed subsets.
Further, CHARM uses a two-pronged pruning strategy. It prunes candidates based not only on subset infrequency (i.e.,
no extensions of an infrequent itemset are tested), as do all association mining methods, but also on the non-closure
property, i.e., any non-closed itemset is pruned. Finally, CHARM uses no internal data structures like
hash-trees [1] or tries [3]. The fundamental operations used are the union of two itemsets and the intersection of the two
lists of transactions in which they are contained.
An extensive set of experiments confirms that CHARM provides orders of magnitude improvement over existing
methods for mining closed itemsets, even over methods like AClose [14], that are specifically designed to mine closed
itemsets. It makes far fewer database scans than the length of the longest closed frequent itemset found, and it scales
linearly in both the number of transactions and the number of closed itemsets found.
The rest of the paper is organized as follows. Section 2 describes the association mining task. Section 3 describes
the benefits of mining closed itemsets and rules among them. We present CHARM in Section 4. Related work is
discussed in Section 5. We present experiments in Section 6 and conclusions in Section 7.
Association Rules
The association mining task can be stated as follows: Let I = {1, 2, ..., m} be a set of items, and let
T = {1, 2, ..., n} be a set of transaction identifiers or tids. The input database is a binary relation δ ⊆ I × T. If an
item i occurs in a transaction t, we write it as (i, t) ∈ δ, or alternately as i δ t. Typically the database is arranged
as a set of transactions, where each transaction contains a set of items. For example, consider the database shown in
Figure 1, used as a running example throughout this paper. Here I = {A, C, D, T, W}, and T = {1, 2, ..., 6}.
The second transaction can be represented as {C δ 2, D δ 2, W δ 2}; all such pairs from all transactions, taken together,
form the binary relation δ.
A set X ⊆ I is also called an itemset, and a set Y ⊆ T is called a tidset. For convenience we write an itemset
{A, C, W} as ACW, and a tidset {2, 4, 5} as 245. The support of an itemset X, denoted σ(X), is the number of
transactions in which it occurs as a subset. An itemset is frequent if its support is greater than or equal to a user-specified
minimum support (minsup) value, i.e., if σ(X) ≥ minsup.
An association rule is an expression X_1 →^p X_2, where X_1 and X_2 are itemsets, and X_1 ∩ X_2 = ∅. The support
of the rule is given as σ(X_1 ∪ X_2) (i.e., the joint probability of a transaction containing both X_1 and X_2), and the
confidence as p = σ(X_1 ∪ X_2)/σ(X_1) (i.e., the conditional probability that a transaction contains X_2, given that it
contains X_1). A rule is frequent if the itemset X_1 ∪ X_2 is frequent. A rule is confident if its confidence is greater than
or equal to a user-specified minimum confidence (minconf) value, i.e., p ≥ minconf.
The association rule mining task consists of two steps [1]: 1) find all frequent itemsets, and 2) generate high
confidence rules.
Finding frequent itemsets This step is computationally and I/O intensive. Consider Figure 1, which shows a
bookstore database with six customers who buy books by different authors. It shows all the frequent itemsets with
minsup = 50% (i.e., itemsets occurring in at least 3 transactions). ACTW and CDW are the maximal-by-inclusion frequent itemsets (i.e., they
are not a subset of any other frequent itemset).
Let m = |I| be the number of items. The search space for enumeration of all frequent itemsets is 2^m, which
is exponential in m. One can prove that the problem of finding a frequent set of a certain size is NP-Complete, by
reducing it to the balanced bipartite clique problem, which is known to be NP-Complete [8, 18]. However, if we
assume that there is a bound on the transaction length, the task of finding all frequent itemsets is essentially linear in
the database size, since the overall complexity in this case is given as O(r · n · 2^l), where n = |T| is the number of
transactions, l is the length of the longest frequent itemset, and r is the number of maximal frequent itemsets.
DATABASE
Items: A = Jane Austen, C = Agatha Christie, D = Sir Arthur Conan Doyle, T = Mark Twain, W = P. G. Wodehouse
Transaction  Items
1            A C T W
2            C D W
3            A C T W
4            A C D W
5            A C D T W
6            C D T
ALL FREQUENT ITEMSETS (minsup = 50%)
Support   Itemsets
100% (6)  C
83%  (5)  W, CW
67%  (4)  A, D, T, AC, AW, CD, CT, ACW
50%  (3)  AT, DW, TW, ACT, ATW, CDW, CTW, ACTW
Figure 1: Generating Frequent Itemsets
Generating confident rules This step is relatively straightforward; rules of the form X' →^p (X - X'), where X' ⊂ X, are generated
for all frequent itemsets X, provided p ≥ minconf. For an itemset of size k there are
potentially 2^k - 2 confident rules that can be generated. This follows from the fact that we must consider each subset
of the itemset as an antecedent, except for the empty and the full itemset. The complexity of the rule generation step
is thus O(s · 2^l), where s is the number of frequent itemsets, and l is the length of the longest frequent itemset (note that s can be
O(r · 2^l), where r is the number of maximal frequent itemsets). For example, from the frequent itemset ACW we can
generate 6 possible rules (all of them have support of 4): A →^{1.0} CW, C →^{0.67} AW, W →^{0.8} AC, AC →^{1.0} W, AW →^{1.0} C, and CW →^{0.8} A.
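As an illustration of this step, the following sketch enumerates the rules of a single frequent itemset, assuming a dictionary `support` from frozensets to counts has already been produced by the first step; the dictionary shown is just the relevant slice of the running example, and the names are illustrative.

from itertools import combinations

def rules_from_itemset(itemset, support, minconf):
    # Generate all confident rules X' -> (itemset - X') from one frequent itemset.
    itemset = frozenset(itemset)
    rules = []
    for k in range(1, len(itemset)):             # proper, non-empty antecedents
        for antecedent in combinations(itemset, k):
            antecedent = frozenset(antecedent)
            conf = support[itemset] / support[antecedent]
            if conf >= minconf:
                rules.append((antecedent, itemset - antecedent, conf))
    return rules

support = {frozenset('A'): 4, frozenset('C'): 6, frozenset('W'): 5,
           frozenset('AC'): 4, frozenset('AW'): 4, frozenset('CW'): 5,
           frozenset('ACW'): 4}
print(rules_from_itemset('ACW', support, minconf=0.8))   # 5 of the 6 rules pass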
3 Closed Frequent Itemsets
In this section we develop the concept of closed frequent itemsets, and show that this set is necessary and sufficient to
capture all the information about frequent itemsets, and has smaller cardinality than the set of all frequent itemsets.
3.1 Partial Order and Lattices
We first introduce some lattice theory concepts (see [4] for a good introduction).
Let P be a set. A partial order on P is a binary relation ≤ such that, for all x, y, z ∈ P, the relation is: 1)
reflexive: x ≤ x; 2) anti-symmetric: x ≤ y and y ≤ x implies x = y; 3) transitive: x ≤ y and y ≤ z implies
x ≤ z. The set P with the relation ≤ is called an ordered set, and it is denoted as a pair (P, ≤). We write x < y if
x ≤ y and x ≠ y.
Let (P, ≤) be an ordered set, and let S be a subset of P. An element u ∈ P is an upper bound of S if s ≤ u for all
s ∈ S; an element l ∈ P is a lower bound of S if l ≤ s for all s ∈ S. The least upper bound is called the join of S,
and is denoted as ∨S, and the greatest lower bound is called the meet of S, and is denoted as ∧S. If
S = {x, y}, we also write x ∨ y for the join, and x ∧ y for the meet.
An ordered set (L, ≤) is a lattice if for any two elements x and y in L, the join x ∨ y and the meet x ∧ y exist.
L is a complete lattice if ∨S and ∧S exist for all S ⊆ L. Any finite lattice is complete. L is called a join semilattice
if only the join exists. L is called a meet semilattice if only the meet exists.
Let P(S) denote the power set of S (i.e., the set of all subsets of S). The ordered set (P(S), ⊆) is a complete lattice,
where the meet is given by set intersection, and the join is given by set union. For example, the partial orders (P(I), ⊆),
the set of all possible itemsets, and (P(T), ⊆), the set of all possible tidsets, are both complete lattices.
The set of all frequent itemsets, on the other hand, is only a meet-semilattice. For example, consider Figure 2,
which shows the semilattice of all frequent itemsets found in our example database (from Figure 1). For any two
itemsets, only their meet is guaranteed to be frequent, while their join may or may not be frequent. This follows from
the well known principle in association mining that if an itemset is frequent, then all its subsets are also frequent. For
example, AC ∧ AT = A is frequent. For the join, while AC ∨ AT = ACT is frequent, AC ∨ DW = ACDW is not frequent.
Figure 2: Meet Semi-lattice of Frequent Itemsets
3.2 Closed Itemsets
Let the binary relation δ ⊆ I × T be the input database for association mining. Let X ⊆ I, and Y ⊆ T. Then the
mappings
t : P(I) → P(T), t(X) = {y ∈ T | for all x ∈ X, x δ y}, and
i : P(T) → P(I), i(Y) = {x ∈ I | for all y ∈ Y, x δ y}
define a Galois connection between the partial orders (P(I), ⊆) and (P(T), ⊆), the power sets of I and T, respectively.
We denote a (X, t(X)) pair as X × t(X), and a (i(Y), Y) pair as i(Y) × Y. Figure 3 illustrates the two
mappings. The mapping t(X) is the set of all transactions (tidset) which contain the itemset X; similarly, i(Y) is the
itemset that is contained in all the transactions in Y. For example, t(ACW) = 1345 and i(245) = CDW. In terms
of individual elements, t(X) = ∩_{x ∈ X} t(x), and i(Y) = ∩_{y ∈ Y} i(y). For example,
t(ACW) = t(A) ∩ t(C) ∩ t(W) = 1345 ∩ 123456 ∩ 12345 = 1345.
The Galois connection satisfies the following properties (where X, X_1, X_2 ⊆ I and Y, Y_1, Y_2 ⊆ T):
1) X_1 ⊆ X_2 implies t(X_1) ⊇ t(X_2); 2) Y_1 ⊆ Y_2 implies i(Y_1) ⊇ i(Y_2); 3) X ⊆ i(t(X)) and Y ⊆ t(i(Y)).
For example, for 245 ⊆ 2456, we have i(245) = CDW ⊇ CD = i(2456).
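The two mappings are easy to state operationally; the following sketch computes them on the running example (the horizontal dictionary `db` is simply Figure 1 rewritten in Python, and the names are illustrative).

db = {1: set('ACTW'), 2: set('CDW'), 3: set('ACTW'),
      4: set('ACDW'), 5: set('ACDTW'), 6: set('CDT')}

def t(X):
    # Tidset of all transactions containing every item of X.
    return {tid for tid, items in db.items() if set(X) <= items}

def i(Y):
    # Itemset common to all transactions in the tidset Y (assume Y non-empty).
    return set.intersection(*(db[tid] for tid in Y))

print(sorted(t('ACW')))       # [1, 3, 4, 5]
print(sorted(i({2, 4, 5})))   # ['C', 'D', 'W']
print(sorted(i(t('AC'))))     # ['A', 'C', 'W'] -- the closure c_it(AC) = ACW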
Let S be a set. A function c : P(S) → P(S) is a closure operator on S if, for all X, Y ⊆ S, c satisfies the
following properties: 1) extension: X ⊆ c(X); 2) monotonicity: if X ⊆ Y, then c(X) ⊆ c(Y); 3) idempotency:
c(c(X)) = c(X). A subset X of S is called closed if c(X) = X.
Let X ⊆ I and Y ⊆ T. Let c_it(X) denote the composition of the two mappings, i ∘ t(X) = i(t(X)).
Dually, let c_ti(Y) = t ∘ i(Y) = t(i(Y)). Then c_it : P(I) → P(I) and c_ti : P(T) → P(T) are both closure
operators, on itemsets and on tidsets respectively.
We define a closed itemset as an itemset X that is the same as its closure, i.e., X = c_it(X). For example, the itemset
ACW is closed. A closed tidset is a tidset Y with Y = c_ti(Y). For example, the tidset 1345 is closed.
Figure 3: Galois Connection
Figure 4: Closure Operator: Round-Trip
The mappings c_it and c_ti, being closure operators, satisfy the three properties of extension, monotonicity, and
idempotency. We also call the application of i ∘ t or t ∘ i a round-trip. Figure 4 illustrates this round-trip starting
with an itemset X. For example, let X = AC; then the extension property says that X is a subset of its closure,
since c_it(AC) = i(t(AC)) = i(1345) = ACW. Since AC ≠ c_it(AC) = ACW, we conclude that AC is not
closed. On the other hand, the idempotency property says that once we map an itemset to the tidset that contains
it, and then map that tidset back to the set of items common to all tids in the tidset, we obtain a closed itemset.
After this, no matter how many such round-trips we make, we cannot extend a closed itemset. For example, after
one round-trip for AC we obtain the closed itemset ACW. If we perform another round-trip on ACW, we get
c_it(ACW) = i(t(ACW)) = i(1345) = ACW.
For any closed itemset X, there exists a closed tidset Y, given by Y = t(X), with the property that X = i(Y)
(conversely, for any closed tidset there exists a closed itemset). We can see that X is closed by the fact that
X = i(Y); plugging in Y = t(X) gives X = i(t(X)) = c_it(X), thus X is closed. Dually, Y is closed. For example,
we have seen above that for the closed itemset ACW the associated closed tidset is 1345. Such a closed itemset and
closed tidset pair X × Y is called a concept.
Figure 5: Galois Lattice of Concepts
Figure 6: Frequent Concepts
A concept X_1 × Y_1 is a subconcept of X_2 × Y_2, denoted as X_1 × Y_1 ≤ X_2 × Y_2, iff X_1 ⊆ X_2 (equivalently, Y_2 ⊆ Y_1).
Let B(δ) denote the set of all possible concepts in the database; then the ordered set (B(δ), ≤) is a complete lattice,
called the Galois lattice. For example, Figure 5 shows the Galois lattice for our example database, which has a total
of 10 concepts. The least element is the concept C × 123456 and the greatest element is the concept ACDTW × 5.
Notice that the mappings between the closed pairs of itemsets and tidsets are anti-isomorphic, i.e., concepts with large
cardinality itemsets have small tidsets, and vice versa.
3.3 Closed Frequent Itemsets vs. All Frequent Itemsets
We begin this section by defining the join and meet operation on the concept lattice (see [5] for the formal proof): The
set of all concepts in the database relation -, given by (B(-); ) is a (complete) lattice with join and meet given by
For the join and meet of multiple concepts, we simply take the unions and joins over all of them. For example, consider
the join of two concepts, (ACDW 45) _ (CDT
On the other hand their meet is given as, (ACDW
Similarly, we can perform multiple concept joins or meets; for example, (CT 1356)_
We define the support of a closed itemset X or a concept X Y as the cardinality of the closed tidset
closed itemset or a concept is frequent if its support is at least minsup. Figure 6 shows
all the frequent concepts with tidset cardinality at least 3). The frequent concepts, like the
frequent itemsets, form a meet-semilattice, where the meet is guaranteed to exist, while the join may not.
Theorem 1 For any itemset X, its support is equal to the support of its closure, i.e., σ(X) = σ(c_it(X)).
PROOF: The support of an itemset X is the number of transactions where it appears, which is exactly the cardinality
of the tidset t(X), i.e., σ(X) = |t(X)| and σ(c_it(X)) = |t(c_it(X))|. Thus, to prove the lemma, we have to show that
t(X) = t(c_it(X)). Since c_ti is a closure operator, it satisfies the extension property, i.e., t(X) ⊆ c_ti(t(X)) = t(i(t(X))) = t(c_it(X)).
Thus t(X) ⊆ t(c_it(X)). On the other hand, since c_it is also a closure operator, X ⊆ c_it(X), which in turn implies that
t(X) ⊇ t(c_it(X)), due to property 1) of Galois connections. Thus t(X) = t(c_it(X)).
This lemma states that all frequent itemsets are uniquely determined by the frequent closed itemsets (or frequent
concepts). Furthermore, the set of frequent closed itemsets is contained in the set of frequent itemsets, and is
typically much smaller, especially for dense datasets (where there can be orders of magnitude differences). To illustrate
the benefits of closed itemset mining, contrast Figure 2, showing the set of all frequent itemsets, with Figure 6, showing
the set of all closed frequent itemsets (or concepts). We see that while there are only 7 closed frequent itemsets, there
are 19 frequent itemsets. This example clearly illustrates the benefits of mining the closed frequent itemsets.
3.4 Rule Generation
Recall that an association rule is of the form X_1 →^p X_2, where X_1, X_2 ⊆ I. Its support equals σ(X_1 ∪ X_2) = |t(X_1 ∪ X_2)|, and its
confidence is given as p = |t(X_1 ∪ X_2)| / |t(X_1)|. We are interested in finding all high-support (at least minsup) and
high-confidence (at least minconf) rules.
It is widely recognized that the set of such association rules can rapidly grow to be unwieldy. The larger the set of
frequent itemsets, the larger the number of rules presented to the user. However, we show below that it is not necessary
to mine rules from all frequent itemsets, since most of these rules turn out to be redundant. In fact, it is sufficient to
consider only the rules among closed frequent itemsets (or concepts), as stated in the theorem below.
Theorem 2 The rule X_1 →^p X_2 is equivalent to the rule c_it(X_1) →^q c_it(X_2), with p = q.
PROOF: It follows immediately from the fact that the support of an itemset X is equal to the support of its closure
c_it(X), i.e., σ(X) = σ(c_it(X)). Using this fact we can show that the two rules have the same support and confidence:
σ(X_1 ∪ X_2) = σ(c_it(X_1 ∪ X_2)) = σ(c_it(X_1) ∪ c_it(X_2)), and hence p = σ(X_1 ∪ X_2)/σ(X_1) = σ(c_it(X_1) ∪ c_it(X_2))/σ(c_it(X_1)) = q.
There are typically many (in the worst case, an exponential number of) frequent itemsets that map to the same
closed frequent itemset. Let's assume that there are n itemsets, given by the set S_1, whose closure is C_1, and m
itemsets, given by the set S_2, whose closure is C_2. Then all n·m - 1 rules between two itemsets
directed from S_1 to S_2, other than the rule between the closed itemsets themselves, are redundant: they are all
equivalent to the rule C_1 → C_2. Further, the m·n - 1 rules
directed from S_2 to S_1 are also redundant, and equivalent to the rule C_2 → C_1. For example, looking at Figure 2
we find that the itemsets D and CD map to the closed itemset CD, and the itemsets W and CW map to the closed
itemset CW. Considering rules from the former to the latter set, we find that the rules D →^{3/4} W, D →^{3/4} CW, and
CD →^{3/4} W are all equivalent to the rule between closed itemsets, CD →^{3/4} CW. On the other hand, if we consider
the rules from the latter set to the former, we find that W →^{3/5} D, W →^{3/5} CD, and CW →^{3/5} D are all equivalent to the
rule CW →^{3/5} CD.
We should present to the user the most general rules (other rules are more specific; they contain one or more
additional items in the antecedent or consequent) for each direction, i.e., the rules D →^{3/4} W (p = 0.75) and W →^{3/5} D
(p = 0.6). Thus, using the closed frequent itemsets, we would generate only 2 rules instead of the 8 rules normally
generated between the two sets. To get an idea of the number of redundant rules mined in traditional association
mining, for one dataset (mushroom), at 10% minimum support, we found 574513 frequent itemsets, out of which only
a small fraction were closed, a reduction of more than 100 times!
4 CHARM: Algorithm Design and Implementation
Having developed the main ideas behind closed association rule mining, we now present CHARM, an efficient algorithm
for mining all the closed frequent itemsets. We will first describe the algorithm in general terms, independent
of the implementation details. We then show how the algorithm can be implemented efficiently. This separation of
design and implementation aids comprehension, and allows the possibility of multiple implementations.
CHARM is unique in that it simultaneously explores both the itemset space and tidset space, unlike all previous
association mining methods which only exploit the itemset space. Furthermore, CHARM avoids enumerating all
possible subsets of a closed itemset when enumerating the closed frequent sets, which rules out a pure bottom-up
search. This property is important in mining dense domains with long frequent itemsets, where bottom-up approaches
are not practical (for example if the longest frequent itemset is l, then bottom-up search enumerates all 2 l frequent
subsets).
The exploration of both the itemset and tidset space allows CHARM to use a novel search method that skips
many levels to quickly identify the closed frequent itemsets, instead of having to enumerate many non-closed subsets.
Further, CHARM uses a two-pronged pruning strategy. It prunes candidates based not only on subset infrequency (i.e.,
no extensions of an infrequent itemset are tested) as do all association mining methods, but it also prunes branches
based on the non-closure property, i.e., any non-closed itemset is pruned. Finally, CHARM uses no internal data structures
like hash-trees [1] or tries [3]. The fundamental operations used are the union of two itemsets and the intersection of
their tidsets.
Figure 7: Complete Subset Lattice
Consider Figure 7 which shows the complete subset lattice (only the main parent link has been shown to reduce
clutter) over the five items in our example database (see Figure 1). The idea in CHARM is to process each lattice node
to test if its children are frequent. All infrequent, as well as non-closed branches are pruned. Notice that the children
of each node are formed by combining the node by each of its siblings that come after it in the branch ordering. For
example, A has to be combined with its siblings C; D;T and W to produce the children AC;AD;AT and AW .
A sibling need not be considered if it has already been pruned because of infrequency or non-closure. While
a lexical ordering of branches is shown in the figure, we will see later how a different branch ordering (based on
support) can improve the performance of CHARM (a similar observation was made in MaxMiner [2]). While many
search schemes are possible (e.g., breadth-first, depth-first, best-first, or other hybrid search), CHARM performs a
depth-first search of the subset lattice.
4.1 CHARM: Algorithm Design
In this section we assume that for any itemset X , we have access to its tidset t(X), and for any tidset Y we have access
to its itemset i(Y ). How to practically generate t(X) or i(Y ) will be discussed in the implementation section.
CHARM actually enumerates all the frequent concepts in the input database. Recall that a concept is given as X × Y, where X = i(Y) is a
closed itemset and Y = t(X) is a closed tidset. We can start the search for concepts
over the tidset space or the itemset space. However, typically the number of items is a lot smaller than the number of
transactions, and since we are ultimately interested in the closed itemsets, we start the search with the single items,
and their associated tidsets.
Figure 8: Basic Properties of Itemsets and Tidsets
4.1.1 Basic Properties of Itemset-Tidset Pairs
Let f : P(I) → N be a one-to-one mapping from itemsets to integers. For any two itemsets X_1 and X_2, we say
X_1 ≤ X_2 iff f(X_1) ≤ f(X_2); f thus defines a total order over the set of all itemsets. For example, if f denotes the
lexicographic ordering, then the itemset AC < AD. As another example, if f sorts itemsets in increasing order of their
support, then AD < AC if the support of AD is less than the support of AC.
Let's assume that we are processing the branch X_1 × t(X_1), and we want to combine it with its sibling X_2 × t(X_2).
That is, X_1 ≤ X_2 (under a suitable total order f). The main computation in CHARM relies on the following properties.
1. If t(X_1) = t(X_2), then c_it(X_1) = c_it(X_2) = c_it(X_1 ∪ X_2). Thus we can simply replace every
occurrence of X_1 with X_1 ∪ X_2, and remove X_2 from further consideration, since its closure is identical to the
closure of X_1 ∪ X_2. In other words, we treat X_1 ∪ X_2 as a composite itemset.
2. If t(X_1) ⊂ t(X_2), then c_it(X_1) ≠ c_it(X_2), but c_it(X_1) = c_it(X_1 ∪ X_2). Here we can replace every
occurrence of X_1 with X_1 ∪ X_2, since if X_1 occurs in any transaction, then X_2 always occurs there too.
But since t(X_1) ≠ t(X_2), we cannot remove X_2; it generates a different closure.
3. If t(X_1) ⊃ t(X_2), then c_it(X_1) ≠ c_it(X_2), but c_it(X_2) = c_it(X_1 ∪ X_2). In this case we replace every occurrence
of X_2 with X_1 ∪ X_2, but X_1 produces a different closure,
and it must be retained.
4. If t(X_1) ≠ t(X_2) and neither tidset contains the other, then c_it(X_1) ≠ c_it(X_2) ≠ c_it(X_1 ∪ X_2). In this case, nothing can be eliminated;
both X_1 and X_2 lead to different closures.
Figure 8 pictorially depicts the four cases. We see that only closed tidsets are retained after we combine two itemset-tidset
pairs. For example, if the two tidsets are equal, one of them is pruned (Property 1). If one tidset is a subset of
another, then the resulting tidset is equal to the smaller tidset from the parent and we eliminate that parent (Properties
2 and 3). Finally, if the tidsets are unequal, then those two and their intersection are all closed.
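The case analysis can be written down directly; the sketch below classifies one combination step, with the action left to the caller (the string labels and names are just for illustration).

def combine(X1, T1, X2, T2, minsup):
    # Classify the combination of X1 x T1 with X2 x T2 (X1 comes before X2 in the order).
    Y = T1 & T2
    if len(Y) < minsup:
        return 'prune', None                     # subset infrequency
    if T1 == T2:
        return 'property 1', (X1 | X2, Y)        # replace X1 by X1 u X2, drop X2
    if T1 < T2:
        return 'property 2', (X1 | X2, Y)        # replace X1 by X1 u X2, keep X2
    if T1 > T2:
        return 'property 3', (X1 | X2, Y)        # drop X2, add new node (X1 u X2) x Y
    return 'property 4', (X1 | X2, Y)            # keep both, add new node (X1 u X2) x Y

# Combining A x 1345 with C x 123456 falls under Property 2:
print(combine({'A'}, {1, 3, 4, 5}, {'C'}, {1, 2, 3, 4, 5, 6}, minsup=3))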
Example Before formally presenting the algorithm, we show how the four basic properties of itemset-tidset pairs
are exploited in CHARM to mine the closed frequent itemsets.
Figure 9: CHARM: Lexicographic Order
Figure 10: CHARM: Sorted by Increasing Support
Consider Figure 9. Initially we have five branches, corresponding to the five items and their tidsets from our
example database (recall that we used minsup = 3, i.e., 50%). To generate the children of item A (or the pair A × 1345) we
need to combine it with all siblings that come after it. When we combine two pairs X_1 × t(X_1) and X_2 × t(X_2),
the resulting pair is given as (X_1 ∪ X_2) × (t(X_1) ∩ t(X_2)). In other words, we need to perform the intersection of the
corresponding tidsets whenever we combine two or more itemsets.
When we try to extend A with C, we find that Property 2 is true, i.e., t(A) ⊂ t(C). We can
thus remove A and replace it with AC. Combining A with D produces an infrequent set ACD, which is pruned.
Combination with T produces the pair ACT × 135; Property 4 holds here, so nothing can be pruned. When we try
to combine A with W we find that t(A) ⊂ t(W). According to Property 2, we replace all unpruned occurrences
of A with AW. Thus AC becomes ACW and ACT becomes ACTW. At this point there is nothing further to be
processed from the A branch of the root.
We now start processing the C branch. When we combine C with D we observe that Property 3 holds, i.e., t(C) ⊃ t(D).
This means that wherever D occurs, C always occurs. Thus D can be removed from further consideration, and
the entire D branch is pruned; the child CD replaces D. Exactly the same scenario occurs with T and W. Both
branches are pruned and are replaced by CT and CW as children of C. Continuing in a depth-first manner, we next
process the node CD. Combining it with CT produces an infrequent itemset CDT, which is pruned. Combination
with CW produces CDW, and since Property 4 holds, nothing can be removed. Similarly the combination of CT and
CW produces CTW. At this point all branches have been processed.
Finally, we remove CTW × 135 since it is contained in ACTW × 135. As we can see, in just 10 steps we have
identified all 7 closed frequent itemsets.
4.1.2 CHARM: Pseudo-Code Description
Having illustrated the workings of CHARM on our example database, we now present the pseudo-code for the algorithm
itself.
The algorithm starts by initializing the set of nodes to be examined to the frequent single items and their tidsets in
Line 1. The main computation is performed in CHARM-EXTEND which returns the set of closed frequent itemsets C.
CHARM-EXTEND is responsible for testing each branch for viability. It extracts each itemset-tidset pair in the
current node set Nodes (X_i × t(X_i), Line 3), and combines it with the other pairs that come after it (X_j × t(X_j),
Line 5) according to the total order f (we have already seen an example of lexical ordering in Figure 9; we will look
at support based ordering below). The combination of the two itemset-tidset pairs is computed in Line 6. The routine
CHARM-PROPERTY tests the resulting set for required support and also applies the four properties discussed above.
Note that this routine may modify the current node set by deleting itemset-tidset pairs that are already contained in
other pairs. It also inserts the newly generated children frequent pairs in the set of new nodes NewN . If this set is
non-empty we recursively process it in depth-first manner (Line 8). We then insert the possibly extended itemset X
of X_i into the set of closed itemsets C, since it cannot be processed further; at this stage any closed itemset containing X_i
has already been generated. We then return to Line 3 to process the next (unpruned) branch.
The routine CHARM-PROPERTY simply tests if a new pair is frequent, discarding it if it is not. It then tests each
of the four basic properties of itemset-tidset pairs, extending existing itemsets, removing some subsumed branches
from the current set of nodes, or inserting new pairs in the node set for the next (depth-first) step.
CHARM (δ ⊆ I × T, minsup):
1. Nodes = { I_j × t(I_j) : I_j ∈ I and |t(I_j)| ≥ minsup }
2. CHARM-EXTEND (Nodes, C)
CHARM-EXTEND (Nodes, C):
3. for each X_i × t(X_i) in Nodes
4. X = X_i and NewN = ∅
5. for each X_j × t(X_j) in Nodes, with f(j) > f(i)
6. X = X_i ∪ X_j and Y = t(X_i) ∩ t(X_j)
7. CHARM-PROPERTY(Nodes, NewN)
8. if NewN ≠ ∅ then CHARM-EXTEND (NewN, C)
9. C = C ∪ X // if X is not subsumed
CHARM-PROPERTY (Nodes, NewN):
10. if (|Y| ≥ minsup) then
11. if t(X_i) = t(X_j) then // Property 1
12. Remove X_j from Nodes
13. Replace all X_i with X
14. else if t(X_i) ⊂ t(X_j) then // Property 2
15. Replace all X_i with X
16. else if t(X_i) ⊃ t(X_j) then // Property 3
17. Remove X_j from Nodes
18. Add X × Y to NewN
19. else if t(X_i) ≠ t(X_j) then // Property 4
20. Add X × Y to NewN
Figure 11: The CHARM Algorithm
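A compact, self-contained Python rendering of this search is sketched below. For brevity it computes the closure of each saved tidset explicitly (by intersecting the corresponding transactions) instead of performing the in-place "Replace all X_i with X" bookkeeping and the subsumption check of the pseudo-code, so it keeps the itemset-tidset extension and the support-based reordering but trades away some of CHARM's pruning. Run on the example database it prints the seven closed frequent itemsets.

from collections import defaultdict

def closed_frequent_itemsets(db, minsup):
    # db: tid -> set of items; returns {closed itemset -> its tidset}.
    tidsets = defaultdict(set)                    # vertical format: item -> tidset
    for tid, items in db.items():
        for item in items:
            tidsets[item].add(tid)

    def closure(T):
        # i(T): the items common to every transaction in T (T is non-empty here).
        return frozenset(set.intersection(*(db[tid] for tid in T)))

    concepts = {}

    def extend(nodes):
        # nodes: list of (itemset, tidset) pairs, sorted by increasing support.
        for idx, (Xi, Ti) in enumerate(nodes):
            X, T = set(Xi), Ti
            new_nodes = []
            for Xj, Tj in nodes[idx + 1:]:
                Y = T & Tj
                if len(Y) < minsup:
                    continue                      # subset-infrequency pruning
                if T <= Tj:
                    X |= Xj                       # Properties 1 and 2: extend X in place
                else:
                    new_nodes.append((X | Xj, Y)) # Properties 3 and 4: new child node
            if new_nodes:
                new_nodes.sort(key=lambda p: len(p[1]))
                extend(new_nodes)
            concepts[closure(T)] = frozenset(T)   # CHARM instead records X itself and
                                                  # eliminates subsumed itemsets later
    initial = [(frozenset([it]), frozenset(ts))
               for it, ts in tidsets.items() if len(ts) >= minsup]
    initial.sort(key=lambda p: len(p[1]))         # increasing support, as in Figure 10
    extend(initial)
    return concepts

db = {1: set('ACTW'), 2: set('CDW'), 3: set('ACTW'),
      4: set('ACDW'), 5: set('ACDTW'), 6: set('CDT')}
for itemset, tids in sorted(closed_frequent_itemsets(db, 3).items(),
                            key=lambda p: len(p[1])):
    print(''.join(sorted(itemset)), sorted(tids))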
4.1.3 Branch Reordering
We purposely let the itemset-tidset pair ordering function in Line 5 remain unspecified. The usual manner of processing
is in lexicographic order, but we can specify any other total order we want. The most promising approach is to sort the
itemsets based on their support. The motivation is to increase opportunity for non-closure based pruning of itemsets.
A quick look at Properties 1 and 2 tells us that these two situations are preferred over the other two cases. For Property
1, the closure of the two itemsets is equal, and thus we can discard X_j and replace X_i with X_i ∪ X_j. For Property 2,
we can still replace X_i with X_i ∪ X_j. Note that in both these cases we do not insert anything in the new nodes! Thus
the more the occurrence of case 1 and 2, the fewer levels of search we perform. In contrast, the occurrence of cases 3
and 4 results in additions to the set of new nodes, requiring additional levels of processing. Note that the reordering is
applied for each new node set, starting with the initial branches.
Since we want t(X_i) to be a subset of (or equal to) t(X_j) as often as possible, it follows that we should sort the itemsets in increasing order of
their support. Thus larger tidsets occur later in the ordering and we maximize the occurrence of Properties 1 and 2. By
similar reasoning, sorting by decreasing order of support doesn't work very well, since it maximizes the occurrence of
Properties 3 and 4, increasing the number of levels of processing.
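The reordering itself is a one-line sort of the current node set on tidset size; on the running example it reproduces the order used in Figure 10 (variable names are illustrative).

tidsets = {'A': {1, 3, 4, 5}, 'C': {1, 2, 3, 4, 5, 6}, 'D': {2, 4, 5, 6},
           'T': {1, 3, 5, 6}, 'W': {1, 2, 3, 4, 5}}
# Process the initial branches in increasing order of support.
order = sorted(tidsets, key=lambda item: len(tidsets[item]))
print(order)    # ['A', 'D', 'T', 'W', 'C']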
Example Figure 10 shows how CHARM works on our example database if we sort itemsets in increasing order of
support. We will use the pseudo-code to illustrate the computation. We initialize Nodes = {A × 1345, D × 2456, T × 1356, W × 12345, C × 123456}
in Line 1.
At Line 3 we first process the branch A × 1345 (we set X = A in Line 4); it will be combined with the remaining
siblings in Line 5. AD is not frequent and is pruned. We next look at A and T; since t(A) ≠ t(T), we simply insert
AT in NewN. We next find that t(A) ⊂ t(W). Thus we replace all occurrences of A with AW (thus X = AW),
which means that we also change AT in NewN to ATW. Looking at A and C, we find that t(A) ⊂ t(C). Thus
AW becomes ACW, and ATW in NewN becomes ACTW. At this point CHARM-EXTEND is
invoked with the non-empty NewN (Line 8). But since there is only one element, we immediately exit after adding
ACTW × 135 to the set of closed frequent itemsets C (Line 9).
When we return, the A branch has been completely processed, and we add ACW × 1345 to C. The other branches
are examined in turn, and the final C is produced as shown in Figure 10. One final note: the pair CTW × 135 produced
from the T branch is not closed, since it is subsumed by ACTW × 135, and it is eliminated in Line 9.
4.2 CHARM: Implementation Details
We now describe the implementation details of CHARM and how it departs from the pseudo-code in some instances
for performance reasons.
Data Format Given that we are manipulating itemset-tidset pairs, and that the fundamental operation is that of
intersecting two tidsets, CHARM uses a vertical data format, where we maintain a disk-based list for each item,
listing the tids where that item occurs. In other words, the data is organized so that we have available on disk the
tidset for each item. In contrast most of the current association algorithms [1, 2, 3] assume a horizontal database
layout, consisting of a list of transactions, where each transaction has an identifier followed by a list of items in that
transaction.
The vertical format has been shown to be successful for association mining. It has been used in Partition [16],
in (Max)Eclat and (Max)Clique [19], and shown to lead to very good performance. In fact, the Vertical algorithm
[15] was shown to be the best approach (better than horizontal) when tightly integrating association mining with
database systems. The benefits of using the vertical format have further been demonstrated in Monet [12], a new
high-performance database system for query-intensive applications like OLAP and data mining.
Intersections and Subset Testing Given the availability of vertical tidsets for each itemset, the computation of the
tidset intersection for a new combination is straightforward. All it takes is a linear scan through the two tidsets, storing
matching tids in a new tidset. For example, we have t(A) ∩ t(D) = 1345 ∩ 2456 = 45.
The main question is how to efficiently compute the subset information required while applying the four properties.
At first this might appear like an expensive operation, but in fact in the vertical format, it comes for free.
When intersecting two tidsets we keep track of the number of mismatches in both lists, i.e., the cases when a
tid occurs in one list but not in the other. Let m(X_1) and m(X_2) denote the number of mismatches in the tidsets for
itemsets X_1 and X_2. There are four cases to consider:
1) m(X_1) = 0 and m(X_2) = 0: then t(X_1) = t(X_2) (Property 1);
2) m(X_1) = 0 and m(X_2) > 0: then t(X_1) ⊂ t(X_2) (Property 2);
3) m(X_1) > 0 and m(X_2) = 0: then t(X_1) ⊃ t(X_2) (Property 3);
4) m(X_1) > 0 and m(X_2) > 0: then t(X_1) and t(X_2) are incomparable (Property 4).
For t(A) and t(D) from above, m(A) = 2 and m(D) = 2, and as we can see, t(A) ≠ t(D). Next consider t(A) = 1345 and t(W) = 12345;
here m(A) = 0 and m(W) = 1, which shows that t(A) ⊂ t(W). Thus
CHARM performs support, subset, equality, and inequality testing simultaneously while computing the intersection
itself.
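The simultaneous test is a standard merge of two sorted tid lists with two mismatch counters; a sketch with illustrative names follows.

def intersect_and_compare(t1, t2):
    # Intersect two sorted tid lists, counting mismatches on either side.
    i = j = m1 = m2 = 0
    out = []
    while i < len(t1) and j < len(t2):
        if t1[i] == t2[j]:
            out.append(t1[i]); i += 1; j += 1
        elif t1[i] < t2[j]:
            m1 += 1; i += 1                      # tid only in t1
        else:
            m2 += 1; j += 1                      # tid only in t2
    m1 += len(t1) - i
    m2 += len(t2) - j
    if m1 == 0 and m2 == 0: relation = 'equal'           # Property 1
    elif m1 == 0:           relation = 'subset'          # t1 subset of t2 (Property 2)
    elif m2 == 0:           relation = 'superset'        # t1 superset of t2 (Property 3)
    else:                   relation = 'incomparable'    # Property 4
    return out, relation

print(intersect_and_compare([1, 3, 4, 5], [1, 2, 3, 4, 5]))   # ([1, 3, 4, 5], 'subset')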
Eliminating Non-Closed Itemsets Here we describe a fast method to avoid adding non-closed itemsets to the set
of closed frequent itemsets C in Line 9. If we are adding a set X, we have to make sure that there does not exist a set
Y ∈ C such that X ⊂ Y and both have the same support (MaxMiner [2] faces a similar problem while eliminating
non-maximal itemsets).
Clearly we want to avoid comparing X with all existing elements in C, for this would lead to O(|C|^2) complexity.
The solution is to store C in a hash table. But what hash function to use? Since we want to perform subset checking,
we can't hash on the itemset. We could use the support of the itemsets for the hash function. But many unrelated
subsets may have the same support.
CHARM uses the sum of the tids in the tidset as the hash function, i.e., h(X) = the sum of all T ∈ t(X). This reduces the
This reduces the
chances of unrelated itemsets being in the same cell. Each hash table cell is a linked list sorted by support as primary
key and the itemset as the secondary key (i.e., lexical). Before adding X to C, we hash to the cell, and check if X is
a subset of only those itemsets with the same support as X. We found experimentally that this approach adds only a
few seconds of additional processing time to the total execution time.
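A sketch of this hashing scheme is shown below; a plain dictionary of lists stands in for the hash table with sorted buckets, and the names are illustrative.

from collections import defaultdict

class ClosedSetStore:
    # Store closed itemsets hashed on the sum of their tids.
    def __init__(self):
        self.table = defaultdict(list)           # sum(tids) -> list of (itemset, tidset)

    def subsumed(self, itemset, tidset):
        # True if a superset with the same support (same tidset size) is already stored.
        for other, other_tids in self.table[sum(tidset)]:
            if len(other_tids) == len(tidset) and itemset < other:
                return True
        return False

    def add(self, itemset, tidset):
        if not self.subsumed(itemset, tidset):
            self.table[sum(tidset)].append((itemset, tidset))

store = ClosedSetStore()
store.add(frozenset('ACTW'), frozenset({1, 3, 5}))
store.add(frozenset('CTW'), frozenset({1, 3, 5}))               # rejected: subsumed by ACTW x 135
print(store.subsumed(frozenset('CTW'), frozenset({1, 3, 5})))   # True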
Optimized Initialization There is only one significant departure from the pseudo-code in Figure 11. Note that if we
initialize the Nodes set in Line 1 with all frequent items, and invoke CHARM-EXTEND, then, in the worst case, we
might perform n(n-1)/2 tidset intersections, where n is the number of frequent items. If l is the average tidset size
in bytes, the amount of data read is l · n · (n-1)/2 bytes. Contrast this with the horizontal approach that reads only
l · n bytes.
It is well known that many itemsets of length 2 turn out to be infrequent, thus it is clearly wasteful to perform
all these intersections. To solve this performance problem we first compute the set of frequent itemsets of length 2, and
then we add a simple check in Line 5, so that we combine two items I_i and I_j only if I_i ∪ I_j is known to be frequent.
The number of intersections performed after this check is equal to the number of frequent pairs, which is in practice
closer to O(n) than to O(n^2). Further, this check has to be done only initially for single items, and not in later
stages.
We now describe how we compute the frequent itemsets of length 2 using the vertical format. As noted above we
clearly cannot perform all intersections between pairs of frequent items.
The solution is to perform a vertical to horizontal transformation on-the-fly. For each item I, we scan its tidset
into memory. We insert item I in an array indexed by tid, for each T ∈ t(I). For example, consider the tidset for item
A, given as t(A) = 1345. We read the first tid, 1, and insert A in the array at index 1. We also insert A
at indices 3, 4 and 5. We repeat this process for all other items and their tidsets. Figure 12 shows how the inversion
process works after the addition of each item and the complete horizontal database recovered from the vertical tidsets
for each item. Given the recovered horizontal database it is straightforward to update the count of pairs of items using
an upper triangular 2D array.
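The inversion and the pair counting can be sketched as follows, treating the whole database as a single in-memory block and using a dictionary in place of the upper-triangular array (names are illustrative).

from collections import defaultdict
from itertools import combinations

def frequent_pairs(tidsets, minsup):
    # Recover horizontal records from vertical tidsets, then count item pairs.
    horizontal = defaultdict(list)               # tid -> list of items (the recovered rows)
    for item, tids in tidsets.items():
        for tid in tids:
            horizontal[tid].append(item)
    counts = defaultdict(int)
    for items in horizontal.values():
        for a, b in combinations(sorted(items), 2):
            counts[(a, b)] += 1
    return {pair: c for pair, c in counts.items() if c >= minsup}

tidsets = {'A': {1, 3, 4, 5}, 'C': {1, 2, 3, 4, 5, 6}, 'D': {2, 4, 5, 6},
           'T': {1, 3, 5, 6}, 'W': {1, 2, 3, 4, 5}}
print(frequent_pairs(tidsets, 3))
# frequent pairs: AC:4, AT:3, AW:4, CD:4, CT:4, CW:5, DW:3, TW:3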
Figure 12: Vertical-to-Horizontal Database Recovery
Memory Management For initialization CHARM scans the database once to compute the frequent pairs of items
(note that finding the frequent items is virtually free in the vertical format; we can calculate the support directly from
an index array that stores the tidset offsets for each item. If this index is not available, computing the frequent items
will take an additional scan). Then, while processing each initial branch in the search lattice it needs to scan single item
tidsets from disk for each unpruned sibling. CHARM is fully scalable for large-scale database mining. It implements
appropriate memory management in all phases as described next.
For example, while recovering the horizontal database, the entire database will clearly not fit in memory. CHARM
handles this by only recovering a block of transactions at one time that will fit in memory. Support of item pairs is
updated by incrementally processing each recovered block. Note that regardless of the number of blocks, this process
requires exactly one database scan over the vertical format (imagine k pointers, one for each of the k tidsets; a pointer only
moves forward if the tid it points to belongs to the current block).
When the number of closed itemsets itself becomes very large, we cannot hope to keep the set of all closed
itemsets C in memory. In this case, the elimination of some non-closed itemsets is done off-line in a post-processing
step. Instead of inserting X in C in Line 9, we simply write it to disk along with its support and hash value. In the
post-processing step, we read all close itemsets and apply the same hash table searching approach described above to
eliminate non-closed itemsets.
Since CHARM processes each branch in the search in a depth-first fashion, its memory requirements are not
substantial. It has to retain all the itemset-tidset pairs on the levels of the current left-most branches in the search
space. Consider Figure 7 for example. Initially it has to retain the tidsets for {AC, AD, AT, AW}, {ACD, ACT, ACW},
{ACDT, ACDW}, and {ACDTW}. Once AC has been processed, the memory requirement shrinks to {AD, AT, AW},
{ADT, ADW}, and {ADTW}. In any case this is the worst possible situation. In practice the applications of
subset infrequency and the non-closure Properties 1, 2, and 3 prune many branches in the search lattice.
For cases where even the memory requirements of depth-first search exceed available memory, it is straightforward
to modify CHARM to write temporary tidsets to disk. For example, while processing the AC branch, we might have
to write out the tidsets for fAD;AT ; AWg to disk. Another option is to simply re-compute the intersections if writing
temporary results is too expensive.
4.3 Correctness and Efficiency
Theorem 3 (correctness) The CHARM algorithm enumerates all closed frequent itemsets.
PROOF: CHARM correctly identifies all and only the closed frequent itemsets, since its search is based on a complete
subset lattice search. The only branches that are pruned are those that either do not have sufficient support, or those
that result in non-closure based on the properties of itemset-tidset pairs as outlined at the beginning of this section.
Finally CHARM eliminates the few cases of non-closed itemsets that might be generated by performing subsumption
checking before inserting anything in the set of all closed frequent itemsets C.
Theorem 4 (computational cost) The running time of CHARM is O(l · |C|), where l is the average tidset length, and
C is the set of all closed frequent itemsets.
PROOF: Note that starting with the single items and their associated tidsets, as we process a branch the following cases
might occur. Let X_c denote the current branch and X_s the sibling we are trying to combine it with. We prune the X_s
branch if t(X_c) = t(X_s) (property 1). We extend X_c to become X_c ∪ X_s (property 2). Finally a new node is only generated
if we get a new possibly closed set due to properties 3 and 4. Also note that each new node in fact represents a closed
tidset, and thus indirectly represents a closed itemset, since there exists a unique closed itemset for each closed tidset.
Thus CHARM performs on the order of O(|C|) intersections (we confirm this via experiments in Section 6; the only
extra intersections performed are due to the case where CHARM may produce non-closed itemsets like CTW × 135,
which are eliminated in Line 9). If each tidset is on average of length l, an intersection costs at most 2 · l. The total
running time of CHARM is thus 2 · l · |C|, or O(l · |C|).
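For concreteness, the 2·l bound on a single intersection corresponds to a standard linear merge of two sorted tidsets, as in the following sketch (illustrative code, not the authors' implementation):

    #include <cstdint>
    #include <vector>

    // Intersect two sorted tidsets by a linear merge; with both inputs of length
    // at most l, the loop performs at most 2*l comparisons.
    std::vector<uint32_t> intersect(const std::vector<uint32_t>& a,
                                    const std::vector<uint32_t>& b)
    {
        std::vector<uint32_t> out;
        size_t i = 0, j = 0;
        while (i < a.size() && j < b.size()) {
            if (a[i] < b[j])      ++i;
            else if (b[j] < a[i]) ++j;
            else { out.push_back(a[i]); ++i; ++j; }
        }
        return out;
    }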
Theorem 5 (I/O cost) The number of database scans made by CHARM is given as O(|C| / (α · |I|)), where C is the set of all
closed frequent itemsets, I is the set of items, and α is the fraction of the database that fits in memory.
PROOF: The number of database scans required is given as the total memory consumption of the algorithm divided
by the fraction of the database that will fit in memory. Since CHARM computes on the order of O(|C|) intersections, the
total memory requirement of CHARM is O(l · |C|), where l is the average length of a tidset. Note that as we perform
intersections the size of longer itemsets' tidsets shrinks rapidly, but we ignore such effects in our analysis (it is thus
a pessimistic bound). The total database size is l · |I|, and the amount that fits in memory is given as α · l · |I|. The
number of database scans is then given as (l · |C|) / (α · l · |I|) = |C| / (α · |I|).
Note that in the worst case |C| can be exponential in |I|, but this is rarely the case in practice. We will show in
the experiments section that CHARM makes very few database scans when compared to the longest closed frequent
itemset found.
5 Related Work
A number of algorithms for mining frequent itemsets [1, 2, 3, 9, 10, 13, 16, 19] have been proposed in the past.
Apriori [1] was the first efficient and scalable method for mining associations. It starts by counting frequent items,
and during each subsequent pass it extends the current set of frequent itemsets by one more item, until no more
frequent itemsets are found. Since it uses a pure bottom-up search over the subset lattice (see Figure 7), it generates
all 2 l subsets of a frequent itemset of length l. Other methods including DHP [13], Partition [16], AS-CPA [10], and
DIC [3], propose enhancements over Apriori in terms of the number of candidates counted or the number of data
scans. But they still have to generate all subsets of a frequent itemset. This is simply not feasible (except for very high
support) for the kinds of dense datasets we examine in this paper. We use Apriori as a representative of this class of
methods in our experiments.
Methods for finding the maximal frequent itemsets include All-MFS [8], which is a randomized algorithm, and as
such not guaranteed to be complete. Pincer-Search [9] not only constructs the candidates in a bottom-up manner like
Apriori, but also starts a top-down search at the same time. Our previous algorithms (Max)Eclat and (Max)Clique [19,
17] range from those that generate all frequent itemsets to those that generate a few long frequent itemsets and other
subsets. MaxMiner [2] is another algorithm for finding the maximal elements. It uses novel superset frequency
pruning and support lower-bounding techniques to quickly narrow the search space. Since these methods mine only
the maximal frequent itemsets, they cannot be used to generate all possible association rules, which requires the
support of all subsets in the traditional approach. If we try to compute the support of all subsets of the maximal
frequent itemsets, we again run into the problem of generating all 2^l subsets for an itemset of length l. For dense
datasets this is impractical. Using MaxMiner as a representative of this class of algorithms we show that modifying it
to compute closed itemsets renders it infeasible for all except very high supports.
Figure 13: AClose Algorithm: Example. (Panels: Find Generators; Compute Closures. Candidate pairs shown include AC, AD, AT, CD, CT, and others.)
AClose [14] is an Apriori-like algorithm that directly mines closed frequent itemsets. There are two main steps in
AClose. The first is to use a bottom-up search to identify generators, the smallest frequent itemsets that determines
a closed itemset via the closure operator c_it. For example, in our example database, c_it(A) = c_it(AC) =
c_it(AW) = ACW, but only A is a generator for ACW. All generators are found using a simple modification of
Apriori. Each time a new candidate set is generated, AClose computes their support, pruning all infrequent ones. For
the remaining sets, it compares the support of each frequent itemset with each of its subsets at the previous level. If the
support of an itemset matches the support of any of its subsets, the itemset cannot be a generator and is thus pruned.
This process is repeated until no more generators can be produced.
The second step in AClose is to compute the closure of all the generators found in the first step. To compute
the closure of an itemset we have to perform an intersection of all transactions where it occurs as a subset, i.e., the
closure of an itemset X is given as c_it(X) = ∩_{t ∈ t(X)} i(t), where t is a tid and i(t) denotes the itemset of transaction t. The closures for all generators can be
computed in just one database scan, provided all generators fit in memory. Nevertheless computing closures this way
is an expensive operation.
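The one-scan closure computation can be sketched as follows (illustrative code with hypothetical names, not AClose's implementation): each transaction intersects the running closure of every generator it contains.

    #include <algorithm>
    #include <cstdint>
    #include <iterator>
    #include <vector>

    // closure[g] becomes the intersection of all transactions containing generator g.
    std::vector<std::vector<uint32_t>> closures(
        const std::vector<std::vector<uint32_t>>& generators,   // each sorted
        const std::vector<std::vector<uint32_t>>& transactions) // each sorted
    {
        std::vector<std::vector<uint32_t>> closure(generators.size());
        std::vector<bool> started(generators.size(), false);
        for (const auto& t : transactions) {
            for (size_t g = 0; g < generators.size(); ++g) {
                if (!std::includes(t.begin(), t.end(),
                                   generators[g].begin(), generators[g].end()))
                    continue;                                    // generator not contained in t
                if (!started[g]) { closure[g] = t; started[g] = true; }
                else {                                           // closure[g] := closure[g] ∩ t
                    std::vector<uint32_t> tmp;
                    std::set_intersection(closure[g].begin(), closure[g].end(),
                                          t.begin(), t.end(), std::back_inserter(tmp));
                    closure[g].swap(tmp);
                }
            }
        }
        return closure;
    }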
Figure
13 shows the working of AClose on our example database. After generating candidate pairs of items, it is
determined that AD and DT are not frequent, so they are pruned. The remaining frequent pairs are pruned if their
support matches the support of any of their subsets. AC;AW are pruned, since their support is equal to the support of
A. CD is pruned because of D, CT because of T , and CW because of W . After this pruning, we find that no more
candidates can be generated, marking the end of the first step. In the second step, AClose computes the closure of all
unpruned itemsets. Finally some duplicate closures are removed (e.g., both AT and TW produce the same closure).
We will show that while AClose is much better than Apriori, it is uncompetitive with CHARM.
A number of previous algorithms have been proposed for generating the Galois lattice of concepts [5, 6]. These
algorithms will have to be adapted to enumerate only the frequent concepts. Further, they have only been studied on
very small datasets. Finally the problem of generating a basis (a minimal non-redundant rule set) for association rules
was discussed in [18] (but no algorithms were given), which in turn is based on the theory developed in [7, 5, 11].
6 Experimental Evaluation
We chose several real and synthetic datasets for testing the performance of CHARM. The real datasets are the same
as those used in MaxMiner [2]. All datasets except the PUMS (pumsb and pumsb*) sets, are taken from the UC
Irvine Machine Learning Database Repository. The PUMS datasets contain census data. pumsb* is the same as
pumsb without items with 80% or more support. The mushroom database contains characteristics of various species
of mushrooms. Finally the connect and chess datasets are derived from their respective game steps. Typically, these
real datasets are very dense, i.e., they produce many long frequent itemsets even for very high values of support. These
datasets are publicly available from IBM Almaden (www.almaden.ibm.com/cs/quest/demos.html).
We also chose a few synthetic datasets (also available from IBM Almaden), which have been used as benchmarks
for testing previous association mining algorithms. These datasets mimic the transactions in a retailing environment.
Usually the synthetic datasets are sparse when compared to the real sets, but we modified the generator to produce
longer frequent itemsets.
Database    # Items    Avg. Record Length    # Records    Scaleup DB Size
chess           76            37                 3,196          31,960
connect        130            43                67,557         675,570
mushroom       120            23                 8,124          81,240
pumsb*        7117            50                49,046         490,460
pumsb         7117            74                49,046         490,460
Table 1: Database Characteristics
Table 1 shows the characteristics of the real and synthetic datasets used in our evaluation. It shows the number
of items, the average transaction length and the number of transactions in each database. It also shows the number
of records used for the scaleup experiments below. As one can see the average transaction size for these databases is
much longer than conventionally used in previous literature.
All experiments described below were performed on a 400MHz Pentium PC with 256MB of memory, running
RedHat Linux 6.0. Algorithms were coded in C++.
6.1 Effect of Branch Ordering
Figure
14 shows the effect on running time if we use various kinds of branch orderings in CHARM. We compare
three ordering methods - lexicographical order, increasing by support, and decreasing by support. We observe that
decreasing order is the worst. On the other hand processing branch itemsets in increasing order is the best; it is about a
factor of 1.5 times better than lexicographic order and about 2 times better than decreasing order. Similar results were
obtained for synthetic datasets. All results for CHARM reported below use the increasing branch ordering, since it is
the best.
Figure 14: Branch Ordering. (Panels: chess, connect, mushroom, pumsb, pumsb*; y-axis: time per closed itemset (sec); x-axis: minimum support (%); curves: Decreasing, Lexicographic, Increasing.)
Figure 15: Set Cardinality. (Panels: chess, connect, mushroom, pumsb, pumsb*; y-axis: number of elements; x-axis: minimum support (%); curves: Frequent, Closed, Maximal.)
Figure 16: Longest Frequent Itemset vs. Database Scans (DBI = increasing order, DBL = lexical order). (Panels: chess, connect, mushroom, pumsb, pumsb*; x-axis: minimum support (%); curves: longest frequent itemset (LF) and database scans under the two orderings.)
Figure 17: CHARM versus Apriori, AClose, CMaxMiner and MaxMiner. (Panels: chess, connect, mushroom, pumsb, pumsb* and three synthetic datasets; y-axis: total time (sec); x-axis: minimum support (%).)
6.2 Number of Frequent, Closed, and Maximal Itemsets
Figure
15 shows the total number of frequent, closed and maximal itemsets found for various support values. It should
be noted that the maximal frequent itemsets are a subset of the closed frequent itemsets (the maximal frequent itemsets
must be closed, since by definition they cannot be extended by another item to yield a frequent itemset). The closed
frequent itemsets are, of course, a subset of all frequent itemsets. Depending on the support value used the set of
maximal itemsets is about an order of magnitude smaller than the set of closed itemsets, which in turn is an order of
magnitude smaller than the set of all frequent itemsets. Even for very low support values we find that the difference
between maximal and closed remains around a factor of 10. However the gap between closed and all frequent itemsets
grows more rapidly. For example, for mushroom at 10% support, the gap was a factor of 100; there are 558 maximal,
4897 closed and 574513 frequent itemsets.
6.3 CHARM versus MaxMiner, AClose, and Apriori
Here we compare the performance of CHARM against previous algorithms. MaxMiner only mines maximal frequent
itemsets, thus we augmented it by adding a post-processing routine that uses the maximal frequent itemsets to generate
all closed frequent itemsets. In essence we generate all subsets of the maximal itemsets, eliminating an itemset if its
support equals any of its subsets. The augmented algorithm is called CMaxMiner. The AClose method is the only
extant method that directly mines closed frequent itemsets. Finally Apriori mines only the frequent itemsets. It would
require a post-processing step to compute the closed itemsets, but we do not add this cost to its running time.
Figure
17 shows how CHARM compares to the previous methods on all the real and synthetic databases. We find
that Apriori cannot be run except for very high values of support. Even in these cases CHARM is 2 or 3 orders of
magnitude better. Generating all subsets of frequent itemsets clearly takes too much time.
AClose can perform an order of magnitude better than Apriori for low support values, but for high support values
it can in fact be worse than Apriori. This is because for high support the number of frequent itemsets is not too large,
and the closure computing step of AClose dominates computation time. Like Apriori, AClose couldn't be run for very
low values of support. The generator finding step finds too many generators to be kept in memory.
CMaxMiner, the augmented version of MaxMiner, suffers a similar fate. Generating all subsets and testing them
for closure is not a feasible strategy. CMaxMiner cannot be run for low supports, and for the cases where it can be run,
it is 1 to 2 orders of magnitude slower than CHARM.
Only MaxMiner was able to run for all the values of support that CHARM can handle. Except for high support
values, where CHARM is better, MaxMiner can be up to an order of magnitude faster than CHARM, and is typically
a factor of 5 or 6 times better. The difference is attributable to the fact that the set of maximal frequent itemsets is
typically an order of magnitude smaller than the set of closed frequent itemsets. But it should be noted that, since
MaxMiner only mines maximal itemsets, it cannot be used to produce association rules. In fact, any attempt to
calculate subset frequency adds a lot of overhead, as we saw in the case of CMaxMiner.
These experiments demonstrate that CHARM is extremely effective in efficiently mining all the closed frequent
itemsets, and is able to gracefully handle very low support values, even in dense datasets.
6.4 Scaling Properties of CHARM
Figure 18 shows the time taken by CHARM per closed frequent itemset found. The support values are the same as the
ones used while comparing CHARM with other methods above. As we lower the support more closed itemsets are
found, but the time spent per element decreases, indicating that the efficiency of CHARM increases with decreasing
support.
Figure
19 shows the number of tidset intersections performed per closed frequent itemset generated. The ideal case
in the graph corresponds to the case where we perform exactly the same number of intersections as there are closed
frequent itemsets, i.e., a ratio of one. We find that for both connect and chess the number of intersections performed
by CHARM are close to ideal. CHARM is within a factor of 1.06 (for chess) to 2.6 (for mushroom) times the ideal.
This confirms the computational efficiency claims we made before. CHARM indeed performs O(|C|) intersections.
Figure 16 shows the number of database scans made by CHARM compared to the length of the longest closed
frequent itemset found for the real datasets. The number of database scans for CHARM was calculated by taking
the sum of the lengths of all tidsets scanned from disks, and then dividing the sum by the tidset lengths for all items
in the database. The number reported is pessimistic in the sense that we incremented the sum even though we may
have space in memory or we may have scanned the tidset before (and it has not been evicted from memory). This
effect is particularly felt for the case where we reorder the itemsets according to increasing support. In this case, the
most frequent itemset ends up contributing to the sum multiple times, even though its tidset may already be cached
(in memory). For this reason, we also show the number of database scans for the lexical ordering, which are much
lower than those for the sorted case. Even with these pessimistic estimates, we find that CHARM makes a lot fewer
database scans than the longest frequent itemset. Using lexical ordering, we find, for example on pumsb*, that the
longest closed itemset is of length 13, but CHARM makes only 3 database scans.
Figure 18: Time per Closed Frequent Itemset. (Real datasets; y-axis: time per closed itemset (sec); x-axis: number of closed frequent itemsets, in 10,000's.)
Figure 19: Number of Intersections per Closed Itemset. (Real datasets plus the ideal ratio of one; x-axis: number of closed frequent itemsets, in 10,000's.)
Figure 20: Size Scaleup on Synthetic Datasets. (y-axis: total time (sec); x-axis: number of transactions, in 100,000's.)
Figure 21: Size Scaleup on Real Datasets. (y-axis: total time (sec); x-axis: replication factor.)
Finally in Figures 20 and 21 we show how CHARM scales with increasing number of transactions. For the
synthetic datasets we kept all database parameters constant, and increased the number of transactions from 100K to
1600K. We find a linear increase in time. For the real datasets we replicated the transactions from 2 to 10 times. We
again find a linear increase in running time with increasing number of transactions.
7 Conclusions
In this paper we presented and evaluated CHARM, an efficient algorithm for mining closed frequent itemsets in large
dense databases. CHARM is unique in that it simultaneously explores both the itemset space and tidset space, unlike
all previous association mining methods which only exploit the itemset space. The exploration of both the itemset
and tidset space allows CHARM to use a novel search method that skips many levels to quickly identify the closed
frequent itemsets, instead of having to enumerate many non-closed subsets.
An extensive set of experiments confirms that CHARM provides orders of magnitude improvement over existing
methods for mining closed itemsets. It makes far fewer database scans than the length of the longest closed frequent itemset found,
it scales linearly in the number of transactions, and it is also linear in the number of closed itemsets found.
Acknowledgement
We would like to thank Roberto Bayardo for providing us the MaxMiner algorithm, as well as the real datasets used
in this paper.
References
Fast discovery of association rules.
Efficiently mining long patterns from databases.
Dynamic itemset counting and implication rules for market basket data.
Introduction to Lattices and Order.
Formal Concept Analysis: Mathematical Foundations.
Incremental concept formation algorithms based on Galois (concept) lattices.
Familles minimales d'implications informatives resultant d'un tableau de donnees binaires.
Discovering all the most specific sentences by randomized algorithms.
A new algorithm for discovering the maximum frequent set.
Mining association rules: Anti-skew algorithms
Implications partielles dans un contexte.
An effective hash based algorithm for mining association rules.
Discovering frequent closed itemsets for association rules.
Integrating association rule mining with databases: alternatives and implications.
An efficient algorithm for mining association rules in large databases.
Scalable algorithms for association mining.
Theoretical foundations of association rules.
New algorithms for fast discovery of association rules.
Keywords: closure systems; algorithms; formal concept analysis; knowledge discovery; lattices; database analysis
606523
Action graphs and coverings
An action graph is a combinatorial representation of a group acting on a set. Comparing two group actions by an epimorphism of actions induces a covering projection of the respective graphs. This simple observation generalizes and unifies many well-known results in graph theory, with applications ranging from the theory of maps on surfaces and group presentations to theoretical computer science, among others. Reconstruction of action graphs from smaller ones is considered, some results on lifting and projecting the equivariant group of automorphisms are proved, and a special case of the split-extension structure of lifted groups is studied. Action digraphs in connection with group presentations are also discussed.
1 Introduction
With a group G acting on a set Z we can naturally associate, relative to a subset S ⊆ G, a
(di)graph called the action (di)graph. Its vertices are the elements
of the set Z, with adjacencies being induced by the action of the elements of S on Z.
The definition adopted here is such that a connected action (di)graph corresponds
to a Schreier coset (di)graph, with "repeated generators" and semiedges allowed.
However, to think of an action (di)graph actually as a Schreier coset (di)graph is
much too rigid in many instances. For similar concepts dealing with (di)graphs and
group actions see [1, 2, 3, 8, 12, 17, 18, 21, 22, 28, 43, 45]. Some of them, although
conceptually different, bear the same name [45], and some of them, quite close to
our definition, are referred to by a variety of other names [1, 3]. It seems that
the term action (di)graph should be attributed to T. Parsons [43]. For a computer
implementation of (a variant of) action graphs see [44].
Supported in part by "Ministrstvo za znanost in tehnologijo Slovenije", proj. no. J1-0496-99.
Group actions are compared by morphisms. The starting observation of this
paper is that an epimorphism between two actions invokes a covering projection of
the respective action graphs. Surprisingly enough, this simple result does not seem
to have been explicitly stated so far, although there are many well-known special
cases with numerous applications.
For instance, it is generally known that Schreier coset (di)graphs are actually
covering (di)graphs, and that a Schreier coset (di)graph is regularly covered by its
corresponding Cayley (di)graph. These facts are commonly used as background
results in the theory of group presentations [12, 13, 28, 31, 32, 14, 48, 52], and have
recently been applied in the design and analysis of interconnection networks and
parallel architectures [1, 2, 3, 22], among others. Coverings of Cayley graphs are
frequently employed to construct new graphs with various types of symmetry and
other graph-theoretical properties [9, 19, 34, 47] as well as to prove that a subgroup
cannot have genus greater than the group itself [7, 19]. As for the maps on surfaces
[4, 5, 10, 16, 19, 20, 25, 26, 27, 30, 35, 36, 40, 41, 46], one of the many combinatorial
approaches to this topic is by means of a Schreier representation [25, 40]. Such a
representation is actually a certain action graph in disguise, and homomorphisms
of maps correspond to covering projections of the respective action graphs. Some
important basic facts can be elegantly derived along these lines.
We here give a unified approach to all these diverse topics, and in addition, we
derive certain results which appear to be new. Section 2 is preliminary. Action
graphs are introduced formally in Section 3, and covering projections induced by
morphisms of actions in Section 4. Further basic properties of such coverings are
discussed in Sections 5 and 6. In Section 7 we brie
y consider automorphism groups
of action graphs. Section 8 is devoted to lifting and projecting automorphisms, with
focus on the equivariant group. In Section 9 we determine the group of covering
transformations, and apply some results of [35] to obtain conditions for a natural
splitting of a lifted group of map automorphisms (valid also if the map homomorphism
is not valency preserving [35, 36]). In Section 10 we treat action digraphs in
connection with group presentations. The lifting problem along a regular covering
(of graphs as well as of general topological spaces) is reduced to a question about
action digraphs.
2 Preliminaries: Group actions, Graphs and Coverings
By an ordered pair (Z; G) we denote a group G acting on the right on a nonempty
set Z. (For convenience we omit the dot sign indicating the action.) A morphism
of actions (Z, G) → (Z′, G′) is an ordered pair (φ, f), where φ : Z → Z′ is a function and f : G → G′ is a homomorphism such that φ(u g) = φ(u) f(g) for all u ∈ Z and g ∈ G.
Morphisms are composed on the left. Left actions and their morphisms are defined
similarly. Morphisms of the form (φ, id) : (Z, G) → (Z′, G) are called equivariant,
and morphisms of the form (id, q) : (Z, G) → (Z, G′) are called invariant. Invariant
epimorphisms formalize the intuitive notion of "groups, acting in the same way on
a given set".
We say that an action (Z̃, G̃) covers an action (Z, G) whenever there exists
an epimorphism (φ, q) : (Z̃, G̃) → (Z, G). This terminology is justified by the
fact that the cardinality |φ^{-1}(z)| depends just on the orbit of G to which z ∈ Z
belongs. A covering of actions (φ, q) : (Z̃, G̃) → (Z, G) can be decomposed into
an equivariant covering (φ, id) : (Z̃, G̃) → (Z, G̃) followed by an invariant covering
(id, q) : (Z, G̃) → (Z, G), where the action of G̃ on Z is defined by z g̃ := φ(z̃ g̃), z̃ ∈ φ^{-1}(z).
Proposition 2.1 There exists a covering (φ, q) : (Z̃, G̃) → (Z, G) of transitive actions if
and only if there exists, for a fixed chosen b̃ ∈ Z̃ and b ∈ Z, a group epimorphism q : G̃ → G
such that q(G̃_b̃) ≤ G_b. The corresponding onto mapping of sets is
then given by φ_{b̃,b}(b̃ g̃) := b q(g̃). In particular, two transitive actions are isomorphic
if and only if there exists an isomorphism between the respective groups mapping a
stabilizer onto a stabilizer.
Example 2.2 Let H ≤ H′ ≤ G and K ⊴ G. The group G acts by right multiplication
on the set of right cosets H|G. Similarly, the quotient group G/K acts on the
set of right cosets H′K|G. There is an obvious covering of actions (H|G, G) → (H′K|G, G/K).
In particular, the regular action (G, G)_r of G on itself by right
multiplication covers any transitive action of G/K.
Example 2.3 There is an equivariant isomorphism representing a transitive action
of a group G as an action on the cosets of a stabilizer. Moreover, all transitive and
faithful quotient actions of G can be treated in a similar fashion.
Indeed, a conjugacy class C of subgroups in G determines the action of G=core(C)
on the cosets of an element of C. This action is transitive and faithful. Conversely,
let q : G → Q be a group epimorphism and let (Z, Q) be transitive and faithful.
Define the action (Z, G) by z g := z q(g), so that (id, q) : (Z, G) → (Z, Q) is an invariant covering,
and let G^Q_b be a stabilizer of (Z, G). Let (G^Q_b|G, G) be the
standard representation of (Z, G). Then (G^Q_b|G, Q) is the standard
representation of (Z, Q). It follows that (Z, Q) determines a conjugacy class C_Q in
G with core(C_Q) = Ker q. Thus, the isomorphism classes of transitive and faithful
actions of quotient groups of G are in natural correspondence with conjugacy classes
of subgroups in G. Moreover, a covering (Z, Q) → (Z′, Q′) of such actions, with the corresponding mapping of sets
satisfying the morphism condition, exists if and only if G^Q_b is contained in a conjugate subgroup of G^{Q′}_{b′}. See
Examples 4.7 and 4.8 for an application.
By Aut(Z, G) we denote the automorphism group of (Z, G). An automorphism f
of G is called admissible whenever there exists a bijection φ of Z such that (φ, f) ∈ Aut(Z, G). The group of admissible automorphisms is denoted by Adm_Z G.
By Aut(Z)_G we denote the equivariant group of the action, formed by all bijections
φ of Z for which (φ, id) ∈ Aut(Z, G). If (Z, G) is transitive, then Aut(Z)_G can be
computed explicitly relative to a point of reference b ∈ Z as Aut(Z)_G = {φ_n | n ∈ N_G(G_b)}, where φ_n(b g) := b n g; in particular, Aut(Z)_G ≅ N_G(G_b)/G_b. Also, the left action of Aut(Z)_G on Z is
fixed-point free, and is transitive if and only if G acts with a normal stabilizer. Hence
if G is, in addition, faithful, then Aut(Z)_G is regular if and only if G is regular. In
this case, (Z, G) is essentially the right multiplication (G, G)_r, whereas (Aut(Z)_G, Z)
is essentially the left multiplication (G, G)_l.
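As a small worked illustration of the preceding formula (added here for concreteness): for the natural action of $S_3$ on $Z=\{1,2,3\}$, and for any regular action, one obtains
\[
\mathrm{Aut}(Z)_{S_3}\;\cong\; N_{S_3}(\langle(2\,3)\rangle)/\langle(2\,3)\rangle \;=\;1,
\qquad
\mathrm{Aut}(G)_{G}\;\cong\; N_G(1)/1\;=\;G,
\]
the latter acting on $G$ by left translations.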
A graph is an ordered 4-tuple X = (D, V; beg, inv), where D and V are disjoint
nonempty sets of darts and vertices, respectively, beg : D → V is an onto mapping
which assigns to each dart x its initial vertex beg x, and inv : D → D is an involution
which interchanges every dart x and its inverse x^{-1} := inv x. For notational
convenience we use beg and inv just as symbolic names denoting the actual concrete
functions. The terminal vertex end x of a dart x is the initial vertex of x^{-1}. The
orbits of inv are called edges. An edge is called a semiedge if x^{-1} = x, a loop if x^{-1} ≠ x and end x = beg x,
and it is called a link otherwise. Walks are
defined as sequences of darts in the obvious way. By W and W_u
we denote the set of all walks and the set of all u-based closed walks of a graph X,
respectively. By recursively deleting all consecutive occurrences of a dart and its
inverse in a given walk we obtain its reduction. Two walks with the same reduction
are called homotopic. The naturally induced operation in the set of all reduced u-based
closed walks defines the fundamental group π(X, u). A morphism of graphs f : X → X′ takes darts to darts and vertices to vertices so that initial vertices and inverses are preserved. For convenience we write f for the pair of its restrictions to darts and to vertices, which
are the appropriate restrictions. Graph morphisms are composed on the left.
A graph epimorphism p : X̃ → X is called a covering projection if, for every
vertex ũ of X̃, the set of darts with ũ as the initial vertex is bijectively mapped
onto the set of darts with the initial vertex p(ũ). The graph X is called the base
graph and X̃ the covering graph. By fib_u = p^{-1}(u) and fib_x = p^{-1}(x) we denote the
fibre over the vertex u and the dart x of X, respectively. A morphism of covering
projections p → p′ is an ordered pair (f, f̃) of graph morphisms f : X → X′ and f̃ : X̃ → X̃′ such that p′ f̃ = f p. An equivalence of covering projections p and
p′ of the same base graph is a morphism of the form (id, f̃), where f̃ is a graph
isomorphism. Equivalence of covering projections defined on the same covering
graph is defined similarly. An automorphism of p : X̃ → X is of course a pair of
automorphisms (f̃, f) such that p f̃ = f p. The automorphism f̃ is called a lift of f,
and f the projection of f̃. In particular, all lifts of the identity automorphism form
the group CT(p) of covering transformations. If the covering graph (and hence the
base graph) is connected, then CT(p) acts semiregularly on vertices and on darts
of X̃. The covering projection of connected graphs is regular whenever CT(p) acts
regularly on each fibre.
There exists an action of the set of walks W on the vertex-set of X̃, defined
by ũ W := end W̃, where W̃ is the unique lift of W such that beg W̃ = ũ. In
other words, we have p(ũ W) = end W and (ũ W) W^{-1} = ũ. The mapping
ũ ↦ ũ W defines a bijection fib_{beg W} → fib_{end W}. Homotopic walks have the
same action. In particular, W_u and π(X, u) have the same action on fib_u. The walk-action
implies that coverings (of connected graphs) can be studied from a purely
combinatorial point of view [35]. A voltage space (F, Γ; ξ) on a connected graph X = (D, V; beg, inv) is
defined by an action of a voltage group Γ on a set F, called
the abstract fibre, and by an assignment ξ : D → Γ such that ξ(x^{-1}) = ξ(x)^{-1}. This
assignment extends to walks, with homotopic walks carrying
the same voltage. The group Loc_u := ξ(W_u) is called the local group
at the vertex u. As the graph is assumed connected, the local groups at distinct
vertices are conjugate subgroups, and if any of them is transitive we call such a
voltage space locally transitive. With every voltage space (F, Γ; ξ) on a connected
graph X = (D, V; beg, inv) we can associate a covering Cov(F, Γ; ξ) → X. The
graph Cov(F, Γ; ξ) has V × F as the vertex-set and D × F as the dart-set. The incidence function is beg(x, i) = (beg x, i) and the switching involution
inv is given by (x, i)^{-1} = (x^{-1}, i ξ(x)). The covering graph is connected if and
only if the voltage space is locally transitive. In particular, the Cayley voltage space
(Γ, Γ; ξ), where Γ acts on itself by right multiplication, gives rise to a regular covering.
Conversely, each covering of a connected base graph is associated with some voltage
space, and each regular covering is associated with a Cayley voltage space (Γ, Γ; ξ).
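As a quick illustration of these definitions, consider the Cayley voltage space on a monopole (a sketch, not part of the original exposition):
\[
\mathrm{Cov}(\Gamma,\Gamma;\xi)\ \text{over}\ \mathrm{mnp}(S),\quad \xi(s)=s:\qquad
\mathrm{beg}(s,g)=g,\quad (s,g)^{-1}=(s^{-1},gs),\quad \mathrm{end}(s,g)=gs,
\]
so the derived graph is precisely the Cayley graph $\mathrm{Cay}(\Gamma,S)$, in accordance with the fact that Cayley voltage spaces give rise to regular coverings.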
3 Action graphs
Let G be a (nontrivial) group acting (on the right) on a nonempty set Z and let
S ⊆ G be a Cayley set, that is, S = S^{-1}. With the triple (Z, G, S)
we naturally associate the action graph Act(Z, G, S), with Z as the vertex-set and Z × S as the dart-set; the initial vertex of a dart (z, s) is z and its inverse is (z s, s^{-1}). We shall actually need to consider
Cayley multisets, that is, S may have repeated elements (where for each s ∈ S the elements
s and s^{-1} have the same multiplicity). Our definition of a graph must then be
extended accordingly.
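Spelled out for later reference (a small added consequence of the definition):
\[
\mathrm{beg}(z,s)=z,\qquad (z,s)^{-1}=(zs,\,s^{-1}),\qquad \mathrm{end}(z,s)=zs;
\]
thus a dart $(z,s)$ with $zs=z$ is a semiedge when $s^{-1}=s$, and together with $(z,s^{-1})$ it forms a loop at $z$ when $s^{-1}\neq s$; all other edges are links.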
Example 3.1 The action graph of a group G, acting on a one-element set relative
to a Cayley (multi)set S, is called a monopole and denoted by mnp(S).
Example 3.2 The graph Act(HjG; G; S) is the Schreier coset graph Sch(G; H;S).
By taking H = 1 and H = G we get the Cayley graph Cay(G, S) and a monopole,
respectively.
Example 3.3 Let π_1, …, π_k be permutations of a finite set Z. By representing
each of these permutations pictorially in the obvious way we obtain the action graph
for the permutation group ⟨π_1, …, π_k⟩ relative to the symmetrized generating
(multi)set {π_1^{±1}, …, π_k^{±1}}.
Example 3.4 A nite oriented map M is a nite graph, cellularly embedded into
a closed orientable surface endowed with a global orientation. Let D be the dart-
set of the embedded graph, L its dart-reversing involution, and R the local rotation
which cyclicaly permutes the darts in their natural order around vertices consistently
with the global orientation. In studying the combinatorial properties of such a map
we only need to consider the permutation group hR; Li on D (together with the
generating set fR; Lg), and consequently, its Schreier representation. See Jones and
Singerman [25]. Equivalently, we only need to consider the action graph of the group
hR; Li acting on D relative to the Cayley set fR; R 1 ; Lg. This graph, here denoted
by Map(D;R;L) or sometimes by Act(M ), is also known as the truncation of the
map [40].
More generally, maps on all compact surfaces can be viewed combinatorially in
terms of certain permutation groups and their generators, as shown by Bryant and
Singerman [10]. For closed surfaces, the associated action graphs correspond to
graph encoded maps of Lins [30].
A walk W in the action graph Act(Z, G, S) defines a word w(W) over
the alphabet S. Conversely, if w ∈ S* is a word, then W(z, w) denotes the (set of)
walk(s) starting at z ∈ Z and determined by w. The fact that in the
case of repeated generators there is no bijective correspondence between words over
S and walks, rooted at a chosen vertex, is a minor technical di-culty which we
usually (but not always) can ignore. The action graph is connected if and only if
S generates a transitive subgroup of G. Without loss of generality we can in most
cases assume that the action is transitive and that the Cayley (multi)set generates
the group.
4 Morphisms arising from coverings of actions
A mapping φ^+ = φ + φ_S : Z̃ + Z̃ × S̃ → Z + Z × S, with φ_S(z̃, s̃) = (φ(z̃), φ_z̃(s̃)),
is a morphism of action graphs Act(Z̃, G̃, S̃) → Act(Z, G, S) if and
only if
φ(z̃ s̃) = φ(z̃) φ_z̃(s̃),   (1)
where {φ_z̃ : S̃ → S | z̃ ∈ Z̃} is a collection of mappings satisfying
φ_{z̃ s̃}(s̃^{-1}) = (φ_z̃(s̃))^{-1}.   (2)
This follows directly from the definition of a graph morphism. In particular, morphisms
arising naturally from coverings of actions can be viewed as graph covering
projections (by taking the Cayley (multi)sets to correspond bijectively in a natural
way). We state this formally as a theorem, and list some of the well-known special
cases and applications.
Theorem 4.1 Let (φ, q) : (Z̃, G̃) → (Z, G) be a covering of actions and let S̃ ⊆ G̃ be
a Cayley (multi)set. Consider S = q(S̃) as a multiset in a bijective correspondence
with S̃. Then the induced graph morphism p_{φ,q} : Act(Z̃, G̃, S̃) → Act(Z, G, S) is a covering projection of action graphs.
Proof. The mapping φ + φ × q|_S̃ satisfies (1) and (2). Hence it is a graph
morphism. Since it is onto with q|_S̃ a bijection, it is a covering projection.
Example 4.2 Let the action (Z, G) be transitive. Then the mapping Act(Z, G, S) → Sch(G, G_b, S), where c(b g) := G_b g, is an equivariant isomorphism.
Example 4.3 Let H ≤ H′ ≤ G. Then Sch(G, H, S) → Sch(G, H′, S), Hg ↦ H′g, is an equivariant covering projection.
Example 4.4 Let K ⊴ G be contained in the kernel of the action. Then the natural morphism Act(Z, G, S) → Act(Z, G/K, SK/K) is an invariant
1-fold covering projection, and hence an invariant isomorphism.
Example 4.5 Let (id, q) : (Z, G) → (Z, Ḡ) be an invariant covering of actions.
Then Act(Z, G, S) is invariantly isomorphic to Act(Z, Ḡ, q(S)). Thus, in studying
action graphs we may restrict to faithful actions by taking the quotient by the kernel of the action. A
similar result dealing with isomorphisms of Cayley digraphs can be found in [22].
Example 4.6 A homomorphism M̃ → M of oriented maps is a morphism of the
underlying graphs which extends to a mapping between the supporting surfaces.
Topologically it corresponds to a branched covering with possible singularities in
face-centres, edge-centres and vertices. Combinatorially we have a mapping of the
respective dart-sets : ~
D such that (~x ~
This together with ~
R 7! R denes a covering of actions and consequently,
a covering projection of action graphs Map( ~
Example 4.7 Recall from Example 2.3 that transitive and faithful actions of quotient
groups of G can be "modeled by conjugacy classes of subgroups" in G. It follows
that there exists a covering projection of the respective action graphs
arising from a covering of actions if and only if G^Q_b is contained in a conjugate
subgroup of G^{Q′}_{b′}. A special case is essentially considered in [50]. The situation as
described above is encountered in the theory of maps and hypermaps.
Example 4.8 Oriented maps and their homomorphisms can be modeled by conjugacy
classes within triangle groups, see Jones and Singerman [25]. The idea extends
to all maps [10] and even hypermaps [27].
5 Structure-preserving morphisms
Nonisomorphic actions can give rise to isomorphic graphs, as shown by Examples 4.4,
4.5 and by Example 5.1 below. Also, isomorphic actions can have isomorphic graphs
with no graph isomorphism arising from an isomorphism of actions, see Example 5.2.
Example 5.1 The triangular prism is a Cayley graph for the groups S_3 and Z_6. It
is also an action graph for the group S 4 , obtained by representing S 4 as the subgroup
of S 6 generated by permutations (12)(45)(36), (23)(56)(14) and (13)(46)(25). See
also Example 10.1.
Example 5.2 Take a Cayley graph Cay(G; S) where the generating set S is not a
CI-set. Then there is a generating Cayley set T with Cay(G, S) ≅ Cay(G, T) such
that no automorphism of G maps S onto T.
In view of these remarks we note the following. When considering an action graph
as having a certain structure arising from the action, we are actually considering
the induced equivariant covering Act(Z, G, S) → mnp(S). A covering Act(Z̃, G̃, S̃) → Act(Z, G, S) is structure-preserving if there exists a mapping of the
monopoles mnp(S̃) → mnp(S) which, together with the given covering, is a morphism of the covering projections onto the respective monopoles. In other words, the induced mapping of darts does not depend on the
vertex but only on the element of S̃. For example, coverings arising from coverings
of actions are structure-preserving, with mnp(S̃) → mnp(S) induced by q|_S̃.
Proposition 5.3 Let p : Act(Z̃, G̃, S̃) → Act(Z, G, S) be a structure-preserving
covering, where S̃ and S are generating (multi)sets and G is faithful.
Then this covering arises from a covering of actions.
Proof. By induction we have φ(z̃ s̃_1 ⋯ s̃_n) = φ(z̃) φ_S(s̃_1) ⋯ φ_S(s̃_n) for each z̃ ∈ Z̃ and any choice of generators s̃_1, …, s̃_n ∈ S̃. Let s̃_1 ⋯ s̃_n = 1. As
φ is onto and G faithful, we have φ_S(s̃_1) ⋯ φ_S(s̃_n) = 1. Hence φ_S extends to a
homomorphism, as required.
Example 5.4 A covering projection of action graphs Map(D̃, R̃, L̃) → Map(D, R, L) such that R̃ ↦ R and L̃ ↦ L
arises from a covering of actions ⟨R̃, L̃⟩ → ⟨R, L⟩, and
hence represents a homomorphism of the respective maps.
Although coverings of action graphs, even isomorphisms, in general do not arise
from actions nor are at least structure-preserving, we may still ask the following.
Let Act(Z̃, G̃, S̃) → X and X̃ → Act(Z, G, S) be covering projections. Is there an
action-structure for the graph X (respectively, X̃) such that these projections arise
from coverings of actions?
Theorem 5.5 Let φ^+ : Act(Z̃, G̃, S̃) → X be a covering projection. Then there
exists an action graph structure for X such that φ^+ is equivalent to a covering
arising from actions if and only if there exists a covering projection X → mnp(S̃)
whose composition with φ^+ is the natural projection Act(Z̃, G̃, S̃) → mnp(S̃), that is, which makes the triangle
Act(Z̃, G̃, S̃) → X → mnp(S̃)
commutative. In this case the action-structure for X can be chosen in such a way
that the respective covering is equivariant.
Proof (Sketch). If φ^+ is equivalent to a projection arising from a covering of
actions, then such a decomposition clearly exists. Conversely, let Z be the vertex-set
of X. One can show easily that z g̃ := φ(z̃ g̃), z̃ ∈ φ^{-1}(z), is a well defined action
of G̃ on Z, with (φ, id) : (Z̃, G̃) → (Z, G̃) an equivariant covering of actions. The
projection p_{φ,id} : Act(Z̃, G̃, S̃) → Act(Z, G̃, S̃) is equivalent to φ^+.
We can interpret Theorem 5.5 by saying that a quotient of a Schreier coset graph
decomposing the natural projection onto a monopole is again a Schreier coset graph
of the same group. Stated in this form, the result is due to Siran and
Skoviera [50].
We now turn to the second question above. We assume that (Z, G) is transitive
and that S is a generating Cayley (multi)set. We may also assume that the covering
projection p : X̃ → Act(Z, G, S) is given by means of a voltage
space (F, Γ; ξ) on Act(Z, G, S).
Theorem 5.6 With the notation and assumptions above there exists an action graph
structure for Cov(F, Γ; ξ) such that the natural projection p_ξ : Cov(F, Γ; ξ) → Act(Z, G, S)
composed with the natural projection Act(Z, G, S) → mnp(S) exhibits Cov(F, Γ; ξ) as a covering of a monopole, making the corresponding diagram commutative.
Moreover, if G is faithful then p_ξ is equivalent to a covering arising from actions.
Proof. The derived graph has Z × F as the vertex-set and Z × S × F as the dart-set, with the incidence function and the switching involution being, respectively,
beg(z, s, i) = (z, i) and (z, s, i)^{-1} = (z s, s^{-1}, i ξ(z, s)). By the unique walk lifting,
the collection of closed walks in Act(Z, G, S) representing the orbits of s ∈ S lifts to
a collection of closed walks representing a permutation of Z × F which we denote
by π_s. This permutation is defined by (z, i) π_s := (z s, i ξ(z, s)). Note that π_s
is a bijection and that π_{s^{-1}} = π_s^{-1}.
Let G̃ := ⟨π_s | s ∈ S⟩ and S̃ := {π_s | s ∈ S}. The action graph Act(Z × F, G̃, S̃)
has Z × F as the vertex-set and Z × F × S̃ as the dart-set. The incidence is given by
beg((z, i), π_s) = (z, i) and the switching involution is inv((z, i), π_s) = ((z, i) π_s, π_s^{-1}).
The mappings id on vertices and (z, s, i) ↦ ((z, i), π_s) on darts define an isomorphism
Cov(F, Γ; ξ) → Act(Z × F, G̃, S̃). This induces a projection Act(Z × F, G̃, S̃) → Act(Z, G, S) equivalent to p_ξ, and the natural projection Act(Z × F, G̃, S̃) → mnp(S̃)
induces an equivalent natural projection Cov(F, Γ; ξ) → mnp(S̃) making the required
diagram commutative.
The last statement in the theorem follows by Proposition 5.3.
Example 5.7 Let a connected graph Cov(F, Γ; ξ) be a covering of the action graph
Map(D, R, L) associated with a map M, and let Cov(F, Γ; ξ) inherit the action-structure
as in Theorem 5.6. Since L is an involution, Cov(F, Γ; ξ) is the action
graph Map(D × F, π_R, π_L) of a map M̃, and the covering projection essentially
arises from the map homomorphism M̃ → M, by Example 5.4.
This shows that homomorphisms of oriented maps can be studied just by considering
coverings of associated action graphs. Compare with the discussion on Schreier
representations of maps in [41]. We define a map homomorphism M̃ → M to be
regular if Map(D̃, R̃, L̃) → Map(D, R, L) is a regular covering. See also [36].
Finally, let us briefly consider the following question. What is the necessary and
sufficient condition for a connected graph Cov(F, Γ; ξ) as in Theorem 5.6 to be the
Cayley graph of the group G̃? First of all, π_{s_1} ⋯ π_{s_n} fixes (z, i)
if and only if s_1 ⋯ s_n fixes z and the voltage of the corresponding closed walk at z fixes i ∈ F,
for each z ∈ Z. Thus, assuming that G is faithful and the covering is regular, the
answer to the above question is the following: any closed walk with trivial voltage
must correspond to a relation s_1 ⋯ s_n = 1, and in this case, all closed walks corresponding
answer to the above question is the following: any closed walk with trivial voltage
must correspond to a relation s 1 and in this case, all closed walks corresponding
to this word must have trivial voltage. In particular, a regular covering
of a Cayley graph as in Theorem 5.6 is the required Cayley graph if and only if
the following holds: whenever a closed walk has trivial voltage then all closed walks
corresponding to the respective word must have trivial voltage. This condition is
equivalent to saying that the equivariant group lifts, see Section 8 and [35].
Example 5.8 Let M̃ → M be a regular homomorphism of oriented maps, where M
is a regular map (see Section 7). Since the action graph Act(M̃) can be reconstructed
from the action graph Act(M) as in Theorem 5.6 (see Theorem 6.1), it follows that
M̃ is a regular map if and only if Aut M lifts (see Section 7 and Example 8.9, and
also [36]).
6 Reconstruction
Let (φ, q) : (Z̃, G̃) → (Z, G) be a covering of transitive actions, let S̃ ⊆ G̃ be a generating
Cayley (multi)set and S = q(S̃) a (multi)set in bijective correspondence with
S̃. We would like to reconstruct the action graph Act(Z̃, G̃, S̃) from Act(Z, G, S) in
terms of voltages.
This can be done by means of a canonical voltage space (Z̃, G̃; ζ) (relative to
(φ, q)) on Act(Z, G, S), with voltages on darts defined by the rule ζ(z, s) := s̃, where s̃ ∈ S̃ is the element corresponding to s.
The derived covering graph Cov(Z̃, G̃; ζ) has vertex-set Z × Z̃ and dart-set Z × S × Z̃. The incidence function beg is given by the projection
beg(z, s, z̃) = (z, z̃), and the switching involution is inv(z, s, z̃) = (z s, s^{-1}, z̃ ζ(z, s)). The corresponding local group is Loc_b = ζ(W_b) = q^{-1}(G_b). Its action on
Z̃ is, modulo relabeling, the same as the action of W_b on the vertex fibre over b in
Cov(Z̃, G̃; ζ). But W_b and q^{-1}(G_b) also act on the fibre fib_b = φ^{-1}(b) in Act(Z̃, G̃, S̃). In fact,
these two actions are related by an invariant covering of actions.
Theorem 6.1 With the notation above, the component C(b, b̃) of Cov(Z̃, G̃; ζ)
containing the vertex (b, b̃) consists exactly of all vertices of the form (z, z̃), z = φ(z̃), and the restriction C(b, b̃) → Act(Z, G, S) is equivalent to
p_{φ,q} : Act(Z̃, G̃, S̃) → Act(Z, G, S). (If G̃ is a permutation group, then the action graph
structure imposed on C(b, b̃) as in Theorem 5.6 coincides with Act(Z̃, G̃, S̃).)
Next, all restrictions of Cov(Z̃, G̃; ζ) → Act(Z, G, S) to its connected components
are equivalent to p_{φ,q} : Act(Z̃, G̃, S̃) → Act(Z, G, S) if and only if the
restrictions of the action of q^{-1}(G_b) on all its orbits have the same conjugacy class
of stabilizers. In particular, let the action of G̃ be such that no stabilizer is properly
contained in another stabilizer (say, the group is finite). Then all restrictions of p_ζ
to its connected components are equivalent to p_{φ,q} if and only if q(G̃_b̃) lies in the core of G_b in G.
Proof. Let (z, ũ) be in the same component as (b, b̃). Then (z, ũ) = (b, b̃) W for some walk W, and since voltages project onto the corresponding generators we get ũ ∈ φ^{-1}(z); hence the vertices in this component
are of the required form. If (z, z̃) and (u, ũ) have the same label z̃ = ũ, then z = φ(z̃) = φ(ũ) = u.
Hence no two vertices are labeled by the same label. Moreover,
all labels from Z̃ actually appear since S̃ generates G̃ and G̃ is transitive on Z̃.
The fibre over b in C(b, b̃) is labelled by the orbit of Loc_b on Z̃ which is precisely φ^{-1}(b).
It follows that the actions of W_b on the fibres over b in C(b, b̃) and in Act(Z̃, G̃, S̃)
are essentially the same. Hence the restrictions C(b, b̃) → Act(Z, G, S) and
Act(Z̃, G̃, S̃) → Act(Z, G, S) are equivalent. The explicit graph isomorphism which
establishes this equivalence is (z, z̃) ↦ z̃ on vertices and (z, s, z̃) ↦ (z̃, s̃) on darts. If
G̃ is faithful, then the action graph structure imposed on C(b, b̃) as in Theorem 5.6
obviously coincides with Act(Z̃, G̃, S̃).
Clearly, all restrictions of p_ζ to the components of Cov(Z̃, G̃; ζ) are equivalent if
and only if the induced actions of Loc_b on its orbits in Z̃ are equivariantly
isomorphic, that is, all of these actions must have the same conjugacy class of
stabilizers in Loc_b.
Observe that Loc_b ∩ G̃_z̃ = G̃_z̃ for z̃ ∈ φ^{-1}(b), since each such stabilizer lies in q^{-1}(G_b) = Loc_b.
Now, if all stabilizers of G̃ are contained in Loc_b, then all the above actions of Loc_b
do have the same conjugacy class of stabilizers. Conversely, the fact that
each stabilizer Loc_b ∩ G̃_ũ is also the stabilizer of some point z̃ ∈ φ^{-1}(b) implies
G̃_z̃ ≤ G̃_ũ. Since the action of G̃ is not pathological, we have equality, and hence each
stabilizer of G̃ must be contained in Loc_b. This condition can be further
rephrased as follows. Since q is onto and G̃_z̃, z̃ ∈ Z̃, are conjugate subgroups, we
have that every stabilizer of G̃ is contained in q^{-1}(G_b) if and only if every conjugate of q(G̃_b̃) is contained in G_b,
that is, if and only if q(G̃_b̃) lies in the core of G_b in G. This completes the proof.
Example 6.2 The action graph Act(Z̃, G̃, S̃) can be reconstructed from Act(Z, G, S)
by taking just any connected component of the derived covering, for instance, when
G̃ (or G) acts with a normal stabilizer. A special case is encountered when reconstructing
Cayley graphs from Cayley graphs of quotient groups [19].
Cayley graphs form Cayley graphs of quotient groups [19].
Example 6.3 Consider the dihedral group
1i with subgroups
Then Sch(G; H;S) equivariantly covers Sch(G; H 0 ; S). Since not all conjugates of
H, the graph Sch(G; H;S) cannot be reconstructed by taking just any
connected component of the derived covering Cov(HjG; G; ).
In this particular case we can also apply the Burnside-Frobenius counting lemma.
Namely, the action of H 0 on the cosets of H has two orbits, whereas the derived
covering is 9-fold.
7 Automorphisms
We henceforth assume that actions are transitive and that Cayley (multi)sets generate
the groups in question. A bijective self-mapping φ^+ of Z + Z × S is an automorphism
of Act(Z, G, S) if and only if φ^+(z, s) = (φ(z), φ_z(s)), where {φ_z | z ∈ Z} is
a collection of bijective self-mappings of S satisfying φ(z s) = φ(z) φ_z(s) and φ_{z s}(s^{-1}) = (φ_z(s))^{-1}.
Studying general automorphisms, even the subgroup of
structure-preserving ones, is difficult. Next in line is the subgroup Aut_S(Z, G) of
action-automorphisms, those for which the mapping of darts does not depend on the vertex and extends to (or just is) an automorphism
of G. Proposition 5.3 implies Proposition 7.1.
Proposition 7.1 Let a transitive action (Z, G) be faithful, and let S ⊆ G be a
generating Cayley (multi)set. Then each structure-preserving automorphism of Act(Z, G, S), that is, each automorphism of the form
φ + φ × φ_S, is an action-automorphism.
of G, and let Adm S
its S-admissible subgroup formed
by automomorphisms which preserve S as well as the conjugacy class of stabilizers
of G (as a set). Of course, + is an action-automorphism of Act(Z; G; S) if and
only if (;
Z G. In particular, we can identify Eq(Z)G ,
the equivariant group of automorphisms of Act(Z; G; S), and the equivariant group
of the action, Aut(Z)G . It is easy to see that the projection Aut S (Z;
is a group epimorphism with kernel Aut(Z)G . We state this observation
formally.
Theorem 7.2 The group Aut_S(Z, G) of action-automorphisms of Act(Z, G, S) is
isomorphic to an extension of the equivariant group Aut(Z)_G by Adm_S^Z G.
While Aut(Z)_G is isomorphic to N_G(G_b)/G_b, the identification of Adm_S^Z G and
the extension itself might not be easy. Here is a simple and well-known example.
Example 7.3 In view of Proposition 7.1 there are two kinds of automorphism of
Cayley graphs. Those for which the mapping of darts is not constant at all points,
and those which are action-automorphisms. Let φ^+ be an action-automorphism.
Then φ(g) = a τ(g), for some a ∈ G and τ ∈ Adm_S G.
Moreover, the assignment (a, τ) ↦ φ^+ is an isomorphism G ⋊ Adm_S G →
Aut_S(G, G)_r. Hence the group of action-automorphisms Aut_S(G, G)_r of Cay(G, S)
is isomorphic to a subgroup in the holomorph of G.
The group Aut_S(G, G)_r can as well be characterized as the normalizer of the left
regular representation {φ_a | a ∈ G} of G within the full automorphism group of
Cay(G, S) [18].
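For instance (a small worked illustration): for the $n$-cycle $\mathrm{Cay}(\mathbb{Z}_n,\{\pm1\})$ with $n\ge 3$, the $S$-preserving automorphisms of $\mathbb{Z}_n$ are $\pm\mathrm{id}$, so
\[
\mathrm{Aut}_S(\mathbb{Z}_n,\mathbb{Z}_n)_r \;\cong\; \mathbb{Z}_n\rtimes\{\pm\mathrm{id}\}\;\cong\; D_n \;\le\; \mathrm{Hol}(\mathbb{Z}_n).
\]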
Example 7.4 From the very definition it follows that the automorphism group
Aut M of an oriented map M can be identified with the equivariant group Aut(D)_{⟨R,L⟩}
[25].
Example 7.5 An oriented map is called regular if its automorphism group acts
transitively (and hence regularly) on the dart-set of the underlying graph. Recall
that the equivariant group acts regularly if and only if the group itself is regular.
Therefore, a map is regular if and only if Map(D, R, L) is a Cayley graph
for the group ⟨R, L⟩ [25, 40]. Since any Map(D, R, L) is equivariantly covered by
Cay(⟨R, L⟩, {R, R^{-1}, L}), every map is a (branched) regular quotient of some regular
map [25, 26, 46]. The idea extends to maps on bordered surfaces [5].
As a last remark, let us ask what are the necessary and sufficient conditions
for a transitive faithful action (Z, G) to extend to a group of automorphisms of
Act(Z, G, S). This problem is of interest [45] (however, action graphs in [45] differ
from ours), and difficult in general. But let us assume that the induced automorphism
group preserves the natural structure as a covering of mnp(S). Then the
answer is trivial.
Proposition 7.6 Let the action (Z, G) be transitive and faithful, and let S ⊆ G
be a generating Cayley (multi)set. Then T(G) := {t_g | g ∈ G}, where t_g(z) = z g and
t_g(z, s) = (z g, g^{-1} s g), is a group of (action) automorphisms of Act(Z, G, S) if and only if
S is a union of conjugacy classes. In particular, T (G) acts as a subgroup of the
equivariant group if and only if the action graph is a Cayley graph Cay(G; S) and G
is abelian.
Other types of symmetries of action graphs will not be discussed here. Arc-
transitivity of Cayley digraphs is considered, for instance, in [2, 22].
8 Lifting and projecting automorphisms
Let p_{φ,q} : Act(Z̃, G̃, S̃) → Act(Z, G, S) be a covering arising from transitive actions,
where S̃ and S = q(S̃) are generating Cayley (multi)sets. The problem of lifting
automorphisms has recently obtained considerable attention [4, 5, 6, 9, 15, 23, 24]
in various contexts. From the general theory we infer that
the lifting condition, expressed in terms of the canonical voltages valued in G̃, reads:
An automorphism f lifts along p_{φ,q} if and only if there exists z̃ ∈ fib_b such that,
for each W ∈ W_b,
ζ(f W) ∈ G̃_z̃ if and only if ζ(W) ∈ G̃_b̃.   (3)
However, we here explore the possibility that the lifting condition be expressed
without the usual explicit reference to mappings of closed walks and their voltages,
but, rather, expressed in terms of certain subgroups of G and G̃. To this end we
have to restrict our considerations, in spite of the fact that coverings arising from actions are somewhat peculiar,
either to a very special class of action-automorphisms
(the equivariant group), or else to action-automorphisms with additional requirements
imposed on the covering of actions (Example 8.2). We also note that the
imposed on the covering of actions (Example 8.2). We also note that the
lifts of action-automorphisms, although structure-preserving, need not be action-
automorphisms.
Example 8.1 Let G̃ be faithful. Then, by Proposition 7.1, a lift of an action-automorphism
of Act(Z, G, S) is an action-automorphism of Act(Z̃, G̃, S̃).
Example 8.2 Suppose that a covering of actions is equivariant. Then the conclusion
as in the previous example holds, too. Moreover, from (3) we easily derive that
an action-automorphism with group component τ lifts if and only if there exists g ∈ G such that
τ(G̃_b̃) = g^{-1} G̃_b̃ g. (An alternative direct proof avoiding (3) is readily
at hand and is left to the reader.) In particular, action-automorphisms do lift along
the equivariant covering Cay(G, S) → Act(Z, G, S).
We now focus our attention on the equivariant group Eq(Z)G of Aut(Z; G; S).
That one can derive a reasonable lifting condition in terms of subgroups of ~
G and G
is not surprising because with equivariant automorphisms we could as well consider
just coverings of actions.
Theorem 8.3 The equivariant group Eq(Z)G lifts along
G; ~
G; S) if and only if q(N ~
G ~ b )) intersects every coset of G b within NG (G b ),
and the lifted group is then a subgroup of the equivariant group Eq( ~
G . In particu-
lar, if the covering projection is regular, then Eq(Z)G lifts if and only if NG (G b )
q(N ~
Proof. We present a proof which avoids the reference to (3). Clearly, a lift of
an equivariant automorphism is equivariant. Consider a pair of equivariant automorphisms
of the covering graph and of the base graph, respectively. Their action
on vertices is given, relative to ~ b and
G ~ b , and c (b
be a lift of c + c id, that is, let ~ ~
We easily get q(~c) 2 G b c. Conversely,
if c and ~ c satisfy this condition, then ~ ~ c
~ c id is a lift of c id. The lifting
condition can now be expressed as: for each c 2 NG (G b ) there exists ~ c 2 N ~
such that q(~c) 2 G b c. The claim follows.
The covering projection p ;q is regular if and only q 1
G ~ b ), by Theorem
9.1 below. This implies G b q(N ~
G ~ b )), and hence the lifting condition now
obviously reduces to NG (G b ) q(N ~
G ~ b )). The alternative form follows because
contains Ker q.
Example 8.4 Let the group ~
G act with a normal stabilizer. Then the equivariant
group lifts. In particular, Eq(Z)G lifts along
G; ~
G; S).
Example 8.5 Let ~
M !M be a homomorphism of oriented maps. If ~
M is a regular
map, then AutM lifts [36].
Example 8.6 The lift of the equivariant group Eq(Z)G along a regular covering
projection p ;q is isomorphic to q 1 (NG (G b ))= ~
G ~ b .
Proposition 8.7 Let the covering
G; ~
Cay(G; S) be regular. If
the equivariant group of Cay(G; S) lifts, then Act( ~
G; ~
S) is isomorphic to the Cayley
graph Cay( ~
G ~
S).
Proof. By Theorem 8.3 we have q(N ~
G. Thus N ~
coset of Ker q. Since Ker q q 1
Theorem 9.1
below), N ~
contains Ker q. Hence N ~
G, and the proof follows.
Example 8.8 Consider a regular covering
G; ~
~
G is faithful. Then Act( ~
G; ~
S) is isomorphic to the Cayley graph Cay( ~
G; ~
S) if and
only if the equivariant group of Cay(G; S) lifts, by Example 8.4 and Proposition 8.7.
In view of the lifting condition (3) we may rephrase this as in Section 5.
Example 8.9 Let ~
M be a regular homomorphism, where M is a regular
map. If AutM lifts, then ~
M is also a regular map. In view of Example 8.5 we
obtain the if and only if statement of Example 5.8. See also [20, 36].
Let us now consider projecting automorphisms of Act( ~
G; ~
along p ;q . An
automorphism ~
projects whenever each vertex-bre is mapped onto some vertex-
bre and each dart-bre is mapped onto some dart-bre. If the covering is regular,
then an automorphism projects if and only if it normalizes the group of covering
transformations. This is actually a theorem of Macbeath [37] which holds for general
topological coverings as well as in our combinatorial context. A similar result for
digraphs is proved in [39].
Note that projections of action-automorphisms are structure-preserving but need
not be action-automorphisms. One particular instance when such projections are
indeed action-automorphisms is when the covering is equivariant. The case when G
acts faithfully is another. In general, the following holds.
Proposition 8.10 Suppose that an action-automorphism ~
~
of Act( ~
G; ~
projects along
G; ~
G; S). Then the projected automorphism
is an action-automorphism of Act(Z; G; S) if and only Ker q is invariant for ~
.
Proof. Let be the projected automorphism. We know that is dened
q 1 on S. Now extends to an automorphism of G if and
only if Equivalently, we must have
Ker q, that is, ~
Theorem 8.11 An action-automorphism ~
of Act( ~
G; ~
projects along
G; ~
G; S) if and only if there exists ~
G such that
~
and ~
Proof. First of all, if an action-automorphism is vertex-bre preserving, then
it is also dart-bre preserving. Moreover, it is enough to require that just one
vertex-bre is mapped to a bre. The proof of this fact is left to the reader. It
follows that ~
~
projects if and only if ~
maps
Writing ~ ~
g and taking into account that the stabilizers are conjugate
subgroups, we obtain the desired result.
Example 8.12 An action-automorphism ~
of Act( ~
G; ~
projects along
G; ~
only if ~
(Ker Ker q, and the projection
is necessarily an action-automorphism.
Theorem 8.13 The group Eq( ~
G projects along
G; ~
G; S)
if and only if N ~
(equivalently, q(N ~
)). The projected
group is a subgroup of Eq(Z)G . For regular coverings, the condition simplies
to
Proof. Clearly, the projection of an equivariant automorphism is equivariant. By
Theorem 8.11, an automorphism ~
c id, where ~
c~g and ~ c 2
G ~ b , projects if and only if there exists ~ g 2 ~
G such that ~ b ~ and
~
This is equivalent to saying that N ~
intersects the coset
~
should intersect every coset of
~
G ~ b within N ~
implying that N ~
should contain N ~
claimed. The alternative form is also evident. direct
proof similar to the proof of Theorem 8.3 is left to the reader.) The rest follows by
Theorem 9.1.
Example 8.14 Let G act with a normal stabilizer. Then the group Eq( ~
G projects.
In particular, it projects along
G; ~
Example 8.15 Let ~
M be a homomorphism of oriented maps, where M is a
regular map. Then Aut ~
projects [36].
Corollary 8.16 Let the group Eq( ~
G project along
G; ~
G; S).
Then Act(Z; G; S) is isomorphic to the Cayley graph Cay(G=GZ ; S).
Proof. By Theorem 8.13 we have
G. Hence G b is normal in G, and the
proof follows.
Example 8.17 Let ~
M be a homomorphism of oriented maps, where ~
M is a
regular map. In view of Example 8.15 and Corollary 8.16 the group Aut ~
projects
if and only if M is also a regular map. (The homomorphism itself must then be
regular, see Example 9.3.)
Corollary 8.18 Let the covering projection
G; ~
G; S) be
regular. Then the equivariant group Eq(Z)G lifts and the equivariant group Eq( ~
G
projects if and only if q(N ~
In this case, Eq(Z)G lifts to Eq( ~
G
and Eq( ~
G projects onto Eq(Z)G .
Proof. By Theorems 8.3 and 8.13 we must have
q(N ~
and the claim follows. The last statement is evident as
well.
Example 8.19 The projection
G; ~
Cay(G; S) is regular (see Example
9.3). The left regular representation of G lifts to the left regular representation
of ~
G, and hence the latter projects onto the former. In particular, if ~
a homomorphism of regular maps, then Aut M lifts to Aut ~
M and Aut ~
projects
9 The structure of lifted groups
Theorem 9.1 Let a covering projection
G; ~
G; S)
arise from a covering of transitive actions, where ~
S and
are generating
Cayley (multi)sets. Choose ~ b 2 ~
Z and base-points. Then:
(a)
is a subgroup of Eq( ~
G .
G ~ b g is
isomorphic to (q 1
G ~ b .
(c) The covering projection p is
G ~ b ]-fold, and is regular if and only if
~
Proof. The statement (a) is obvious. Since ~
G is transitive on ~
Z we can explicitly
calculate the elements of Aut( ~
G relative to ~ b as ~ ( ~ b ~
G ~ b . Now b.
Consequently, ~ c 2 q 1 (G b ), giving (b).
The rst statement of (c) follows from the fact that the covering is connected.
Indeed, the group q 1 acts transitively on the bre 1 (b) and has ~
G ~ b as its
stabilizer. The covering is regular, by denition, if CT(p) is transitive on 1
)g. But this holds if and only if q 1
part (c) follows.
Example 9.2 Let H H 0 G, and let
be the corresponding covering projection, where g. Then
is isomorphic to (H 0 \N(H))=H.
The covering projection is [H and is regular if and only if H / H 0 .
Example 9.3 A covering
G; ~
G; S) is always regular. Hence
a homomorphism ~
of oriented maps, where ~
M is a regular map, must be
a regular homomorphism. In particular, homomorphisms between regular maps are
regular [36].
A lifted group of automorphisms is an extension of the group of covering transformations
by the respective group of the base graph. This extension is di-cult to
analyze in general [9, 20, 33, 35, 36, 51]. We end this section by considering the
case when the equivariant group lifts as a split extension along a covering projection
arising from actions. As the equivariant group acts without xed points, each orbit
of an arbitrary complement to CT(p) within the lifted group intersects each bre
in at most one point (thus forming an invariant transversal). This is equivalent
with the requirement that the covering projection be reconstructed by means of a
voltage space for which the distribution of voltages is well behaved relative to the
action of the equivariant group. The claim takes a particularly nice form whenever
the covering projection is regular [33, 35]. A straightforward application of these
considerations to regular homomorphisms of oriented maps gives Theorem 9.4. A
direct proof in terms of voltages associated with angles of the map can be found in
[36]. By
we denote the set of all walks with endvertices in a subset of vertices
Theorem 9.4 Let ~
M be a regular homomorphism of oriented maps, and let
be an orbit (or a union of orbits) of a dart in M relative to AutM . Then AutM
lifts as a spit extension of CT (the lift of the identity automorphism) if and only
if the covering projection of action graphs Act( ~
can be reconstructed
by means of a Cayley voltage space (CT; CT; ) such that the set of walks fW 2
in Act(M) is invariant for the action of AutM . Moreover, the
extension is a direct product if and only if Act( ~
can be reconstructed
by a Cayley voltage space (CT; CT; ) such that each of the sets fW 2
is invariant for the action of AutM .
Proof. The map automorphism group corresponds to the equivariant group in the
associated action graph, and the lift of the identity corresponds to the group of
covering transformations for the regular covering projection of the respective action
graphs. The theorem follows by applying Theorems 9.1 and 9.3, and Corollaries 9.7
and 9.8 of [35].
Generators and relations
a nonempty antisymmetric subset, that is, ! S \
is either
empty or else all of its elements are of order 2. With (Z; G) and ! S we associate
the action digraph Act(Z; G; with the vertex-set Z and the arc-set Z ! S , where
in the case of action graphs it is sometimes
necessary to consider the set ! S as a multiset. The underlying graph of Act(Z; G;
is the action graph act(Z; G; ~
G;
). (Note that involutory loops
collapse to semiedges.) Omitting formal basic denitions we only mention that
epimorphisms of actions give rise to covering projections of action digraphs.
The study of group presentations involves a variety of techniques, see [13, 31, 32,
48] and the references therein. Action digraphs can often provide much insight to a
formal algorithmic approach, and were (in disguise) at least partially present in the
original works of Reidemeister and Schreier.
Choose a spanning tree in Act(Z; G; ~
S), where (Z; G) is transitive and ~
generates
G. Each cotree arc gives rise to a unique fundamental closed walk based at z 2 Z,
and the set of all such walks generates the set of all closed walks at z, up to reduction.
Thus, if ~
C is a set of labels bijectively associated with all the cotree arcs, which
evaluate to words in ( ~
dened by the fundamental closed walks rooted at
z 2 Z, then each element of G z , expressed as a word in ( ~
can be written
as a word in ( ~
. This is done by trailing the closed walk associated with a
given word in ( ~
simultaneously keeping track of the labels in ~
when traversing a cotree arc. The process is known as the rewriting process relative
to z 2 Z. A variant of the Schreier-Reidemeister theorem now states the following.
Ri be a presentation of G, and let RewR be the set of (reduced) words
in ( ~
obtained from all the relators in R by a rewriting process relative
to all z 2 Z. Then the stabilizer G b has the presentation h ~
C; RewRi. Many of
the generators and relators obtained by this method can be redundant. However,
sophisticated techniques for simplifying the presentation do exist in certain cases
[12, 13, 32, 48, 52].
An action digraph Act(Z; G; ~
S), where ~ S generates G, obviously determines the
group G=GZ up to isomorphism (also if G is not transitive). Suppose that the
action digraph is nite. Denote by ~
1 the generators of the stabilizer G b 1
, expressed
as words in (
associated with fundamental closed walks at b 1 relative
to a spanning tree in the appropriate component of Act(Z; G; ~
S). By repeating
this process on Act(Znfb 1
on, we can
recursively construct a generating set for the pointwise stabilizer GZ . Hence if G is
faithful, we can nd a presentation of G. If j ~
then the number
of generators of GZ obtained in this way can amount up to (n 1)m!+ 1. Thus, the
method is not practical unless one can detect enough many redundant generators at
each step, or has su-cient control over the recursive construction of the generators.
As for the improvements which allow eective computer implementation we refer to
[11, 12, 48] and the references therein.
Example 10.1 We leave to the reader to check that the following three permutations
in the symmetric
a subgroup which is isomorphic to S 4 .
Despite the remarks above, Act(Z; G; ~
proves useful in gathering at least partial
information about the dening relations, particularly when its underlying graph
is highly asymmetric with special structure. The idea is to use graph-theoretical
properties of Act(Z; G; ~
S) to derive such information. The following example is
taken from [34].
Example 10.2 Consider the alternating group A n , where n 11 is odd, and the
generators
analysis of the action digraph shows that in the Cayley
graph the cycles of girth-length, which is 6, arise essentially
from the relation (ab 1 that cycles of length n arise essentially from the
obvious relations a This information is crucial in proving that the above
Cayley graph is 1/2-transitive.
What we have discussed so far can be applied to a problem encountered with lifts
of automorphisms. Let be a covering projection of connected
graphs (or even more general topological spaces, see [33]), given by means of a voltage
space acts faithfully on F . A necessary condition (also su-cient
if the covering is regular) for an automorphism to lift is that the set of all closed
paths with trivial voltage be invariant under its action [33, 35]. In order to test
this eectively (assuming of course, that the covering has nite number of folds and
that the fundamental group of X is nitely generated) we need the generators of the
kernel of : b ! , expressed in terms of a generating set ~
S of b , save for those
cases where ad-hoc techniques apply. One possibility is to consider the auxiliary
regular covering X. The required generators of Ker are
then obtained by projecting the generators of ( ~ b; Cov), where ~ b 2 b b . However,
this requires the construction of Cov(;; ), which is not always appropriate.
A better alternative is to consider the fundamental group b acting on by right
multiplication (). The stabilizer of this action is Ker , and so the required
generators can be found by means of a spanning tree in Act(; b ; ~
S)).
It is close to rst constructing the coset representatives of Ker , that is, nding a
closed path (rooted at the based point) with voltage , for all 2 , and then
applying the Schreier method. Another possibility is to consider the action of b
on the abstract bre F given by i (). The kernel Ker is then equal to
the pointwise stabilizer of this action. Thus, the required generators can be found
recursively by considering the action digraph Act(F; b ; ~ S).
A similar problem is to construct a generating set for the trivial voltage paths
with endpoints in an orbit of a given group A of automorphisms of X. Namely, a
necessary and su-cient condition for A to lift along a regular covering projection as
a special kind of split extension of CT(p) (one with an invariant transversal [33, 35])
is that the set of paths as above be invariant for the action of A (recall Theorem 9.4).
Suppose that X is a graph
and
a vertex orbit. Introduce a new vertex B not in X,
and connect B with all the vertices in
Moreover, extend the voltage assignment
so that these new edges carry the trivial voltage. The required generating set of
walks is obtained in the same way as before by considering the extended graph with
B as the base point. Note that the group A has the required type of lift if and
only if, viewed as the stabilizer of B in the extended graph, lifts along the extended
covering projection. (The idea clearly extends to nite CW-complexes.)
The preceding discussion is summarized in the following theorem.
Theorem 10.3 With notation and assumptions above, the problem whether a given
group A of automorphisms of a graph X (or a more general topological space) lifts
along a regular covering projection given by a faithful voltage space can be
tested in time, proportional to the number of generators of A, multiplied by the time
required for the construction of Cay(; ( ~
and its spanning tree (or multiplied by
the time required for the construction of Act(F; b ; ~
S) and nding the generators of
the pointwise stabilizer).
If X is a graph (or a nite CW-complex), a similar statement holds for the
problem whether a given group of automorphisms of X lifts as a split extension of
CT(p) with an invariant transversal.
--R
On group graphs and their fault tolerance
A. group theoretic model for symmetric
Group action graphs and parallel architectures
Homological coverings of graphs
Some applications of graph contractions
Conjugacy graphs with an application to imbedding metric graphs
Foundations of the theory of maps on surfaces with boundary
Cambridge University Press
Construction of de
Generators and relations for discrete groups
Isomorphism classes of concrete graphs coverings
Routage uniformes dans les graphes sommet-transitifs
On the full automorphism group of a graph
Graph homomorphisms: structure and symmetry
Isomorphisms and automorphisms of coverings
Graph covering projections arising from
Theory of maps on orientable surfaces
Theory Ser.
Group actions
Isomorphism classes of graph bundles
Theory Ser.
Combinatorial group theory: Presentations of groups in terms of generators and relations
On a theorem of Hurwitz
Automorphisms of groups and isomorphisms of Cayley digraphs
Vega version 0.2 quick reference manual and Vega graph gallery
Automorphism groups of covering graphs
Computation with
Graph coverings and group liftings
Surfaces and planar discontinuous groups
--TR
Topological graph theory
On group graphs and their fault tolerance
A Group-Theoretic Model for Symmetric Interconnection Networks
Group action graphs and parallel architectures
Lifting map automorphisms and MacBeath''s theorem
Isomorphisms and automorphisms of graph coverings
Which generalized Petersen graphs are Cayley graphs?
Graph covering projections arising from finite vector spaces over finite fields
Automorphism groups of covering graphs
Group actions, coverings and lifts of automorphisms
Isomorphism Classes of Concrete Graph Coverings
Constructing 4-valent <inline-equation> <f> <fr><nu>1</nu><de>2</de></fr></f> </inline-equation>-transitive graphs with a nonsolvable automorphism group
Lifting graph automorphisms by voltage assignments
Strongly adjacency-transitive graphs and uniquely shift-transitive graphs
--CTR
Tomaz Pisanski , Thomas W. Tucker , Boris Zgrabli, Strongly adjacency-transitive graphs and uniquely shift-transitive graphs, Discrete Mathematics, v.244 n.1-3, p.389-398, 6 February 2002
Aleksander Malni , Roman Nedela , Martin koviera, Regular homomorphisms and regular maps, European Journal of Combinatorics, v.23 n.4, p.449-461, May 2002 | regular map;voltage group;cayley graph;lifting automorphisms;covering projection;action graph;schreier graph;group action;group presentation |
606529 | Letter graphs and well-quasi-order by induced subgraphs. | Given a word w over a finite alphabet and a set of ordered pairs of letters which define adjacencies, we construct a graph which we call the letter graph of w. The lettericity of a graph G is the least size of the alphabet permitting to obtain G as a letter graph. The set of 2-letter graphs consists of threshold graphs, unbounded-interval graphs, and their complements. We determine the lettericity of cycles and bound the lettericity of paths to an interval of length one. We show that the class of k-letter graphs is well-quasi-ordered by the induced subgraph relation, and that it has a finite set of minimal forbidden induced subgraphs. As a consequence, k-letter graphs can be recognized in polynomial time for any fixed k. | Introduction
In graph theory, a reflexive and transitive relation is called a quasi-order. A quasi-order -
on X is a well-quasi-order if for any infinite sequence a 1 ; a there are indices
such that a i - a j . Equivalently, X contains no infinite strictly decreasing sequences and no
infinite antichains. Yet another equivalent characterization of well-quasi-orders is that every
nonempty subset of X has a nonzero finite number of minimal elements (cf. [9, 12]).
By the famous Graph Minor Theorem of N. Robertson and P. D. Seymour, the graph
minor relation is a well-quasi-order on the class of all graphs. This, however, is not true for
the more restrictive relations such as the topological minor (or homeomorphic embeddabil-
ity), the subgraph, and the induced subgraph relations. It is therefore of interest to identify
restricted classes of graphs which are well-quasi-ordered by these relations. For example,
the class of all trees is well-quasi-ordered by the topological minor relation, according to a
well-known theorem of J. B. Kruskal [11]. G. Ding has proved that a subgraph ideal (i.e., a
class of graphs closed under taking subgraphs) is well-quasi-ordered by the subgraph relation
if and only if it contains at most finitely many graphs C n and F n (C n being the cycle on
vertices, and F n the path on n vertices with two pendant edges attached to each of its
endpoints).
Concerning the induced subgraph relation - i that we shall consider here, the following
is known. P. Damaschke [3] has proved that P 4 -reducible graphs (i.e., graphs in which all
induced paths on four vertices are vertex-disjoint) are well-quasi-ordered by - i . G. Ding has
proved that the following classes of graphs are well-quasi-ordered by
the class of graphs G such that for some R ' V (G) with jRj - r, the graph G \Gamma R
has matroidal number at most three [6],
ffl any subgraph ideal which is well-quasi-ordered by the subgraph relation [5].
In [3] and [5], several further classes of graphs defined by excluding a finite set of forbidden
induced subgraphs have been shown well-quasi-ordered by - i .
In this paper we present another family of induced-subgraph ideals which are well-quasi-
ordered by - i . Given a word w over a finite alphabet and a set of ordered pairs of letters
which define adjacencies, we construct a graph which we call the letter graph of w. The
lettericity of a graph G is the least size of alphabet permitting to obtain G as a letter graph.
In Section 3 we state some basic properties of k-letter graphs. The class of 2-letter graphs is
described completely in Section 4: it is composed of threshold graphs, unbounded-interval
graphs, and their complements. In Section 5 we determine the lettericity of cycles and paths
(the latter only to within an interval of length one) and show that for large n there are
n-vertex graphs whose lettericity exceeds 0:707 n. In Section 6 we show that the class of
k-letter graphs is well-quasi-ordered by - i and has a finite set of minimal forbidden induced
subgraphs. As a consequence, for any fixed k the class of k-letter graphs can be recognized
in polynomial time.
Definitions and notation
Our graphs are undirected and simple. We write x -G y if x and y are adjacent vertices of
G. As a set of pairs, the adjacency relation in V (G) is denoted by AdjG . The complement
of a graph G is denoted by G. If A is a set of graphs we write A for the set fG ; G 2 Ag.
The disjoint union of G 1 and G 2 is denoted by G 1 and the disjoint union of n copies
of G is denoted by nG. As usual, K n denotes the complete graph on n vertices, K p;q the
complete bipartite graph on p + q vertices, P n the path on n vertices, and C n the cycle of
length n. The vertex set of P n is
The vertex set of C n is f0;
If A is a set of graphs closed under taking induced subgraphs we denote by Obs(A) the set
of obstructions or minimal forbidden induced subgraphs for A (i.e., the minimal elements of
the complement of A quasi-ordered by the induced subgraph relation). The isomorphism
relation among graphs is denoted by - =. By z(G) we denote the cochromatic number of G,
which is the minimum cardinality of a partition of V (G) into subsets that are either a clique
or an independent set.
Let \Sigma be a finite alphabet and \Sigma the set of all words over \Sigma (i.e., the free monoid
generated by \Sigma under concatenation). For a word
its reverse. If A is a set of words we write A R for Ag.
Let P ' \Sigma 2 be a fixed set of ordered pairs of symbols from \Sigma. To each word
its letter graph G(P; w) in the following way:
The vertices of G(P; w) are naturally labelled with the symbols of w.
Example 1 Take abcabc. The corresponding
letter graph G(P; w) is shown in Fig. 1 where vertex i is labelled with s i . In this case, G(P; w)
is the 6-cycle C 6 .
a b c a b c
Figure
1: C 6 as a 3-letter graph
Denote
G \Sigma (P);
Thus G k is the set of all graphs that are letter graphs over some alphabet of size k, and l(G)
is the least alphabet size that suffices to represent G as a letter graph. The graphs from G k
will be called k-letter graphs, and l(G) the lettericity of G. Example 1 shows that l(C 6 ) - 3.
3 Some properties of k-letter graphs
First we restate the definition of k-letter graphs in purely graph-theoretic terms.
Proposition 1 A graph G is a k-letter graph if and only if
1. there is a partition such that each V i is either a clique
or an independent set in G, and
2. there is a linear ordering L of V (G) such that for each pair of indices 1 -
the intersection of AdjG with V i \Theta V j is one of
(a)
(d) ;.
Proof: If G is a k-letter graph then
be the different symbols from \Sigma that actually appear in w. Define
If a i a clique, otherwise it is an independent set. Let L be the order induced
on the vertices of V (G) by the linear ordering of their labels in w, and 1 - i
distinguish four cases:
(a) a i a
only if xLy, so AdjG "
(b) a i a
(c) a i a In this case x - y for all x
(d) a i a
2 P: In this case x 6- y for all x
Conversely, let G be a graph on n vertices which satisfies conditions 1 and 2. Take
Number the vertices of G so that v 1
We claim that the mapping v i 7! i is an isomorphism from G to
First assume that x -G y. If must
be a clique in G, so a i a hence l -H m. If i 6= j we distinguish four cases
corresponding to those in condition 2:
In this case a i a j 2 P. As x L y, we have l ! m and
hence l -H m.
In this case a j a i 2 P. As x L \Gamma1 y, we have l ? m
and hence l -H m.
In this case a i a
This case is impossible because by assumption, (x; y) 2 AdjG "
Now assume that l -H m and, w.l.o.g., that l ! m. Then a i a y. If
then is a clique in G, so x -G y. If i 6= j we distinguish three cases corresponding
to those in the definition of P:
As x L y, it follows that x -G y.
As y L \Gamma1 x, it follows that x -G y.
In this case x -G y for all x
Corollary 1 Let G be a k-letter graph. Then V (G) can be partitioned into p - k sets
each of which is either a clique or an independent set in G, such that for each
pair of indices 1 - the family of neighborhoods N j
all a chain of subsets of V j .
Proof: Let L be the linear order on V (G) described in Proposition 1. Pick x; y
that x L y. If AdjG "
then z L x, so z L y and y -G z, hence N j (x) ' N j (y). If AdjG "
In all four cases, one
of N j (x), N j (y) is a subset of the other. 2
Next we list some simple observations without proof. be a bijection,
extended to \Sigma
1 as a homomorphism.
Proposition 2 (i) G(f(P);
(ii)
Corollary 2 (i) G \Sigma 2
(P).
Proposition 3 (i) If
only if G
If z is a (not necessarily contiguous) subword of w then G(P; z) is an induced subgraph
of G(P; w). Hence the set G \Sigma (P) is closed under taking induced subgraphs, and therefore
has a characterization with forbidden induced subgraphs. The same is true for G k . Thus
lettericity is a monotone parameter w.r.t. the induced subgraph relation.
4 2-letter graphs
By Proposition 1, 2-letter graphs are bipartite, split, or cobipartite graphs. In this section
we characterize cobipartite 2-letter graphs as unbounded-interval graphs, and split 2-letter
graphs as threshold graphs. We also show how our representation helps enumerate the
nonisomorphic n-vertex graphs in these classes. For a fixed set of pairs P write
this is an equivalence relation in the set
\Sigma n of words of length n over \Sigma.
4.1 Unbounded-interval graphs
An unbounded-interval graph is the intersection graph of a family of intervals of infinite length
on the real line. We denote the set of unbounded-interval graphs by U . Unbounded-interval
graphs are studied in [10]. Complements of unbounded-interval graphs are studied in [4].
Example 2 Let I Fig. 2). The
intersection graph of these four intervals is the path P 4 , which is therefore an unbounded-
interval graph.
I 1
I 3
I 2
I 423
I 1 I 2 I 3 I 4
Figure
2: A family of unbounded intervals whose intersection graph is P 4 .
The following characterization of unbounded-interval graphs can be found in [10]:
Theorem 1 For a graph G, the following assertions are equivalent:
(ii) G is triangulated and G is bipartite,
(iii) G has no induced subgraphs isomorphic to K 3 , C 4 , or C 5 ,
RR;RLg.
In (iv), vertices corresponding to intervals unbounded on the left (resp. right) are labelled a
Fig. 2 shows the example
be the word obtained by reversing w and swapping L's and R's. Let
! be a rewrite relation defined by
It turns out that the reflexive-transitive closure of ! in \Sigma n coincides with the equivalence
relation - defined at the beginning of the section. This fact is used in [10] to show that the
number of nonisomorphic n-vertex unbounded-interval graphs is 2
4.2 Threshold graphs
A graph G is called threshold if there is a labelling f of its vertices by nonnegative inte-
gers, and an integer threshold t such that a set X ' V (G) is independent if and only if
t. We denote the set of threshold graphs by T .
Threshold graphs were introduced by Chv'atal and Hammer in [1] where the following
theorem is proved (see also [2], [7]):
Theorem 2 For a graph G, the following assertions are equivalent:
(ii) G has no induced subgraphs isomorphic to C 4 , C 4 , or P 4 ,
are the degrees of the nonisolated vertices of G, is the
set of all vertices of degree y, then x is adjacent to y iff
Here we characterize threshold graphs as 2-letter graphs.
Theorem 3 CSg.
Proof: Consider a word w 2 \Sigma , partitioned into blocks of successive C's and S's:
By changing the last letter
of w if necessary, we can assume that the last nonempty block of w has length at least two.
As both C and S have identical sets of left neighbors in P , such change does not affect G.
Let D i be the set of vertices of G corresponding to the i-th block of S's in w, and Dm\Gammai
the set of vertices corresponding to the i-th block of C's where m is the total number of
nonempty blocks in the subword C q . It is straightforward to verify that:
vertices within D i have identical degree, say distinct
vertices are adjacent iff m. By Theorem 2(iii), G is a threshold
graph.
Conversely, let G be a threshold graph. Partition V (G) into D 0 , D 1 , . , Dm as described
in Theorem 2(iii), and let d
dm=2e . It is straightforward to verify that G
Let ! be a rewrite relation defined by
It is easy to see that the reflexive-transitive closure of ! in \Sigma n coincides with the equivalence
relation - defined at the beginning of the section. From this it follows immediately that the
number of nonisomorphic n-vertex threshold graphs is 2
4.3 An overview of 2-letter graphs
Theorem 4 G
Proof:
Table
1 gives an overview of the possible classes of 2-letter graphs over
induced subgraphs, and their census. As K
the theorem follows. 2
elements elements of number of pairwise nonisomorphic
of P G \Sigma (P) Obs(G \Sigma (P)) n-vertex graphs in G \Sigma (P)
aa
ab U K 3
Table
1: 2-letter graphs (p and q denote nonnegative integers).
Corollary 3 All graphs on four or fewer vertices are 2-letter graphs.
Proof: According to Theorem 2(ii), all graphs on four or fewer vertices except C
and C 4 are threshold graphs. As C 4 the claim follows from
Theorem 4. 2
Corollary 4
Proof: From Theorem 4 and Table 1 it follows that the graphs not in G 2 have at least one
induced subgraph in each of the sets fC g. Checking
all 27 combinations and discarding redundant ones we see that such graphs contain at
least one of the following seven sets of induced subgraphs: fC
g. Thus a minimal forbidden induced subgraph for
G 2 can have at most 3 vertices. 2
Corollary 5 2-letter graphs can be recognized in polynomial time.
Proof: This follows from Theorem 4 because each of the classes T , U , U has a polynomial-time
recognition algorithm. 2
5 Lettericity of some n-vertex graphs
In this section we consider the lettericity of cycles, paths, and perfect matchings. By a
counting argument we show that for large n there are n-vertex graphs whose lettericity
exceeds 0:707 n.
5.1 Cycles
Call an independent set S in C n tight if ng for
some
. If a 2 \Sigma gives rise to an independent set S of size three or
more in G(P; w) then:
(i) S is tight,
(iii) the labels of the two vertices of G(P; w) which have both neighbors in S are distinct.
Proof: (i) Let R be a maximal run of consecutive vertices of C n which are not in S. If
R has two or more vertices then the labels of the two vertices of S adjacent to one of the
endpoints of R must be the leftmost and the rightmost a's in w. Hence there is at most one
such run, meaning that S is tight.
(ii) If S contains more than three vertices, it is tight by (i). W.l.g. assume that 0; 2; 4; 6 2
S. Then in w, the label of 1 (which is adjacent to 0 and 2, but not adjacent to 4 or must
be between the labels of 0; 2 and 4; 6, while the label of 3 must be between the labels of 2; 4
and 0; 6. As this is impossible,
(iii) By (i) and (ii), S is tight and has three vertices. W.l.g. assume that 4g. If
the vertices 1 and 3 are labelled the same, say b, these five vertices correspond to a subword
ababa of w where the left b is the label of 3 and forces ba 2 P, while the right b is the label
of 1 and forces ab 2 P. But then 1 and 3 would have degree three or more. It follows that
vertices 1 and 3 must be labelled differently. 2
Theorem 5 Let
Proof: First we prove that at least b n+4c letters are needed to obtain C n . Let C n
different letters. As n - 4, the largest clique in C n is of size
2. From Lemma 1(ii) it follows that each letter appears at most three times in w. Therefore
dn=3e, so the assertion is
proved. If It remains to show that
in the latter two cases letters do not suffice.
a)
Assume that w is a word consisting of k different letters whose letter graph is C 3k . By Lemma
1(ii), each letter gives rise to an independent set of size three. By Lemma 1(i) and (iii), the
vertices of C 3k must be (cyclically) labelled a 1
1 a 3
k a 2
2 a 3
3 a 3
k a 1
1 where superscripts
distinguish the three occurrences of each letter. It remains to see how these symbols could
be arranged linearly in w.
As a 3
k is adjacent to a 1
1 and a 2
2 is adjacent to a 2
1 and a 3
1 , it follows that a 2
must
be between a 1
1 and a 3
1 in w. W.l.g. assume that the arrangement of these symbols in w is
a 1
1 a 3
1 . By induction on i it can be shown that a 1
precedes a 2
which precedes a 3
in w, and
also that a 1
precedes a 1
Hence a 1
1 precedes all three occurrences of a k in
However, being adjacent to exactly two of the corresponding vertices this is impossible.
As before, assume that w is a word consisting of k different letters whose letter graph is
C . This is only possible if of the letters give rise to an independent set of size
three, and the remaining letter, say a rise to either a clique or an independent set of
size two. In case of a clique, an independent set bordering on it must have the intervening
two vertices labelled the same, contrary to Lemma 1(iii). So a 1 gives rise to an independent
set of size two.
By Lemma 1(i) and (iii), the only possible way to label (cyclically) the vertices of C 3k\Gamma1 is
a 1
1 a 3
k a 1
3 a 3
k a 1
1 where superscripts distinguish different occurrences of each letter.
It remains to see how these symbols could be arranged linearly in w. Similarly as in the case
a) we can establish that a 1
precedes a 2
which precedes a 3
i in w, for
that a 1
precedes a 1
Hence a 1
precedes all three occurrences of a k in w.
However, being adjacent to exactly one of the corresponding vertices this is impossible.
It remains to construct C n using no more than b n+4c letters. We distinguish three cases
w.r.t. n mod 3. In all three cases, the alphabet is
a)
k a 2
k a 3
k\Gamma2 where superscripts are added for
easier reference. Write t
i\Gamma2 . Then it is easy to check that G(P; w) is the cycle
k a 1
k of length 3k + 1.
k a 2
k a 3
k\Gamma3 . As be-
fore,
i\Gamma2 . Then it is easy to check that G(P; w) is the cycle
k a 1
k a 2
of length 3k. For construction is shown in
Fig. 1 (with a
c)
k a 2
k a 3
k\Gamma4 .
Write again t
i\Gamma2 . Then it is easy to check that G(P; w) is the cycle
k a 1
k a 2
k\Gamma2 a 1
k\Gamma2 of length 3k \Gamma 1. 2
5.2 Paths
. If a 2 \Sigma gives rise to an independent set S of size three or
more in G(P; w) then S is of one of the following types:
(a) f1; 3;
(b)
(c)
(d)
Proof: Similar to that of Lemma 1. 2
Theorem 6 b n+1c -
Proof: For the upper bound, we show how to construct P n using no more than b n+4c
letters. We distinguish two cases w.r.t. n mod 3.
a)
a 1
k a 2
k a 3
k\Gamma2 where superscripts are added for easier reference. Write
. Then it is easy to check that G(P; w) is the path t
k a 1
k of
length 3k + 1.
By Theorem 5, C n+1 can be constructed using letters. The same then goes for P n as
it is an induced subgraph of C n+1 .
For the lower bound, let P n
different letters. Lemma 2
implies that at most one letter can appear four times in w, while the rest can appear three
times at most. Therefore n - 4
Conjecture: If n - 3 then
5.3 Maximum lettericity of n-vertex graphs
Let l(n) denote the maximum lettericity of an n-vertex graph. Clearly,
2. As l(G) - z(G), the maximum cochromatic number of an n-vertex graph
(which is known to be of order n= log n [8]) constitutes a lower bound for l(n). But this is a
poor bound: we have seen that the lettericity of paths and cycles on n vertices is about n=3
which is much larger than n= log n when n is large. It is also easy to see that
and n=2. By a counting argument we now improve
this bound to l(n) ? 0:707 n, provided that n is large enough.
Theorem 7 For each ff !
there is an N such that for all n ? N there are n-vertex
graphs G with l(G) ? ff n.
Proof: Assume that l(G) - ff n for all graphs G on n vertices. Write
our assumption, all graphs on n vertices are k-letter graphs. There are 2 ( n) labelled graphs
on n vertices. Over a k-letter alphabet, there are k 2 pairs of letters, 2 k 2
sets of pairs of
letters, k n words of length n, and at most n! possible labellings of a graph on n vertices,
hence there are no more than n!
labelled k-letter graphs on n vertices. Therefore
Taking base 2 logarithms we have
n:
this is impossible when n is large. 2
As for a simple upper bound, Proposition 3(i) implies that l(n) - 2. It
is also not difficult to see that l(n) -
6 k-letter graphs and well-quasi-order
By deleting a vertex the lettericity of a graph can decrease by more than one: for example,
2. We need an upper bound on the extent of this decrease.
Proof: Let
g. Let a i 1
be the labels of the neighbors of v in w. Take \Sigma
are new symbols, and P
l ; a 0
l ; a j a l 2
rg. Denote by w 0 the word obtained from w by replacing the labels a i
of the neighbors of v by a 0
Theorem 8 The class G k of k-letter graphs is well-quasi-ordered by the induced subgraph
relation.
Proof: Fix an alphabet \Sigma of cardinality k and a set of pairs P ' \Sigma 2 . By Higman's Lemma
[9, Thm. 4.4], \Sigma is well-quasi-ordered by the (not necessarily contiguous) subword relation.
Clearly, z is a subword of w if and only if G(P; z) is an induced subgraph of G(P; w), hence
G \Sigma (P) is well-quasi-ordered by the induced subgraph relation. As G k is a union of finitely
many sets of the form G \Sigma (P) (one for each of the 2 k 2
possible P's) the conclusion follows. 2
Theorem 9 The sets of obstructions Obs(G \Sigma (P)) and Obs(G k ) are finite.
As Obs(G \Sigma (P)) is an antichain,
Theorem 8 implies that it is finite.
Finiteness of Obs(G k ) is proved in the same way. 2
Corollary 6 The graphs from G \Sigma (P) and G k are recognizable in polynomial time.
Proof: The relation H - i G is decidable in time O(n m ) where
jV (H)j. For fixed H this is polynomial in n. Thus by Theorem 9, checking that H 6- i G for
is given is a polynomial-time recognition
algorithm for G \Sigma (P) (resp. G k ). 2
Note that the proof of Corollary 6 is nonconstructive as the specification of the algorithm
given there is incomplete: the finite sets of obstructions for G \Sigma (P) and G k that are used by
the algorithm are, in general, unknown.
7 Conclusion
We conclude by listing some open problems.
Problem 1. Design efficient algorithms to recognize k-letter graphs for small fixed values
of k.
Problem 2. What is the time complexity of finding the lettericity of a given graph?
Problem 3. Find the maximal possible lettericity of an n-vertex graph, and the corresponding
extremal graphs.
Acknowledgements
The author is indebted to Bojan Mohar and Toma-z Pisanski for helping out with this paper
(in particular, Toma-z suggested that k-letter graphs should be recognizable in polynomial
time, and Bojan pointed out Theorem 7). He also wishes to thank the referees for their
careful reading of the paper and valuable suggestions.
--R
Aggregation of inequalities in integer programming
Induced subgraphs and well-quasi-ordering
Covering the edges with consecutive sets
Subgraphs and well-quasi-ordering
Stable sets versus independent sets
Algorithmic Graph Theory and Perfect Graphs
Some extremal results in cochromatic and dichromatic theory
Ordering by divisibility in abstract algebras
Intersection graphs of halflines and halfplanes
The theory of well-quasi-ordering: a frequently discovered concept
--TR
Intersection graphs of halflines and halfplanes
Induced subgraphs and well-quasi-ordering
Subgraphs and well-quasi-ordering
Stable sets versus independent sets | induced subgraph relation;lettericity;well-quasi-order |
606695 | Stability Analysis of Second-Order Switched Homogeneous Systems. | We study the stability of second-order switched homogeneous systems. Using the concept of generalized first integrals we explicitly characterize the "most destabilizing" switching-law and construct a Lyapunov function that yields an easily verifiable, necessary and sufficient condition for asymptotic stability. Using the duality between stability analysis and control synthesis, this also leads to a novel algorithm for designing a stabilizing switching controller. | Introduction
. We consider the switched homogeneous system:
_
are homogeneous functions (with equal
degree of homogeneity), and Co denotes the convex hull. An important special case
reduces to a switched linear system.
Switched systems appear in many elds of science ranging from economics to
electrical and mechanical engineering [15][18]. In particular, switched linear systems
were studied in the literature under various names, e.g., polytopic linear dieren-
tial inclusions [4], linear polysystems [6], bilinear systems [5], and uncertain linear
systems [20].
is an equilibrium point of (1.1). Analyzing the
stability of this equilibrium point is di-cult because the system admits innitely
many solutions for every initial value 1 .
Stability analysis of switched linear systems can be traced back to the 1940's
since it is closely related to the well-known absolute stability problem [4][19]. Current
approaches to stability analysis include (i) deriving su-cient but not necessary
and su-cient stability conditions, and (ii) deriving necessary and su-cient stability
conditions for the particular case of low-order systems. Popov's criterion, the circle
criterion [19, Chapter 5] and the positive-real lemma [4, Chapter 2] can all be considered
as examples of the rst approach. Many other su-cient conditions exist in
the literature 2 . Nevertheless, these conditions are su-cient but not necessary and
su-cient and are known to be rather conservative conditions.
Far more general results were derived for the second approach, namely, the particular
case of low-order linear switched systems. The basic idea is to single out the
\most unstable" solution ~
x(t) of (1.1), that is, a solution with the following property:
If ~ x(t) converges to the origin then so do all the solutions of (1.1). Then, all that is
left to analyze is the stability of this single solution (see, e.g., [3]).
Department of Theoretical Mathematics, Weizmann Institute of Science, Rehovot, Israel 76100.
(holcman@wisdom.weizmann.ac.il).
y (Corresponding author) Department of Electrical Engineering-Systems, Tel Aviv University, Israel
69978. (michaelm@eng.tau.ac.il).
An analysis of the computational complexity of some closely related problems can be found
in [2]
2 See, for example, the recent survey paper by Liberzon and Morse [11]
D. HOLCMAN AND M. MARGALIOT
Pyatnitskiy and Rapoport [16][17] were the rst to formulate the problem of nd-
ing the \most unstable" solution of (1.1) using a variartional approach. Applying the
maximum principle, they developed a characterization of this solution in terms of a
two-point boundary value problem. Their characterization is not explicit but, never-
theless, using tools from convex analysis they proved the following result. Let be the
collection of all the q-sets of linear functions fA xg for which (1.1) is asymptotically
stable, and denote the boundary 3 of by @. Pyatnitskiy and Rapoport
proved that if fA 1 then the \most unstable" solution of (1.1) is a
closed trajectory. Intuitively, this can be explained as follows. If fAx; Bxg 2 then,
by the denition of , ~
x(t) converges to the origin; if fAx; Bxg
is unbounded. Between these two extremes, that is, when fAx; Bxg 2 @, ~
x(t) is a
closed solution. This leads to a necessary and su-cient stability condition for second
and third-order switched linear systems [16][17], however, the condition is a nonlinear
equation in several unknowns and, since solving this equation turns out to be di-cult,
it cannot be used in practice.
Margaliot and Langholz [14] introduced the novel concept of generalized rst
integrals and used it to provide a dierent characterization of the closed trajectory.
Unlike Pyatnitskiy and Rapoport, the characterization is constructive and leads, for
second-order switched linear systems, to an easily veriable necessary and su-cient
stability condition. Furthermore, their approach yields an explicit Lyapunov function
for switched linear systems.
In the general homogeneous case, the functions f i () are nonlinear functions, and
therefore, the approaches used for switched linear systems cannot be applied. Filippov
derived a necessary and su-cient stability condition for second-order switched
homogeneous systems. However, his proof uses a Lyapunov function that is not constructed
explicitly.
In this paper we combine Fillipov's approach with the approach developed by
Margaliot and Langholz to provide a necessary and su-cient condition for asymptotic
stability of second-order switched homogeneous systems. We construct a suitable
explicit Lyapunov function and derive a condition that is easy to check in practice.
A closely related problem is the stabilization of a several unstable systems using
switching. This problem has recently regained new interest with the discovery that
there are systems that can be stabilized by hybrid controllers whereas they cannot
be stabilized by continuous state-feedback [18, Chapter 6]. To analyze the stability
of (1.1), we synthesize the \most unstable" solution ~
x(t) by switching between several
asymptotically stable systems. Designing a switching controller is equivalent to synthesizing
the \most stable" solution by switching between several unstable systems.
These problems are dual and, therefore, a solution of the rst is also a solution of the
second. Consequently, we use our stability analysis to develop a novel procedure for
designing a stabilizing switching controller for second-order homogeneous systems.
The rest of this paper is organized as follows. Section 2 includes some notations
and assumptions. Section 3 develops the generalized rst integral which will serve as
our main analysis tool. Section 4 analyzes the sets and @. Section 5 provides an
explicit characterization of the \most destabilizing" switching-law. Section 6 presents
an easily veriable necessary and su-cient stability condition. Section 7 describes a
new algorithm for designing a switching controller. The nal section summarizes.
3 The set is open [17]
SECOND-ORDER SWITCHED HOMOGENEOUS SYSTEMS 3
2. Notations and Assumptions. For > 1, let
that is, the set of homogeneous functions of degree . We denote by E the set of
Consider the system _
. Transforming to
polar coordinates
we get
_
where R() and A() are homogeneous functions of degree +1 in the variables cos()
and sin().
Following ([9], Chapter III), we analyze the stability of (2.1) by considering two
cases. If A() has no zeros then the origin is a focus and (2.1) yields:
R
is periodic in with period 2, and h :=2
R 2R(u)
If A has zeros say, then the line = is a solution of (2.1) (the origin
is a node) and along this line r(t)
Hence, if ES := ff 2
asymptotically stableg, then ES
ES N
, where 4
has no zeros and sgn(h) 6= sgn(A)g
ES N
such that
Given f its dierential at x by (Df)(x) :=
The dierential's norm is jj(Df)(x)jj := sup h2R
vector norm on R 2 . The distance between two functions f
is dened by [10]:
(jjf
Note that (E ; d(; )) is a Banach space and that in the topology induced by d(; )
the set ES is open.
For simplicity 5 , we consider the dierential inclusion (1.1) with 2:
_
Given an initial condition x 0 , a solution of (2.3) is an absolutely continuous
function x(t), with almost all t. Clearly, there is
4 Here F stands for focus and N for node
5 Our results can be easily generalized to the case q > 2
4 D. HOLCMAN AND M. MARGALIOT
an innite number of solutions for any initial condition. To dierentiate the possible
solutions we use the concept of a switching-law.
Definition 2.1. A switching-law is a piecewise constant function : [0;
[0; 1]. We refer to the solution of _
as the solution
corresponding to the switching-law .
The solution x(t) 0 is said to be uniformly 6 locally asymptotically stable if:
Given any > 0, there exists -() > 0 such that every solution of (2.3) with
There exists c > 0 such that every solution of (2.3) satises lim
Since f and g are homogeneous, local asymptotic stability of (2.3) implies global
asymptotic stability. Hence, when the above conditions hold, the system is uniformly
globally asymptotically stable (UGAS).
Definition 2.2. A set P R 2 is an invariant set of (2.3) if every solution x(t),
with
Definition 2.3. We will say
that
singular if there
exists an invariant set that does not contain an open neighborhood of the origin.
We assume from here on that
Assumption 1. The
set
(x) is not singular.
The role of Assumption 1 will become clear in the proof of Lemma 5.4 below. Note
that it is easy to check if the assumption holds by transforming the two systems _
f (x) and _
to polar coordinates and examining the set of points where _
for each system. For example, if there exists a line l that is an invariant set for both
_
l is an invariant set of (2.3) and Assumption 1 does not
hold.
To make the stability analysis nontrivial, we also assume
Assumption 2. For any xed 2 [0; 1], the origin is a globally asymptotically
stable equilibrium point of _
3. The Generalized First Integral. If the system
_
is Hamiltonian [8] then it admits a classical rst integral, that is, a function H(x)
which satises H(x(t)) H(x(0)) along the trajectories of (3.1). In this case, the
study of (3.1) is greatly simplied since its trajectories are nothing but the contours
const. In particular, it turns out that the rst integral provides a
crucial analysis tool for switched linear systems [14]. The purpose of this section is to
extend this idea to the case where f 2 ES and, therefore, (3.1) is not Hamiltonian.
dv
dx2
are both homogeneous functions of degree and, therefore,
the ratio f2 (x1 ;x2 )
is a function of v only which we denote by (v). Hence, along the
trajectories of (3.1): dv
x1 , that is, e
R dv
const. Thus, we dene
6 The term \uniform" is used here to describe uniformity with respect to switching signals
SECOND-ORDER SWITCHED HOMOGENEOUS SYSTEMS 5
the generalized rst integral of (3.1) by
2k
where L(v) :=
R dv
and k is a positive integer. Note that we can write
substituting
Let S be the collection of points where H(x not dened or not continuous
then, by construction, H along the trajectories
of (3.1). If classical rst integral of the system. In general, how-
ever, S 6= ;. Nevertheless, this does not imply that H cannot be used in the analysis
of (3.1). Consider, for example, the case where S is a line and a trajectory x(t) of (3.1)
can cross S but not stay on S. Then, H(x(t)) will remain constant except perhaps
at a crossing time 7 where its value can \jump". Thus, a trajectory of the system is
a concatenation of several contours of H . This motivates the term generalized rst
integral.
To clarify the relationship between the trajectories of _
and the contours
const, we consider an example.
Example 1. Consider the system
_
_
Here (3.2) yields
2k
, and using
In this case 0g. It
is easy to verify that l 1 is an invariant set of (3.3), that is, x(t) \ l
the trivial trajectory that starts and stays on l 1 ). Furthermore, it is easy to see that
a trajectory of (3.3) cannot stay on the line l 2 .
Fig. 3.1 shows the trajectory x(t) of (3.3) for x Fig. 3.2 displays H(x(t))
as a function of time. It may be seen that H(x(t)) is a piecewise constant function
that attains two values. Note that the \jump" in H(x(t)) occurs when x
is, when x(t) 2 S.
4. The Boundary of Stability. Let be the set of all pairs (f ; g) for which (2.3)
is UGAS. In this section we study and its boundary @. Our rst result, whose
proof is given in the appendix, is an inverse Lyapunov theorem.
Lemma 4.1. If (f ; g) 2 , then there exists a C 1 positive-denite function
such that for all x
Furthermore, V (x) is positively homogeneous of degree one 8 .
Lemma 4.2. is an open cone.
Proof. Let (f ; g) 2 . Clearly, (cf ; cg) 2 for all c > 0. Hence, is a cone.
7 That is, a time t 0
such that x(t 0
8 That is,
6 D. HOLCMAN AND M. MARGALIOT
Fig. 3.1. The trajectory of (3.3) for x 0
Fig. 3.2. H(x(t)) as a function of time.
To prove that is open, we use the common Lyapunov function V from Lemma 4.1.
Denote
is a closed curve encircling the origin. Hence, there
exists a < 0 such that for all x 2
rV (x)f(x) < a and rV (x)g(x) < a
If ~ f 2 ES and ~
are such that
su-ciently small, then for all x 2
It follows from the homogeneity of V , ~ f , and ~
g that ( ~ f ; ~
5. The Worst-Case Switching Law. In this section we provide two explicit
characterizations of the switching-law that yields the \most unstable" solution of (2.3).
SECOND-ORDER SWITCHED HOMOGENEOUS SYSTEMS 7
be the generalized rst integral of _
Definition 5.1. Dene the worst-case switching-law (WCSL) by:
We denote
so the solution corresponding to WCSL satises _
h(x). Note that WCSL is a
state-dependent switching-law and that since
or respectively, that is, the vertices
of
. Furthermore, it is easy to see
that h(x) is homogeneous of degree .
Intuitively, WCSL can be explained as follows. Consider a point x where f(x)
and g(x) are as shown in Fig. 5.1. A solution of ẋ = f(x) follows the contour H_f(x) = const, whereas a solution of ẋ = g(x) follows a contour going further away from
the origin. In this case, ∇H_f(x)g(x) > 0, so WCSL selects g, which corresponds
to setting ẋ = g(x); this "pushes" the trajectory away from the origin as
much as possible.
Fig. 5.1. Geometrical explanation of WCSL when rH f (x)g(x) > 0
Note that the definition of WCSL using (5.1) is meaningful only for x ∉ S,
since ∇H_f(x) is not defined for x ∈ S. However, extending the definition of WCSL to
any x ∈ R² is immediate, since x ∈ S implies one of two cases. In the first case, x ∈ l,
where l is a line in R² which is an invariant set of ẋ = f(x) (along which the trajectory approaches the origin, since f is asymptotically stable), so clearly WCSL must use g. In the second
case, a trajectory can only cross S at x, so the value of the switching-law on the
single point x can be chosen arbitrarily.
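The selection rule above is easy to prototype. The following sketch simulates a trajectory under the rule "use g when ∇H_f(x)·g(x) > 0, otherwise use f"; the fields f, g and the first integral H_f in the sketch are illustrative placeholders only and are not the systems studied in this paper.

```python
# A minimal sketch of the worst-case selection rule described above: at a state x
# (off the set S), use g when grad(H_f)(x) . g(x) > 0 and use f otherwise.
# f, g and H_f below are illustrative placeholders; replace them by the homogeneous
# fields of interest and by the generalized first integral of x' = f(x).
import numpy as np

def f(x):                      # placeholder field; H_f below is a genuine first integral of it
    return np.array([x[1], -x[0]])

def g(x):                      # placeholder second field
    return np.array([x[1], x[0]])

def grad_Hf(x):                # gradient of H_f(x) = x1^2 + x2^2
    return np.array([2.0 * x[0], 2.0 * x[1]])

def wcsl_field(x):
    """Right-hand side h(x) of the switched trajectory x' = h(x)."""
    return g(x) if grad_Hf(x) @ g(x) > 0.0 else f(x)

def simulate(x0, dt=1e-3, steps=20000):
    """Forward-Euler integration (accuracy is not the point of this sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * wcsl_field(x)
    return x

if __name__ == "__main__":
    x_end = simulate([1.0, 0.0])
    print("initial radius: 1.0, final radius:", np.linalg.norm(x_end))
```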
We expect WCSL to remain unchanged if we swap the roles of f and g. Indeed,
this is guaranteed by the following lemma whose proof is given in the Appendix.
Lemma 5.2. For all x 2 D
where sgn() is the sign function.
We can now state the main result of this section.
Theorem 5.3. (f ; g) 2 @ if and only if the solution corresponding to WCSL is
closed 9 .
Proof. Denote the solution corresponding to WCSL by x(t) and suppose that x(t)
is closed. Let
be the closed curve is the smallest
time such that x(T that using the explicit construction of (x)
(see (5.1)) we can easily dene
explicitly as a concatenation of several contours
of H f (x) and H g (x). Note also that the switching between _
takes place at points x where rH f (see (5.1)), that is, when g(x) and f (x)
are collinear. Hence, the curve
has no corners.
We define the function V(x) so that its contours are obtained by scaling the closed curve defined above (see [1]). The function V(x) is
positively homogeneous (that is, for any c ≥ 0, V(cx) = cV(x)), radially unbounded,
and differentiable on R² \ {0}.
Note that both f (x) and g (x) belong to E . We use V (x) to analyze the stability
of the perturbed system _
(x)g. Consider the derivative of V
along the trajectories of _
_
t. If at some x, V (x) corresponds to a contour H f
then rV and, by the denition of WCSL (see (5.1)), rV (x)g(x) 0
so _
corresponds to a contour H g
so rV
any < 0 we have
_
since this holds for for all x and all (t) 2 [0; 1], we get that for <
0:
On the other hand, for > 0 and
_
since this holds for all x, _
x (x) admits an unbounded solution for > 0. The
derivations above hold for arbitrarily small and,
therefore,
For the opposite direction, assume that (f ; g) 2 @, and let x(t) be the solution
corresponding to WCSL, that is, x(t) satises _
To prove that x(t) is a closed trajectory we use the following Lemma, whose proof
appears in the Appendix.
9 We omit specifying the initial condition, because the fact that h(x) is homogeneous implies
that, if the solution starting at some x 0
is closed, then all solutions are closed.
Lemma 5.4. If (f ; g) 2 @ then the solution corresponding to WCSL rotates
around the origin.
Thus, for a given x 0 6= 0, there exists t 1 > 0 such that x(t), with
since h(x) is homogeneous, we get: x(nt 1
We consider two cases. If c > 1 then x(t) is unbounded and using the
homogeneity of h(x) we conclude that 0 is a (spiral) source. It follows from the
theory of structural stability (see, e.g., [10]) that there exists an > 0 such that for
all
g) with the origin is a source of the perturbed
dynamical system _
This implies that (f ; g) 62 @ which
is a contradiction.
If c < 1 then x(t) converges to the origin and, by the construction of WCSL, so
does any other solution, so (f ; g) 2 which is again a contradiction. Hence,
that is, x(t) is closed.
The characterization of WCSL using the generalized first integrals leads to a
simple and constructive proof of Theorem 5.3. However, to actually check whether
the solution corresponding to WCSL is closed, a characterization of WCSL in polar
coordinates is more suitable.
Representing (2.3) in polar coordinates, we get
_
r
_
cos sin
sin
r
cos
r
If (f, g) lies on the boundary of stability, then WCSL yields a closed solution. By using the transformation θ → −θ (if
necessary) we may always assume that this solution rotates around the
origin in a counter clockwise direction, that is, θ̇ > 0. Note
that this implies that if at some point x the trajectories of one of the systems are in
the clockwise direction, then WCSL will use the second system. Hence, determining
WCSL is non-trivial only at points where the trajectories of both systems rotate in the
same direction, and we assume from here on that both rotate in a counter clockwise direction
(note that this explains why in Lemma 5.2 it is enough to consider x ∈ D).
so F is a parameterization of the set of directions
in
for which _
> 0.
For any (r; ) we dene the switching-law
_
r
_
that is, is the switching-law that selects, among all the directions which yield _
> 0,
the direction that maximizes d ln r
d . Using (5.4), we get
(cos sin )j (r; )
sin cos )j (r; )
Let
(cos sin )j (r; )
sin cos )j (r; )
so that along the trajectory corresponding to : 1
r _
m. Note that since f and g
are homogeneous,
It is easy to verify that the function q(y) := ay+b(1 y)
(where c and
d are such that the denominator is never zero) is monotonic and, therefore, (r; )
in (5.6) is always 0 or 1 and m(r; ) in (5.7) is always one of the two values:
(cos sin
sin cos
(cos sin
sin cos
respectively.
The next lemma, whose proof is given in the Appendix, shows that the switching-
law is just the worst-case switching-law .
Lemma 5.5. The switching-law yields a closed solution if and only if yields
a closed solution.
Let
I :=
Z 2m()d
d
d
where (r(t); (t)) is the solution corresponding to the switching-law , and T is the
time needed to complete a rotation around the origin. This solution is closed if and
only ln(r(T Combining this with Lemma 5.5 and Theorem 5.3 we
immediately obtain
Theorem 5.6. (f ; g) 2 @ if and only if I = 0.
It is easy to calculate I numerically and, therefore, Theorem 5.6 provides us with
a simple recipe for determining whether (f, g) lies on the boundary of stability. However, note that we assumed
throughout that the closed solution of the system rotates in a counter clockwise direction. Thus, to use Theorem 5.6 correctly, I has to be computed twice: first for
the original system, and then for the transformed system (denote the corresponding
value by I′). Then (f, g) lies on the boundary of stability if and only if max(I, I′) = 0. In this way, we find whether the
system has a closed trajectory rotating around the origin in a clockwise or counter
clockwise direction.
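The following sketch illustrates this numerical recipe for a pair of planar homogeneous fields evaluated on the unit circle (homogeneity makes this sufficient). The two demonstration fields are placeholders, and the expression (cos θ·p₁ + sin θ·p₂)/(−sin θ·p₁ + cos θ·p₂) for d ln r/dθ is the standard polar-coordinates identity used here in place of the partially reproduced formulas (5.7)-(5.8).

```python
# Numerical sketch of the recipe behind Theorem 5.6.  For a planar homogeneous field p,
#   d(ln r)/d(theta) = (cos(t)*p1 + sin(t)*p2) / (-sin(t)*p1 + cos(t)*p2),
# and the denominator has the sign of theta'.  The worst-case law picks, among the
# fields whose trajectories run counter clockwise (theta' > 0), the one with the largest
# d(ln r)/d(theta); I is the integral of that maximum over one revolution.
# The two fields below are placeholders; replace them by the fields of interest.
import numpy as np

def f_demo(x1, x2):
    return np.array([-x2 - 0.5 * x1, x1 - 0.5 * x2])

def g_demo(x1, x2):
    return np.array([-x2 + 0.3 * x1, x1 + 0.1 * x2])

def dlnr_dtheta(field, t):
    p1, p2 = field(np.cos(t), np.sin(t))
    num = np.cos(t) * p1 + np.sin(t) * p2      # proportional to r'
    den = -np.sin(t) * p1 + np.cos(t) * p2     # proportional to r * theta'
    return num, den

def I_value(fields, n=20000):
    """Approximate I = integral over [0, 2*pi) of the worst-case d(ln r)/d(theta)."""
    ts = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    m = np.empty(n)
    for i, t in enumerate(ts):
        vals = []
        for field in fields:
            num, den = dlnr_dtheta(field, t)
            if den > 0.0:                      # keep only counter clockwise directions
                vals.append(num / den)
        if not vals:
            raise ValueError("no counter clockwise direction at this angle")
        m[i] = max(vals)                       # worst-case (most unstable) choice
    return m.mean() * 2.0 * np.pi

if __name__ == "__main__":
    print("I =", I_value([f_demo, g_demo]))    # the pair is on the boundary iff I = 0
```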
The following example demonstrates the use of Theorem 5.6.
Example 2. (Detecting the boundary of stability).
Consider the system:
_
where
It is easy to verify that f 2 ES N
3 and since g 0
have
. The
problem is to determine the smallest k > 0 such that (f(x); g k (x)) 2 @ .
Transforming to polar coordinates we get:
sin 3 2 cos 3
cos sin 2
cos
Fig. 5.2. I(k) as a function of k
so
sin 3 2 cos 3
cos sin 2
cos
and
(cos sin )j (r; )
sin cos )j (r; )
d
where F () includes 0 if ( sin cos )j 0
Note that although j is a function of both r and , the integrand in (5.11) is a
function of (and k) but not of r.
We calculated I(k) numerically for different values of k. The results are shown
in Fig. 5.2. The value k* for which I(k*) = 0 is k* ≈ 1.3439 (to four-digit accuracy), and it may be seen that I(k) < 0 for k < k*.
We repeated the computation for the transformed system
and found that there exists no closed solution rotating around the origin in a clockwise
direction. Hence, the system (5.9) and (5.10) is UGAS for all k ∈ [0, k*) and unstable
for all k > k*.
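Locating k* amounts to a one-dimensional root search on k ↦ I(k). The sketch below shows such a search; the function I_of_k is a synthetic stand-in that the reader should replace by the numerical computation of I (as in the previous sketch) for the family (5.10), whose explicit form is not reproduced here.

```python
# Grid scan followed by bisection for the smallest root of I(k) = 0.  I_of_k below is a
# synthetic placeholder (root at 1.3439); plug in the actual computation of I for (f, g_k).
import numpy as np

def I_of_k(k):
    """Stand-in for the numerical computation of I for the pair (f, g_k)."""
    return k - 1.3439

def find_k_star(I_of_k, k_lo=0.0, k_hi=10.0, tol=1e-6):
    for k in np.linspace(k_lo, k_hi, 200):      # coarse scan for a sign change
        if I_of_k(k) > 0.0:
            k_hi = k
            break
        k_lo = k
    while k_hi - k_lo > tol:                    # refine by bisection
        mid = 0.5 * (k_lo + k_hi)
        if I_of_k(mid) > 0.0:
            k_hi = mid
        else:
            k_lo = mid
    return 0.5 * (k_lo + k_hi)

if __name__ == "__main__":
    print("k* is approximately", find_k_star(I_of_k))
```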
The WCSL (see (5.6)) for
Fig. 5.3 depicts the solution of the system given by (5.9) and (5.10) with
1:3439, WCSL (5.12), and x It may be seen that the solution is a closed
trajectory, as expected. Note that this trajectory is not convex which implies that the
Lyapunov function used in the proof of Theorem 5.3 (see (5.3)) is not convex. This
Fig. 5.3. The solution of (5.9) and (5.10) for
is a phenomenon that is unique to nonlinear systems. For switched linear systems the
closed trajectory is convex and, therefore, so is the Lyapunov function V that yields
a su-cient and necessary stability condition [14].
6. Stability Analysis. In this section we transform the original problem of
analyzing the stability of (2.3) into that of detecting the boundary of stability. The
latter problem was solved in Section 5.
Given
a new homogeneous function h k (x) with the
following properties: (1) h 0
g. One possible example that satises
the above is:
Consider the switched homogeneous system
_
The absolute stability problem is: find the smallest k > 0, when it exists, such
that
Noting
that
and
k1
k2 for all k 1 < k 2 , we immediately obtain the following result.
Lemma 6.1. The system (2.3) is UGAS if and only if k > 1.
Thus, we can always transform the problem of analyzing the stability of a switched
dynamical system to an absolute stability problem. We already know how to solve
the latter problem for second-order homogeneous systems. To illustrate this consider
the following example.
Example 3. Consider the system (2.3)
It is easy to verify that f (x) and g(x) belong to ES 3 and that both Assumptions 1
and 2 are satised.
To analyze the stability of the system we use Lemma 6.1. Defining h_k as in Section 6,
we must find the smallest k* such that (f, h_{k*}) lies on the boundary of stability. We already calculated k* in
Example 2 and found that k* ≈ 1.3439 > 1. Hence, the system (2.3) with f and g
given in (6.2) is UGAS.
7. Designing a Switching Controller. In this section we employ our results to
derive an algorithm for designing a switching controller for stabilizing homogeneous
systems. To be concrete, we focus on linear systems rather than on the general
homogeneous case. Hence, consider the system
_
where K 1 and K 2 are given matrices that represent constraints 10 on the possible
controls. We would like to design a stabilizing state-feedback controller
that satisfies the constraint u(t) ∈ U for all t.
We assume that for any fixed admissible matrix K, the corresponding closed-loop matrix
is strictly unstable and, therefore, a linear controller cannot stabilize the
system. However, it is still possible that a switching controller will stabilize the system,
and designing such a controller (if one exists) is the purpose of this section.
Roughly speaking, we are trying to find a switching-law that yields an asymptotically
stable solution of the closed-loop system even though each matrix in the constraint set is strictly unstable. Using the time-reversal transformation t → −t, we see that such a solution
exists if and only if this switching-law yields an unstable solution of the time-reversed system.
Clearly, every matrix of the time-reversed system is asymptotically stable.
Hence, we obtain the main result of this section.
Theorem 7.1. Let be WCSL for the system _
x be the corresponding solution. There exists a switching controller that
asymptotically stabilizes (7.1) if and only if ~
x is unbounded and, in this case,
stabilizing controller.
Note that Theorem 7.1 provides an algorithm for designing a stabilizing switching
controller whenever such a controller exists. We already solved the problem of
analyzing ~
x for second-order systems.
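The following sketch illustrates the kind of state-dependent switching controller whose existence Theorem 7.1 characterizes. It uses a simplified greedy rule (at each state, use the gain whose closed-loop field currently decreases the radius fastest) rather than the WCSL construction itself, and the two closed-loop matrices are illustrative placeholders, each strictly unstable on its own.

```python
# A simplified stand-in for the switching controller discussed above: among the
# closed-loop matrices A + B*K_i (given directly here as M1, M2, both strictly unstable),
# use at each state x the one that minimizes the instantaneous radial growth x^T M_i x.
# This greedy rule is NOT the WCSL construction of Section 5; it only conveys how
# switching between unstable closed loops can drive the state to the origin.
import numpy as np

M1 = np.array([[0.1, -1.0], [10.0, 0.1]])    # eigenvalues 0.1 +/- i*sqrt(10): unstable
M2 = np.array([[0.1, -10.0], [1.0, 0.1]])    # eigenvalues 0.1 +/- i*sqrt(10): unstable

def sigma(x):
    """Index of the closed-loop field with the smallest instantaneous radial growth."""
    return int(np.argmin([x @ (M @ x) for M in (M1, M2)]))

def simulate(matrices_rule, x0, dt=1e-3, steps=20000):
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * (matrices_rule(x) @ x)
    return x

if __name__ == "__main__":
    x_sw = simulate(lambda x: (M1, M2)[sigma(x)], [1.0, 1.0])
    x_m1 = simulate(lambda x: M1, [1.0, 1.0])
    print("norm under switching control:", np.linalg.norm(x_sw))
    print("norm under M1 alone (unstable):", np.linalg.norm(x_m1))
```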
Example 4. (Designing a stabilizing switching controller).
Consider the system (7.1) with
is a constant. It is easy to verify that for any fixed admissible K the closed-loop matrix is strictly
unstable and, therefore, no linear controller can stabilize
the system. Therefore, we design a switching controller. By Theorem 7.1 we must
analyze the stability of the switched system (6.1) with
x:
example, by the physical limitations of the actuators
Transforming _
to polar coordinates we get:
_
r
_
(cos sin )r sin
whereas _
becomes:
_
r
_
sin )r sin
Clearly, the solutions of both these systems always rotate in a counter clockwise direction
> 0 for all ) and, therefore, for all , we have
where
(cos sin ) sin
It is easily veried that m 1 only if tan 0. Hence, WCSL is:
and
Z
Z 3=2
Computing numerically, we find that the value of k for which I = 0 is k* ≈ 6.98513.
Hence, there exists a switching controller that asymptotically stabilizes (7.1) and (7.2)
if and only if k > 6.98513, and (7.3)
is a stabilizing controller.
Fig. 7.1 depicts the trajectory of the closed-loop system given by (7.1) and (7.2)
with the switching controller (7.3), and x As we can see, the
system is indeed asymptotically stable.
8. Summary. We presented a new approach to stability analysis of second-order
switched homogeneous systems based on the idea of generalized first integrals. Our
approach leads to an explicit Lyapunov function that provides an easily verifiable
necessary and sufficient stability condition.
Using our stability analysis, we designed a novel algorithm for constructing a
switching controller for stabilizing second-order homogeneous systems. The algorithm
determines whether the system can be stabilized using switching, and if the answer
is affirmative, outputs a suitable controller.
Interesting directions for further research include the complete characterization
of the boundary of stability and the study of higher-order switched homogeneous
systems.
Acknowledgments. We thank the anonymous reviewers for many helpful comments.
Fig. 7.1. Trajectory of the closed-loop system with the switching controller with x 0
Appendix
.
Proof of Lemma 4.1. The existence of a common Lyapunov function V 0 (x)
follows from Theorem 3.1 in [13] (see also [12]). However, V 0 is not necessarily homo-
geneous. Denote
is a closed curve encircling the origin. We
dene a new function V (x) by V
that is, the contours of V are obtained by scaling
(see [1]). V (x) is dierentiable
on R 2 n f0g, positively homogeneous of order one, and radially unbounded.
For any x 2
we have rV using the homogeneity
of V (x) and f (x) this holds for any x 2 R 2 n f0g. Similarly, rV (x)g(x) < 0 for
all x 2 R 2 n f0g.
Proof of Lemma 5.2. Let
and
. These two
vectors form an orthonormal basis of R 2 and, therefore,
and (rH g (x))
For any x 2 D we have a 1 > 0 and since rH f (x) (rH g (x)) is orthogonal to f (x)
(g(x)), we also have b 2 > 0. Substituting in (8.1) yields sgn(a 2 which is
just (5.2).
Proof of Lemma 5.4. The system _
homogeneous and we can represent
it in polar coordinates as in Eq. (2.1). If
the solution corresponding to WCSL follows a line l through the origin, then the
solution follows the line l to the origin. However, by the definition of WCSL this is
only possible if the solutions of both ẋ = f(x) and ẋ = g(x)
coincide with the line l.
Thus, the line l is an invariant set of the system which is a contradiction to Assumption
1. If R() 0, then we get a contradiction to Assumption 2. Hence, A() 6= 0
for all 2 [0; 2] and, therefore, there exists c > 0 such that A() > c or A() < c
for all 2 [0; 2]. Thus, the solution rotates around the origin.
Proof of Lemma 5.5. Suppose that WCSL yields a closed trajectory ~ x(t) that
rotates around the origin in a counter clockwise direction ( _
> 0). Assume that at
some point x along this trajectory,
Note that by the denition of the generalized rst integral: rH f
any x 2 R 2 n S. This implies that rH f
so (8.2) yields
Let be the polar coordinates of x. Since ~
rotates around the origin in a
counter clockwise direction, and satises _
at x, we have ( sin cos
on the other hand, ( sin cos then by the denition of (see
Eq. (5.6)), only if:
(cos sin
sin cos
(cos sin
sin cos
Simplifying, we see that (8.4) is equivalent to f 1
which is just Eq. (8.3), hence, we proved that
and only if
--R
Set invariance in control
A survey of computational complexity results in systems and control
Stability of planar switched systems: The linear single input case
Linear Matrix Inequalities in System and Control Theory
A converse Lyapunov theorem for a class of dynamical systems which undergo switching
Stability conditions in homogeneous systems with arbitrary regime switching
Stability of Motion
Basic problems in stability and design of switched systems
A smooth converse Lyapunov theorem for robust stability
A converse Lyapunov theorem for nonlinear switched systems
Necessary and su-cient conditions for absolute stability: The case of second-order systems
Control Using Logic-Based Switching
Criteria of asymptotic stability of di
An Introduction to Hybrid Dynamical Systems
Nonlinear Systems Analysis
Nonquadratic Lyapunov functions for robust stability analysis of linear uncertain systems
--TR | absolute stability;robust stability;hybrid control;hybrid systems;switched linear systems |
606706 | Computing the Smoothness Exponent of a Symmetric Multivariate Refinable Function. | Smoothness and symmetry are two important properties of a refinable function. It is known that the Sobolev smoothness exponent of a refinable function can be estimated by computing the spectral radius of a certain finite matrix which is generated from a mask. However, the increase of dimension and the support of a mask tremendously increase the size of the matrix and therefore make the computation very expensive. In this paper, we shall present a simple and efficient algorithm for the numerical computation of the smoothness exponent of a symmetric refinable function with a general dilation matrix. By taking into account the symmetry of a refinable function, our algorithm greatly reduces the size of the matrix and enables us to numerically compute the Sobolev smoothness exponents of a large class of symmetric refinable functions. Step-by-step numerically stable algorithms are given. To illustrate our results by performing some numerical experiments, we construct a family of dyadic interpolatory masks in any dimension, and we compute the smoothness exponents of their refinable functions in dimension three. Several examples will also be presented for computing smoothness exponents of symmetric refinable functions on the quincunx lattice and on the hexagonal lattice. | Introduction
. A d × d integer matrix M is called a dilation matrix if the
condition lim_{n→∞} M^{−n} = 0 holds. A dilation matrix M is isotropic if all the eigenvalues
of M have the same modulus. We say that a is a mask on Z^d if a is a finitely
supported sequence on Z^d such that
Σ_{β∈Z^d} a(β) = 1. Wavelets are derived from
refinable functions via a standard multiresolution technique. A refinable function
is a solution to the following refinement equation
φ(x) = |det M| Σ_{β∈Z^d} a(β) φ(Mx − β),    (1.1)
where a is a mask and M is a dilation matrix. For a mask a on Z^d and a d×d dilation
matrix M, it is known ([2]) that there exists a unique compactly supported distributional
solution, denoted by φ^M_a throughout the paper, to the refinement equation
(1.1) such that φ̂^M_a(0) = 1, where the Fourier transform of f ∈ L₁(R^d) is defined to
be
f̂(ξ) := ∫_{R^d} f(x) e^{−ix·ξ} dx,  ξ ∈ R^d,
and can be naturally extended to tempered distributions. When the mask a and
dilation matrix M are clear from the context, we write φ instead of φ^M_a for simplicity.
Symmetric multivariate wavelets and renable functions have proved to be very useful
in many applications. For example, 2D renable functions and wavelets have been
widely used in subdivision surfaces and image/mesh compression while 3D renable
functions have been used in subdivision volumes, animation and video processing, etc.
The research was supported by NSERC Canada under Grant G121210654 and by Alberta
Innovation and Science REE under Grant G227120136.
y Department of Mathematical and Statistical Sciences, University of Alberta, Edmonton, Alberta,
Canada T6G 2G1. E-mail: bhan@math.ualberta.ca, URL: http://www.ualberta.ca/bhan
For a compactly supported function φ in R^d, we say that the shifts of φ are stable
if for every ξ ∈ R^d there exists β ∈ Z^d such that φ̂(ξ + 2πβ) ≠ 0. For a function φ ∈ L₂(R^d), its
smoothness exponent is defined to be
ν₂(φ) := sup{ ν ∈ R : ∫_{R^d} |φ̂(ξ)|² (1 + |ξ|²)^ν dξ < ∞ }.    (1.2)
Smoothness is one of the most important properties of a wavelet system. Therefore,
it is of great importance to have algorithms for the numerical computation of the
smoothness exponent of a refinable function. Let a be a mask and M be a dilation
matrix. We denote by Π_{k−1} the set of all polynomials of total degree less than k. By
convention, Π_{−1} is the empty set. We say that a satisfies the sum rules of order k
with respect to the lattice MZ^d if
Σ_{β∈MZ^d} a(α + β) q(α + β) = Σ_{β∈MZ^d} a(β) q(β)   for all α ∈ Z^d and all q ∈ Π_{k−1}.
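The sum-rule condition just stated can be verified directly from the mask coefficients: for every coset of Z^d/MZ^d and every monomial of total degree less than k, the coset sums of a·q must agree. The following sketch performs this check for a mask stored as a dictionary; the 1D mask used in the example is the hat-function mask and serves only as an illustration.

```python
# Direct check of the sum rules of order k: for every coset of Z^d / M Z^d and every
# monomial q of total degree < k, the sum of a(beta)*q(beta) over the coset must be the
# same for all cosets.  Masks are dictionaries mapping integer tuples to coefficients.
import itertools
import numpy as np

def coset_index(beta, M):
    """Label the coset of beta in Z^d / M Z^d via the fractional part of M^{-1} beta."""
    frac = np.linalg.solve(np.asarray(M, float), np.asarray(beta, float)) % 1.0
    return tuple((np.round(frac, 8) % 1.0).tolist())

def satisfies_sum_rules(mask, M, k, tol=1e-10):
    d = len(next(iter(mask)))
    monomials = [mu for mu in itertools.product(range(k), repeat=d) if sum(mu) < k]
    for mu in monomials:
        sums = {}
        for beta, value in mask.items():
            q = np.prod([b ** m for b, m in zip(beta, mu)])
            key = coset_index(beta, M)
            sums[key] = sums.get(key, 0.0) + value * q
        if max(sums.values()) - min(sums.values()) > tol:
            return False
    return True

if __name__ == "__main__":
    # 1D illustration: the mask (1/4, 1/2, 1/4) with M = (2) satisfies sum rules of order 2.
    mask = {(-1,): 0.25, (0,): 0.5, (1,): 0.25}
    print(satisfies_sum_rules(mask, [[2]], k=2))
```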
Define a new sequence b from the mask a by
b(α) := Σ_{β∈Z^d} a(β) conj(a(β − α)),  α ∈ Z^d,  so that b̂ = |â|².    (1.3)
By ℓ₀(Z^d) we denote the linear space of all finitely supported sequences on Z^d. For a
subset K of Z^d, by ℓ(K) we denote the linear space of all finitely supported sequences
on Z^d that vanish outside the set K.
The transition operator T_{b,M} associated with the sequence b and the dilation
matrix M is defined by
(T_{b,M} u)(α) := |det M| Σ_{β∈Z^d} b(Mα − β) u(β),  α ∈ Z^d,  u ∈ ℓ₀(Z^d).    (1.4)
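The following sketch applies the transition operator to a finitely supported sequence, using the formula in (1.4) as reconstructed above; in particular, the |det M| normalization is our reading of the source. Sequences and masks are stored as dictionaries mapping points of Z^d to coefficients.

```python
# Sketch of the transition operator on finitely supported sequences, following (1.4) as
# written above:  (T_{b,M} u)(alpha) = |det M| * sum_beta b(M*alpha - beta) * u(beta).
# (The |det M| factor is an assumption of this sketch.)
import numpy as np

def transition(b, M, u):
    M = np.asarray(M, dtype=int)
    detM = abs(round(np.linalg.det(M)))
    out = {}
    for beta, ub in u.items():
        for gamma, bg in b.items():            # gamma = M*alpha - beta, so M*alpha = gamma + beta
            Malpha = np.add(gamma, beta)
            alpha = np.linalg.solve(M, Malpha)
            alpha_int = np.round(alpha).astype(int)
            if np.allclose(M @ alpha_int, Malpha):         # keep only integer alpha
                key = tuple(alpha_int.tolist())
                out[key] = out.get(key, 0.0) + detM * bg * ub
    return {k: v for k, v in out.items() if abs(v) > 1e-15}

if __name__ == "__main__":
    # 1D check: for b = (1/4, 1/2, 1/4) and M = (2), the delta sequence is mapped to
    # the even-indexed part of b scaled by |det M|.
    b = {(-1,): 0.25, (0,): 0.5, (1,): 0.25}
    print(transition(b, [[2]], {(0,): 1.0}))
```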
Let φ ∈ L₂(R^d) be a refinable function with a finitely supported mask a and a
dilation matrix M such that the shifts of φ are stable and a satisfies the sum rules of
order k but not k + 1. Define the set Ω_{b,M} by
Ω_{b,M} := [ Σ_{j=1}^∞ M^{−j} supp(b) ] ∩ Z^d,    (1.5)
and define the slightly smaller subspace V_{2k−1} of ℓ(Ω_{b,M}) to be
V_{2k−1} := { v ∈ ℓ(Ω_{b,M}) : Σ_{β∈Z^d} v(β) q(β) = 0 for all q ∈ Π_{2k−1} }.    (1.6)
When M is isotropic, it was demonstrated in [4, 5, 6, 10, 21, 23, 24, 26, 27, 33, 35], in
various forms and under various conditions, that
ν₂(φ) = −(d/2) log_{|det M|} ρ(T_{b,M}|_{V_{2k−1}}),    (1.7)
where ρ(T_{b,M}|_{V_{2k−1}}) is the spectral radius of the operator T_{b,M} acting on the finite
dimensional T_{b,M}-invariant subspace V_{2k−1} of ℓ(Ω_{b,M}).
However, from the point of view of numerical computation, there are some difficulties
in obtaining the Sobolev smoothness exponent of a refinable function via (1.7),
that is, by computing the quantity ρ(T_{b,M}|_{V_{2k−1}}),
due to the following considerations:
D1. It is not easy to find a simple basis for the space V_{2k−1} by a numerically
stable procedure to obtain a representation matrix of T_{b,M} under such a basis.
Theoretically speaking, if some elements in a numerically found basis of V_{2k−1}
cannot satisfy the equality in (1.6) exactly, then it will dramatically change
the spectral radius, since in general T_{b,M} has significantly larger eigenvalues
outside the subspace V_{2k−1}.
D2. When the dimension is greater than one, and even when the mask has a relatively
small support, the dimensions of the spaces V_{2k−1} and ℓ(Ω_{b,M}) are in general
very large. For example, for a 3D mask with support [−7, 7]³ and sum rules
of order 4, dim(ℓ(Ω_{b,M})) = 24389 (see Table 1), and dim(V_7) is almost as large. This makes
the numerical computation using (1.7) very expensive or even impossible.
D3. In order to obtain the exact Sobolev smoothness exponent by (1.7), we have
to check the assumption that the shifts of φ^M_a are stable, which is a far from
trivial condition to verify.
Fortunately, the difficulty in D1 was successfully overcome in Jia and Zhang [25],
where they demonstrated that ρ(T_{b,M}|_{V_{2k−1}}) is the largest value in modulus in the
set consisting of all the eigenvalues of T_{b,M}|_{ℓ(Ω_{b,M})}, excluding some known special
eigenvalues. Note that ℓ(Ω_{b,M}) has a simple basis {δ_β : β ∈ Ω_{b,M}}, where δ_β(β) = 1
and δ_β(α) = 0 for α ∈ Z^d \ {β}.
On the other hand, both symmetry and smoothness of a wavelet basis are very
important and much desired properties in many applications. It is one of the purposes
of this paper to try to overcome the difficulty in D2 for a symmetric refinable function.
We shall demonstrate in Algorithm 2.1 in Section 2 that we can compute the Sobolev
smoothness exponent of a symmetric refinable function by using a much smaller space
than the space ℓ(Ω_{b,M}). In Section 3, we shall see that for many refinable
functions it is not necessary to directly verify the stability assumption, since it is
already implicitly implied by the computation. Therefore, the difficulty in D3 does
not exist at all for many refinable functions (almost all interesting known examples
fall into this class).
To give the reader some idea about how symmetry can be of help in computing the
smoothness exponents of symmetric renable functions, we give the following
comparison result in Table 1. See Section 2 for more detail and explanation of Table 1.
Table
The last two rows indicate the matrix sizes in computing the Sobolev smoothness exponents of
symmetric refinable functions using both the method in [25] and the method in Algorithm 2.1 in
Section 2 of this paper. This table demonstrates that Algorithm 2.1 can greatly reduce the size of
the matrix in computing the Sobolev smoothness exponent of a symmetric refinable function.
Mask 4D mask 3D mask 2D mask 2D mask 2D mask
Symmetry full axes full axes hexagonal full axes hexagonal
Dilation matrix 2I 4 2I 3 2I 2
Method in [25] 194481 24389 8911 5601 > 3241
Algorithm 2.1 715 560 756 707 294
Masks and refinable functions with extremely large supports may be rarely used in
real-world applications. For a given mask which is of interest in applications, very often
there are some free parameters in the mask, and one needs to optimize the smoothness
exponent of its refinable function ([9, 12, 15, 17, 28, 30]). The efficient algorithms
proposed in this paper will be of help for such a smoothness optimization problem. On
the other hand, a renable function vector satises the renement equation (1.1) with
a matrix mask of multiplicity r. A matrix mask of multiplicity r is a sequence of r r
matrices on Z d (Masks discussed in this paper correspond to are called scalar
masks). Very recently, as demonstrated in [17], multivariate renable function vectors
with short support and symmetry are of interest in computer aided geometric design
(CAGD) and in numerical solutions to partial dierential equations. Let M be the
quincunx dilation matrix (the fourth dilation matrix in Table 1) and let a be a matrix
mask of multiplicity 3 with support [ masks of order 1
discussed in [17] are examples of such masks which often have many free parameters
and are useful in CAGD). In order to compute the Sobolev smoothness exponent of its
renable function vector with such a small mask, without using symmetry, we found
that one has to deal with a 1161 1161 matrix (also see [23]). As a consequence,
even in low dimensions and for masks with small supports, it is very important to
take into account the symmetry of a renable function (vector) in algorithms for the
numerical computation of its smoothness exponent. Though for simplicity we only
consider scalar masks here, results in this paper can be generalized to matrix masks
and renable function vectors which will be discussed elsewhere.
The structure of the paper is as follows. In Section 2, we shall present step
by step numerically stable and e-cient algorithms for the numerical computation of
the Sobolev smoothness exponent of a symmetric renable function. In addition, an
algorithm for computing the Holder smoothness exponent of a symmetric renable
function will be given in Section 2 provided that the symbol of its mask is nonnegative.
In Section 3, we shall study the relation of the spectral radius of a certain operator
acting on dierent spaces. Such analysis enables us to overcome the di-culty in D3
for a large class of masks. In Section 4, we shall apply the results in Sections 2 and 3
to several examples including renable functions on quincunx lattice and hexagonal
lattice. We shall also present a C 2
3-interpolatory subdivision scheme in Section 4.
Next, we shall generalize the well known univariate interpolatory masks in Deslauriers
and Dubuc [8] and the bivariate interpolatory masks in [15] to any dimension.
Finally, we shall use the results in Sections 2 and 3 to compute Sobolev smoothness
exponents of interpolating renable functions associated with such interpolatory
masks in dimension three.
Programs for computing the Sobolev and Holder smoothness exponents of symmetric
renable functions based on the Algorithms 2.1 and 2.5 in Section 2, which
come without warranty and are not yet optimized with respect to user interface, can
be downloaded at http://www.ualberta.ca/bhan.
2. Computing smoothness exponent using symmetry. In this section, taking
into account the symmetry, we shall present an efficient algorithm for the numerical
computation of the Sobolev smoothness exponent of a symmetric multivariate
refinable function with a general dilation matrix. As the main result of this section,
Algorithms 2.1 and 2.5 are quite simple and can be easily implemented, though their
proofs and some of the notation are relatively technical.
Before proceeding further, let us introduce some notation and necessary background. Let N₀ denote the set of all nonnegative integers. For
d for
For
(R d
For
ed , where e j is the jth coordinate unit
vector in R d . Let - 0 denote the sequence such that
nf0g. For norm is dened to be kuk p := (
Let M be a d × d dilation matrix and a be a mask on Z^d. Define the subdivision
operator S_{a,M} : ℓ₀(Z^d) → ℓ₀(Z^d) by
(S_{a,M} u)(α) := |det M| Σ_{β∈Z^d} a(α − Mβ) u(β),  α ∈ Z^d.
lim
kr S n
do
Let M be a dilation matrix and σ_max be the spectral radius of M (when M is
isotropic, σ_max = |det M|^{1/d}). When a mask a satisfies the sum rules of order k
but not k + 1, we define the following important quantity:
ν_p(a, M) := d/p − log_{σ_max} ρ_k(a, M, p).
The above quantity p (a; M) plays a very important role in characterizing the convergence
of a subdivision scheme in a Sobolev space and in characterizing the L p
smoothness exponent of a renable function.
The L p smoothness of f 2 L p (R d ) is measured by its L p smoothness exponent:
constant C and for large enough positive integer n
When p = 2, the above definition of ν_p(f) agrees with the definition of ν₂ in (1.2). By
generalizing the results in [4, 5, 6, 10, 12, 21, 24, 26, 27, 32, 33, 35] and references
therein, we have
ν_p(φ^M_a) ≥ ν_p(a, M),
and the equality holds when the shifts of φ^M_a are stable and M is an isotropic dilation
matrix. When M is a general dilation matrix and the shifts of φ^M_a are stable, as
demonstrated in [5] for the case p = 2, we only have a weaker estimate in terms of
σ_min := min_{1≤j≤d} |λ_j|, where λ_1, ..., λ_d are
all the eigenvalues of M. As pointed out in [5], the usual Sobolev smoothness defined
in (1.2) and (2.3) is closely related to isotropic dilations, and anisotropic Sobolev spaces
are needed in the case of an anisotropic dilation matrix. See [5] for more detail on
this issue.
So, to compute the Sobolev smoothness exponent of a refinable function, we need
to compute ν₂(a, M) and therefore to compute ρ_k(a, M, 2). It is the purpose of this
section to discuss how to efficiently compute ρ_k(a, M, 2) when a is a symmetric mask.
Let Γ be a finite set of integer matrices whose determinants are ±1. We say
that Γ is a symmetry group with respect to a dilation matrix M (see [13]) if Γ forms
a group under matrix multiplication and MγM^{−1} ∈ Γ for all γ ∈ Γ. Obviously, each
element in a symmetry group induces a linear isomorphism on Z^d.
Let A
d denote the set of all linear transforms on Z d which are given by
and is a permutation on
d is called
the full axes symmetry group. Obviously, A
d is a symmetry group with respect to
the dilation matrix 2I d . It is also easy to check that A
2 is a symmetry group with
respect to the quincunx dilation matrices
and
Another symmetry group with respect to 2I 2 is the following group which is called
the hexagonal symmetry group:
Such a group H can be used to obtain wavelets on the hexagonal planar lattice (that
is, the triangular mesh). For a symmetry group Γ and a sequence u on Z^d, we define
a new sequence Γ(u) as follows:
Γ(u)(α) := (1/#Γ) Σ_{γ∈Γ} u(γα),  α ∈ Z^d,
where #Γ denotes the cardinality of the set Γ. We say that a mask a is invariant
under Γ if Γ(a) = a. Obviously, for any sequence u, Γ(u) is invariant under Γ, since
Γ(Γ(u)) = Γ(u). When Γ is a symmetry group with respect to a dilation matrix M,
the invariance of a under Γ implies that the refinable function φ^M_a is also invariant
under Γ, that is, φ^M_a(γ·) = φ^M_a for all γ ∈ Γ. See Han [13] for a detailed discussion of the
symmetry properties of multivariate refinable functions. We caution the reader that
the condition MγM^{−1} ∈ Γ for all γ ∈ Γ cannot be removed in the definition of a
symmetry group with respect to a dilation matrix M. For example, there is a subgroup of
A^Π_2 that
is not a symmetry group with respect to the quincunx dilation matrices, though it is
a symmetry group with respect to the dilation matrix 2I_2. So even when a mask a is
invariant under such a group Γ, the refinable function φ^M_a with a quincunx dilation
matrix may not be invariant under Γ.
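The symmetrization Γ(u) and the invariance test Γ(a) = a are straightforward to implement. The following sketch does so for a group given as a list of integer matrices, with the two-dimensional full axes group (all sign changes combined with the coordinate swap) as the built-in example.

```python
# Sketch of the symmetrization Gamma(u)(alpha) = (1/#Gamma) * sum_{gamma} u(gamma*alpha)
# and of the invariance test Gamma(a) = a.  Sequences are dictionaries on Z^d.
import itertools
import numpy as np

def full_axes_group(d=2):
    """All signed permutation matrices in dimension d (the full axes symmetry group)."""
    group = []
    for perm in itertools.permutations(range(d)):
        for signs in itertools.product((1, -1), repeat=d):
            g = np.zeros((d, d), dtype=int)
            for i, j in enumerate(perm):
                g[i, j] = signs[i]
            group.append(g)
    return group

def symmetrize(u, group):
    support = {tuple((g @ np.array(a)).tolist()) for a in u for g in group}
    out = {}
    for alpha in support:
        s = sum(u.get(tuple((g @ np.array(alpha)).tolist()), 0.0) for g in group)
        out[alpha] = s / len(group)
    return {k: v for k, v in out.items() if abs(v) > 1e-15}

def is_invariant(u, group, tol=1e-12):
    su = symmetrize(u, group)
    keys = set(u) | set(su)
    return all(abs(u.get(k, 0.0) - su.get(k, 0.0)) <= tol for k in keys)

if __name__ == "__main__":
    G = full_axes_group(2)
    u = {(1, 0): 1.0, (0, 0): 2.0}                  # an asymmetric sequence
    print(is_invariant(u, G), is_invariant(symmetrize(u, G), G))   # expected: False True
```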
Let Z^d_Γ denote a subset of Z^d such that for every α ∈ Z^d, there exists a unique
β ∈ Z^d_Γ satisfying α = γβ for some γ ∈ Γ. In other words, Z^d_Γ is a complete set of
representatives of the distinct cosets of Z^d under the equivalence relation induced by
Γ on Z^d.
Taking into account the symmetry of a mask, we now have the following algorithm
for the numerical computation of the important quantity ν₂(a, M).
Algorithm 2.1. Let M be a d × d isotropic dilation matrix and let Γ be a
symmetry group with respect to the dilation matrix M. Let a be a mask on Z^d such
that Σ_{β∈Z^d} a(β) = 1. Define the sequence b as in (1.3). Suppose that b is invariant
under the symmetry group Γ and a satisfies the sum rules of order k but not k + 1. The
quantity ν₂(a, M), or equivalently ρ_k(a, M, 2), is obtained via the following procedure:
(a) Find a nite subset K of Z d
such that
and
(b) Obtain a (#K ) (#K ) matrix T as follows:
(c) Let σ(T) consist of the absolute values of all the eigenvalues of the square matrix T,
counting multiplicity of its eigenvalues. Then ν₂(a, M) is the smallest number
in the following set
o/
with positive multiplicity
where by default log j det Mj 0 := 1 and
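Rather than the symmetrized matrix T of step (b), the following sketch implements the non-symmetrized baseline of [25] that Algorithm 2.1 refines, specialized to the dyadic dilation M = 2I_d: form the transition matrix |det M|·b(2α − β) over the integer points of the support box of b (which agrees with (1.5) for the box-supported examples used below), discard the special eigenvalues 2^{−j}, j < 2k, one copy per monomial of degree j, and convert the remaining spectral radius via ν₂ = −(d/2) log_{|det M|} ρ. The exclusion rule and this conversion are our reading of [25] and (1.7), not a quotation of step (c).

```python
# Baseline (non-symmetrized) Sobolev-exponent computation for M = 2*I_d, as a sketch.
# The |det M| normalization, the special-eigenvalue exclusion and the final conversion
# are assumptions of this sketch (our reading of [25] and (1.7)).
import itertools
from math import comb, log
import numpy as np

def sobolev_exponent_dyadic(b, d, k):
    m = 2 ** d                                     # |det M| for M = 2*I_d
    lo = [min(p[i] for p in b) for i in range(d)]
    hi = [max(p[i] for p in b) for i in range(d)]
    omega = list(itertools.product(*[range(l, h + 1) for l, h in zip(lo, hi)]))
    idx = {alpha: i for i, alpha in enumerate(omega)}
    T = np.zeros((len(omega), len(omega)))
    for alpha in omega:
        for beta in omega:
            gamma = tuple(2 * a - bb for a, bb in zip(alpha, beta))
            T[idx[alpha], idx[beta]] = m * b.get(gamma, 0.0)
    moduli = sorted(np.abs(np.linalg.eigvals(T)), reverse=True)
    for j in range(2 * k):                         # remove special eigenvalues 2^{-j}
        for _ in range(comb(j + d - 1, d - 1)):    # one copy per monomial of degree j
            hits = [i for i, lam in enumerate(moduli) if abs(lam - 2.0 ** (-j)) < 1e-7]
            if hits:
                moduli.pop(hits[0])
    if not moduli:
        return float("inf")
    return -(d / 2.0) * log(moduli[0]) / log(m)

if __name__ == "__main__":
    # 1D check: b = (1/16)(1, 4, 6, 4, 1) comes from the hat-function mask (1/4, 1/2, 1/4);
    # the hat function has Sobolev smoothness exponent 1.5.
    b = {(-2,): 1 / 16, (-1,): 4 / 16, (0,): 6 / 16, (1,): 4 / 16, (2,): 1 / 16}
    print(sobolev_exponent_dyadic(b, d=1, k=2))
```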
Before we give a proof of Algorithm 2.1, let us make some remarks and discuss
how to compute the set K_Γ and the quantities m_Γ(j) in Algorithm 2.1. Since the
matrix T in Algorithm 2.1 has a simple structure, it is not necessary to store the whole
matrix T in order to compute its eigenvalues, and many techniques from numerical
analysis (such as the subspace iteration method and Arnoldi's method, as discussed in
[34]) can be exploited to further improve the efficiency of computing the eigenvalues
of T. We shall not discuss such an issue here. One satisfactory set K_Γ can be easily
obtained as follows:
Proposition 2.2. Let K⁰_Γ := {0}. For j = 0, 1, 2, ..., recursively compute K^{j+1}_Γ
as the set of representatives in Z^d_Γ of [M^{−1}(K^j_Γ + supp(b))] ∩ Z^d. Then, for j large enough, K_Γ := K^j_Γ satisfies all the
conditions in (a) of Algorithm 2.1.
Proof. Note that K j (
integer r. Therefore, there must exist j 2 N such that K
Consequently,
jg. The set O j can be ordered according to the
lexicographic order. That is,
order if For a d d matrix A and
which is uniquely determined
by
x
It is easy to verify that S(AB; are all the eigen-values
of A, then ; 2 O j are all the eigenvalues of S(A;
since S(A; j) is similar to S(B; when A is similar to B. Moreover,
by comparing the Taylor series of the same
function e x T Ay and e y T A T x .
The quantities m (j); j 2 N 0 can be computed as follows.
Proposition 2.3. Let be a symmetry group. Then
In particular, when I d 2 , then m (2j
Proof. For 2 N d
be the sequence given by q
that
(#)[(q
are linearly independent, we have
When I d 2 , we observe that
Therefore, since
Note that m (j) depends only on the symmetry group and is independent of
the dilation matrix M . When is a subgroup of the full axes symmetry group A
d ,
then m (j) can be easily determined since the matrix S(; j) is very simple for every
d . For example,
For the convenience of the reader, we list the quantities m (j) in Algorithm 2.1
for some well known symmetry groups in Table 2. In Table 2, the symmetry groups
2 and 2
are dened to be
Table
The quantities m (j); j 2 N 0 in Algorithm 2.1 for some known symmetry groups. Note that
N in this table.
26 28
A
A
A
A
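The quantities m_Γ(j) can also be cross-checked by brute force, assuming (as we read the proof of Proposition 2.3) that m_Γ(j) counts the linearly independent Γ-symmetrized monomials of total degree j. The following sketch computes this count for a group given as a list of matrices, using sympy for the polynomial algebra.

```python
# Brute-force count of linearly independent Gamma-symmetrized monomials of degree j,
# offered as a cross-check of the m_Gamma(j) values; this interpretation of m_Gamma(j)
# is an assumption based on the proof of Proposition 2.3.
import itertools
import numpy as np
import sympy as sp

def full_axes_group(d):
    group = []
    for perm in itertools.permutations(range(d)):
        for signs in itertools.product((1, -1), repeat=d):
            g = [[0] * d for _ in range(d)]
            for i, p in enumerate(perm):
                g[i][p] = signs[i]
            group.append(g)
    return group

def m_gamma(group, j, d):
    xs = sp.symbols(f"x0:{d}")
    monos = [mu for mu in itertools.product(range(j + 1), repeat=d) if sum(mu) == j]
    rows = []
    for mu in monos:
        sym = 0
        for g in group:                                   # sum of (gamma x)^mu over the group
            gx = [sum(g[i][l] * xs[l] for l in range(d)) for i in range(d)]
            sym += sp.Mul(*[v ** e for v, e in zip(gx, mu)])
        poly = sp.Poly(sp.expand(sym), *xs)
        rows.append([float(poly.coeff_monomial(sp.Mul(*[v ** e for v, e in zip(xs, nu)])))
                     for nu in monos])
    return int(np.linalg.matrix_rank(np.array(rows))) if rows else 0

if __name__ == "__main__":
    G = full_axes_group(2)
    print([m_gamma(G, j, 2) for j in range(7)])
```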
For a sequence u on Z d , its symbol is given by
2Z d
For denote the dierence operator given by
and := 1
d for
2Z d
To prove Algorithm 2.1, we need the following result.
Theorem 2.4. Let a be a nitely supported mask on Z d and let b be the sequence
dened in (1.3). Let be a symmetry group with respect to a dilation matrix M .
Suppose that b is invariant under . Then (T b;M
and
where T b;M is the transition operator dened in (1.4) and W k is the minimal T b;M -
invariant nite dimensional space which is generated by ( -), 2 N d
Proof. Since is a symmetry group with respect to the dilation matrix M and b
is invariant under , for
2Z d
2Z d
2Z d
Therefore, (T b;M
Note that b j. By the Parseval
identity, we have
kr S n
Z
r S n
(2) d
Z
S n
b;M -() d
From the denition of the transition operator, it is easy to verify that
For a sequence u such that b u() > 0 for all 2 R d , we observe that
[11]). From the fact that \
S n
follows that
is the minimal T b;M -invariant subspace generated by
we have
lim
do
lim
do
which completes the proof.
Proof of Algorithm 2.1: Let K Kg. Then it is easy to
check that both '(K) and ('(K)) are invariant under T b;M (see [14, Lemma 2.3]).
Since a satises the sum rules of order k, then the sequence b, which is dened in
(1.3), satises the sum rules of order 2k and V 2k 1 is invariant under T b;M (see [20,
Theorem 5.2]), where
2Z d
1g. Let W k denote the linear space in
Theorem 2.4. Observe that W k U 2k 1 V 2k 1 . By Theorem 2.4 and (1.7), we
have k (a; M;
Since b satises the sum rules of order 2k and b is invariant under , we have
Therefore, we have spec(T b;M j
denotes the set of all the
eigenvalues of T counting multiplicity and the linear space ('(K))=U 2k 1 is a quotient
group under addition. Note that U
the quotient group ('(K))=U 2k 1 is isomorphic to U 1 =U 0
By [25, Theorem 3.2] or by the proof of Theorem 3.1 in Section 3, we know that for any
all the eigenvalues of T b;M
we used the assumption that M is isotropic. Since U j 1 =U j is a subgroup of
we deduce that all the eigenvalues of T b;M
fact, by duality, we can prove that for any
without assuming that M is isotropic, where [ (q)](x) := q(M 1 x); q 2 2k 1 .) By
duality,
Note that f(- is a basis of ('(K)) and the matrix T is the representation
matrix of the linear operator T b;M acting on ('(K)) under the basis
g. This completes the proof.
From the above proof, without the assumption that M is isotropic, we observe
that k (a; M; 2) is the largest number in the set (T )nfjj
where (T ) is dened in Algorithm 2.1 and [ (q)](x) := q(M 1
Since is a symmetry group with respect to the dilation matrix M , it is easy to see
that
g. In passing, we mention that the calculation of the Sobolev smoothness for
a bivariate mask which is invariant under A
2 with the dilation matrix 2I 2 was also
discussed by Zhang in [36]. When a mask has a nonnegative symbol, then we can
also compute k (a; M;1) in a similar way (see [14, Theorem 4.1]). For complete-
ness, we present the following algorithm whose proof is almost identical to that of
Algorithm 2.1.
Algorithm 2.5. Let M be a d × d isotropic dilation matrix and let Γ be a
symmetry group with respect to the dilation matrix M. Let a be a mask on Z^d such
that Σ_{β∈Z^d} a(β) = 1. Suppose that a is invariant under the symmetry group Γ, the
symbol of a is nonnegative (i.e., â(ξ) ≥ 0 for all ξ ∈ R^d), and a satisfies the sum
rules of order k but not k + 1. The quantity ν_∞(a, M), or equivalently ρ_k(a, M, ∞),
is obtained via the following procedure:
(a) Find a nite subset K of Z d
such that
ag \ Z d
and
(b) Obtain a (#K ) (#K ) matrix T as follows:
(c) Let (T ) consist of the absolute values of all the eigenvalues of the square matrix
counting multiplicity of its eigenvalues. Then 1 (a; M) is the smallest
number in the following set
with positive multiplicity
Moreover, without the assumption that the symbol of the mask a is nonnegative,
(a; M) is equal to or less than the quantity obtained in (c).
Cohen and Daubechies in [4] discussed how to estimate the smoothness exponent
of a refinable function using the Fredholm determinant theory. Matlab routines
for computing smoothness exponents using the method in [25] were developed and
described in [28]. When a mask has a nonnegative symbol, matlab routines for estimating
the Holder smoothness exponent were developed and described in [1], where
symmetry is not taken into account and eigenvectors have to be explicitly computed
and checked as to whether or not they belong to the subspace V_{k−1}.
3. Relations among ρ_k(a, M, p), k ∈ N₀. In this section, we shall study the
relations among ρ_k(a, M, p), k ∈ N₀. Using such relations, we shall be able to overcome
the difficulty in D3 in Section 1 in order to check the stability condition for certain
refinable functions.
The main results in this section are as follows.
Theorem 3.1. Let M be a dilation matrix. Let a be a nitely supported mask
on Z d such that
a satises the sum rules of order k with respect
to the lattice MZ d . Let min := min 16j6d
are all the eigenvalues of M . Then
min
and
for all j 2 N 0 and 1 6 p 6 q 6 1. Consequently,
We say that a mask a is an interpolatory mask with respect to the lattice MZ^d
if a(0) = 1/|det M| and a(Mβ) = 0 for all β ∈ Z^d \ {0}. Let a and b be two finitely supported masks on Z^d.
Define a sequence c from a and b. If c is an interpolatory mask with
respect to the lattice MZ d , then b is called a dual mask of a with respect to the lattice
MZ d and vice versa.
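The interpolatory condition stated above is easy to test numerically. The following sketch checks that a(0) = 1/|det M| and that a vanishes at all other points of MZ^d, for a mask stored as a dictionary; the 1D mask in the example serves only as an illustration.

```python
# Check of the interpolatory condition stated above: a(0) = 1/|det M| and a(M*beta) = 0
# for every nonzero integer beta.  The example mask is a placeholder illustration.
import numpy as np

def is_interpolatory(mask, M, tol=1e-12):
    M = np.asarray(M, dtype=int)
    detM = abs(round(np.linalg.det(M)))
    for beta, value in mask.items():
        y = np.linalg.solve(M, np.array(beta, dtype=float))
        yi = np.round(y)
        if np.allclose(y, yi):                   # beta lies on the sublattice M Z^d
            target = 1.0 / detM if not yi.any() else 0.0
            if abs(value - target) > tol:
                return False
    # points of M Z^d missing from the dictionary are implicitly zero; a(0) must be present
    return abs(mask.get(tuple([0] * M.shape[0]), 0.0) - 1.0 / detM) <= tol

if __name__ == "__main__":
    # 1D illustration: the mask (1/4, 1/2, 1/4) with M = (2) is interpolatory.
    print(is_interpolatory({(-1,): 0.25, (0,): 0.5, (1,): 0.25}, [[2]]))
```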
Let be a continuous function on R d . We say that is an interpolating function
nf0g. For discussion on interpolating renable
functions and interpolatory masks, the reader is referred to [7, 8, 9, 15, 16, 30, 31]
and references therein. For a compactly supported function on R d , we say that
the shifts of are linearly independent if for every 2 C d , b
Clearly, if the shifts of are linearly independent, then the shifts of are
stable. When is a compactly supported interpolating function, then the shifts of
are linearly independent since
2Z d
be the renable function
with a nitely supported mask and the dilation matrix 2I d . A method was proposed
in Hogan and Jia [19] to check whether the shifts of are linearly independent or
not. However, there are similar di-culties as mentioned in D1 and D2 in Section 1
when applying such a method in [19]. In fact, the procedure in [19] is not numerically
stable and exact arithmetic is needed. Also see [29] on stability.
An iteration scheme can be employed to solve the renement equation (1.1). Start
with some initial function 0 2 L p (R d ) such that b
nf0g. We employ the iteration scheme Q n
is the
linear operator on L p (R d by
2Z d
(R d
This iteration scheme is called a subdivision scheme or a cascade algorithm ([2, 18]).
When the sequence Q n
converges in the space L p (R d ), then the limit function
must be M
a and we say that the subdivision scheme associated with mask a and
dilation matrix M converges in the L p norm. It was proved in [14] that the subdivision
scheme associated with the mask a and dilation matrix M converges in the L p norm if
and only if 1 (a; M; p) < j det M j 1=p (By Theorem 3.1, we see that this is equivalent to
references therein on convergence of subdivision
schemes.
Let be a renable function with a nitely supported mask a and a dilation
matrix M . It is known that is an interpolating renable function if and only
if the mask a is an interpolatory mask with respect to the lattice MZ d and the
subdivision scheme associated with mask a and dilation M converges in the L1 norm
(equivalently, 1 (a; M;1) < 1, see [14]). However, in general, it is di-cult to directly
check the condition 1 (a; M;1) < 1. On the other hand, in order to check that
is an interpolating renable function with a nitely supported interpolatory mask
a and a d d dilation matrix M , it was known in the literature (for example, see
[1, 29, 31, 34]) that one needs to check the following two alternative conditions: 1)
is a continuous function (Often, one computes the Sobolev smoothness exponent
of to establish that 2 () > d=2 and consequently is a continuous function).
- is the unique eigenvector of the transition operator T a;M j
'(
corresponding
to a simple eigenvalue 1. In the following, we show that if a is an interpolatory
mask and 2 (a; M) > d=2, then 2) is automatically satised. In other words, for
an interpolatory mask a with respect to the lattice MZ d , we show that 2 (a; M) >
d=2 implies 1 (a; M) > 0. Consequently, 1 (a; M;1) < 1 and the corresponding
subdivision scheme converges in the L1 norm and its associated renable function is
indeed an interpolating renable function.
Corollary 3.2. Let a be a nitely supported mask on Z d and M be a dilation
matrix. Suppose that b is a dual mask of a with respect to the lattice MZ d and
Then the shifts of M
a are linearly independent and consequently stable. If M is
isotropic and (3.2) holds, then p (a; M) > 0 implies that M
a 2 L p (R d ) and
a
In particular, if 2 (a; M) > d=2 (or more generally p (a; M) > d=p for
and a is an interpolatory mask with respect to the lattice MZ d ,
then the subdivision scheme associated with mask a and dilation M converges in the
L1 norm and consequently M
a is a continuous interpolating renable function.
Proof. Let j. Dene a sequence c by
is an interpolatory mask with respect to the lattice MZ d . By [12, Theorem 5.2] and
using Young's inequality, when 1=p
Note that
and
for some proper integers j and k. Therefore, j+k (c; M;1) 6 p (a;M) q (b;M)
It follows from Theorem 3.1 that 1 (c; M;1) < 1 and therefore, the subdivision
scheme associated with mask c and dilation M converges in the L1 norm. Conse-
quently, M
c is an interpolating renable function and so its shifts are linearly in-
dependent. Note that b
a
(). Therefore, the shifts of M
a must be
linearly independent and consequently stable.
Note that - is a dual mask of an interpolatory mask and for any 1 6 q 6 1,
since . The second part of Corollary 3.2 follows directly from the
rst part. The second part can also be proved directly. Since p (a; M) > d=p, by
Theorem 3.1, we have
for some proper integer k. By Theorem 3.1, we have 1 (a; M;1) < 1. So the
subdivision scheme associated with the mask a and the dilation matrix M converges
in the L1 norm and therefore, we conclude that M
a is a continuous interpolating
renable function.
Let k be a nonnegative integer. We mention that if j (a; M; p) <
for some positive integer j, then one can prove that the mask a must satisfy the sum
rules of order at least k respect to the lattice MZ d .
In order to prove Theorem 3.1, we need to introduce the concept of ' p -norm joint
spectral radius. Let A be a nite collection of linear operators on a nite-dimensional
normed vector space V . We denote kAk the operator norm of A which is dened to
be g. For a positive integer n, A n denotes the
Cartesian power of A:
and for 1 6 p 6 1, we dene
1=p
For any 1 6 p 6 1, the ' p -norm joint spectral radius (see [6, 15, 24] and references
therein on ' p -norm joint spectral radius) of A is dened to be
Let E be a complete set of representatives of the distinct cosets of the quotient group
Z d =MZ d . To relate the quantities k (a; M; p) to the ' p -norm joint spectral radius, we
introduce the linear operator E) on ' 0 (Z d ) as follows:
2Z d
For
Proof of Theorem 3.1: Let m := j det M j. Let K
and
2Z d
Since a satises the sum rules of order k, by [20, Theorem 5.2],
By [14, Theorem 2.5], we have
Note that
g.
For any ; 2 N d
0 such that jj 6 jj < k, we have
2Z d
(M)
2Z d
2Z d
(M)
Note that
(M)
Since a satises the sum rules of order k, we have
2Z d
(M)
2Z d
2Z d
a(M)
(M)
Thus, for ; 2 N d
0 such that jj 6 jj < k, we have
2Z d
a(M)
(M)
2Z d
[r -]()
It is evident that
2Z d
[r -]()
r -;
Therefore,
2Z d
On the other hand, for any jj 6 jj,
2Z d
(M)
2Z d
2Z d
[r -]()
is dened in (2.10). Therefore, we have
0 g is a basis for W j , we have
Note that the spectral radius of S(M
min for all j 2 N. Therefore, we
deduce that
holds.
By the denition of the ' p -norm joint spectral radius, using the Holder inequality,
we have
(see [14]) for all 1 6 p 6 q 6 1. This completes the proof.
4. Some examples of symmetric renable functions. In this section, we
shall give several examples to demonstrate the advantages of the algorithms and
results in Sections 2 and 3 on computing smoothness exponents of symmetric renable
functions.
Example 4.1. Let M = 2I_2. The interpolatory mask a for the butterfly scheme
in [9] is supported on [−3, 3]².
Then a satisfies the sum rules of order 4 and a is invariant under the hexagonal
symmetry group H. By Proposition 2.2, we have #K_Γ = 11, and by computing
the eigenvalues of the 11 × 11 matrix T in Algorithm 2.1, we obtain ν₂(a, 2I₂).
Let φ be the refinable function with mask a and the dilation matrix 2I₂. By Algorithm
2.1, ν₂(a, 2I₂) ≈ 2.44077 > 1. Therefore, by Corollary 3.2, φ is an interpolating
refinable function and ν₂(φ) ≈ 2.44077. Note that the matrix size using
the method in [25]
is much larger than the matrix size 11
used in Algorithm 2.1.
Example 4.2. Let family of bivariate interpolatory masks RS r (r 2
N) was given in Riemenschneider and Shen [31] (also see Jia [22]) such that RS r is
supported on [1 the sum rules of order 2r with respect to
the lattice 2Z 2 and RS r is invariant under the hexagonal symmetry group H . Using
the fact that the symbol of RS r has the factor [(1
by taking out some of such factors, Jia and Zhang [25, Theorem 4.1] was able to
compute the Sobolev smoothness exponents of r for
the renable function with the mask RS r and the dilation matrix 2I 2 . Note that
the mask RS 16 is supported on [ In fact, in order to compute 2 ( 16 ), the
method in [25, Theorem 4.1] has to compute the eigenvalues of two matrices of size
(without factorization, the matrix size used in [25] is 11719). Without using any
factorization, for any mask a which is supported on [ 31; 31] 2 and is invariant under
H , by Algorithm 2.1, we have #K 992. So, to compute 2 ( 16 ), we only need
to compute the eigenvalues of a matrix of size 992.
Example 4.3. Let
be the quincunx dilation matrix. The interpolatory
mask a is supported on [ 3; 3] 2 and is given
Note that a satises the sum rules of order 4 with respect to the quincunx lattice
MZ 2 and a is invariant under the full axes symmetry group A
2 with respect to the
dilation matrix M . This example was discussed in [25] and belongs to a family of
quincunx interpolatory masks in [16]. Let be the renable function with the mask
a and dilation matrix M . By Algorithm 2.1, we have #K A 2
2:44792 > 1. Therefore, 2 2:44792. Note that the matrix to
compute 2 () using method in [25] has size 481 (see [25]) which is much larger than
the size 46 when using Algorithm 2.1. Note that the symbol of a is nonnegative. By
Algorithm 2.5, we have #K A 2
Therefore, by
Corollary 3.2, 1 using method in [25], the
matrix size is 129 (see [25]) which is much larger than the size 13 in Algorithm 2.5.
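The fact used in this example, that the full axes symmetry group A^Π_2 is a symmetry group with respect to a quincunx dilation matrix, can be verified directly. The sketch below checks |det γ| = 1 and MγM^{−1} ∈ A^Π_2 for all eight group elements; for concreteness it uses the quincunx matrix [[1, 1], [1, −1]], which may differ from the particular quincunx matrix of this example.

```python
# Concrete check that the 2D full axes symmetry group (signed permutation matrices) is a
# symmetry group with respect to a quincunx dilation matrix M: |det(gamma)| = 1 and
# M*gamma*M^{-1} stays in the group for every gamma.
import itertools
import numpy as np

def full_axes_group_2d():
    group = []
    for perm in itertools.permutations(range(2)):
        for signs in itertools.product((1, -1), repeat=2):
            g = np.zeros((2, 2), dtype=int)
            for i, j in enumerate(perm):
                g[i, j] = signs[i]
            group.append(g)
    return group

def is_symmetry_group(group, M):
    M = np.asarray(M, dtype=float)
    Minv = np.linalg.inv(M)
    keys = {tuple(g.flatten()) for g in group}
    for g in group:
        if abs(round(np.linalg.det(g))) != 1:
            return False
        conj = M @ g @ Minv
        conj_int = np.round(conj).astype(int)
        if not np.allclose(conj, conj_int) or tuple(conj_int.flatten()) not in keys:
            return False
    return True

if __name__ == "__main__":
    M = [[1, 1], [1, -1]]                       # a quincunx dilation matrix (assumed here)
    print(is_symmetry_group(full_axes_group_2d(), M))
```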
Example 4.4. Let
. A family of quincunx interpolatory masks
r (r 2 N) was proposed in [16] such that g r is supported on [ the
sum rules of order 2r with respect to MZ 2 , is an interpolatory mask with respect to
MZ 2 and is invariant under the full axes symmetry group A
. Note that the mask
in Example 4.3 corresponds to the mask g 2 in this family. Since the symbols of g r
are nonnegative, the L1 smoothness exponents 1 ( r ) were computed in [16] for
r is the renable function with mask g r and the dilation matrix
M . Using Algorithm 2.5, we are able to compute 1 in Table 3.
Table
The L1 (Holder) smoothness exponent of the interpolating renable function r whose mask
is gr .
5.71514 6.21534 6.70431 7.18321
7.65242 8.11171 8.56039 8.99752
A coset by coset (CBC) algorithm was proposed in [12, 16] to construct quincunx
biorthogonal wavelets. Some examples of dual masks of g r , denoted by (g r ) s
k , were
constructed in [16, Theorem 5.2] and some of their Sobolev smoothness exponents were
given in Table 4 of [16]. Note that the dual mask (g r ) s
k is supported on [ k
satises the sum rules of order 2k, has nonnegative symbol and is invariant under the
full axes symmetry group A 2 . However, in the paper [16] we are unable to complete
the computation in Table 4 in [16] due to the di-culty mentioned in D2 in Section 1.
In fact, to compute 2 (a; M) for a mask supported on [ k; k] 2 , the
set
b;M dened
in (1.5) is given by
For example, in order to compute 2 ((g 4
set
b;M consists of 16321 points
which is beyond our ability to compute the eigenvalues of a 16321 16321 matrix.
We now can complete the computation using Algorithm 2.1 in Section 2. Note that
the quincunx dilation M here is denoted by Q in Table 4 of [16]. By computation,
and the rest of the computation is given in Table 4.
Table
Computing
by Algorithm 2.1. The result here completes Table 4 of [16].
3.01166 2.92850 2.90251 2.91546
In passing, we mention that if a nitely supported mask a on Z 2 is invariant under
the full axes symmetry group A
proved in Han [13] that all the renable
functions with the mask a and any of the quincunx dilation matrices
are the same function which is also invariant under the full axes symmetry group A
.
Also see [3, 4] on quincunx wavelets. For any primal (matrix) mask and any dilation
matrix, the CBC algorithm proposed in [12] can be used to construct dual (matrix)
masks with any preassigned order of sum rules.
Example 4.5. Let
be the dilation matrix in a
3-subdivision
scheme ([28]). The interpolatory mask a is supported on [ 4; 4] 2 and is given
Note that a satises the sum rules of order 6 with respect to the lattice MZ 2 and
a is invariant under the hexagonal symmetry group H with respect to the dilation
matrix M . By Algorithm 2.1, we have #K
be the renable function with the mask a and the dilation matrix M . Therefore,
by Corollary 3.2, is a C 2 interpolating renable function and 2
3:28036. By estimate, the matrix
b;M using the method in [25] is greater
than 361 which is much larger than the size 38 when using Algorithm 2.1. Note
that the symbol of a is nonnegative. By Algorithm 2.5, we have #K
Therefore, by Corollary 3.2, 1
Using method in [25], the matrix
a;M is greater than 85 which is much larger
than the size 11 in Algorithm 2.5. Since 2 C 2 , this example gives us a C 2
pinterpolatory subdivision scheme.
In the rest of this section, let us present some examples in dimension three. By
generalizing the proof of [15, Theorem 4.3], we have the following result.
Theorem 4.6. Let d be the dilation matrix. For each positive integer r,
there exists a unique dyadic interpolatory mask g d
r in R d with the following properties:
(a) g d
r is supported on the set f2
(b) g d
r is symmetric about all the coordinate axes;
(c) g d
r satises the sum rules of order 2r with respect to the lattice 2Z d .
By the uniqueness, we see that each g d
r in Theorem 4.6 is invariant under the full
axes symmetry group A
d . By the uniqueness of g d
r in Theorem 4.6 again, we see that
were the masks given in [8] and g 2
were the masks proposed in [15].
Moreover, the masks g d
r can be obtained via a recursive formula without solving any
equations.
In the following, let us give some examples of the above interpolatory masks in
dimension three. Let
A 3
Clearly, if a is a mask invariant under the group A
3 , then it is totally determined by
all the coe-cients a(); 2 Z 3
A 3
Example 4.7. The coe-cients of the interpolatory mask g 3
2 on the set Z 3
A 3
are
given by
other 2 Z 3
A 3
Then
2 satises the sum rules of order 4 and there are only 81 nonzero coe-cients
in the mask g 3
. Let be the renable function with the mask g 3
2 and the dilation
matrix 2I 3 . By Algorithm 2.1, we have #K A 3
Therefore, by Corollary 3.2, is an interpolating renable function and 2 ()
2:44077. Note
Algorithm 2.1 can
greatly reduce the size of the matrix to compute 2 (g 3
Example 4.8. The coe-cients of the interpolatory mask g 3
3 on the set Z 3
A 3
are
given by
A 3
Then
3 satises the sum rules of order 6 and it has 171 nonzero coe-cients. Let be
the renable function with the mask g 3
3 and the dilation matrix 2I 3 . By Algorithm 2.1,
we have #K A 3
1:5. Therefore, by Corollary 3.2,
is an interpolating renable function and 2 () 3:17513. Note
and #K A 3
101. Hence, Algorithm 2.1 can greatly reduce the size of the matrix to
compute 2 (g 3
Let r be the renable function with the mask g 3
r (r 2 N) and the dilation matrix
2I 3 . The Sobolev smoothness exponents of r are presented in Table 5.
By [15, Theorem 3.3] and [12, Theorem 5.1], we see that g 3
the optimal Sobolev smoothness and optimal order of sum rules with respect to the
support of their masks. In general, the Algorithms 2.1 and 2.5 roughly reduce the
size of the matrix to be 1=(#) of the number of points
in
b;M in (1.5). Note that
# A
Algorithms 2.1 and 2.5 are very useful in computing
the smoothness exponents of symmetric multivariate renable functions.
Table
The Sobolev smoothness exponent of the renable function r whose mask is
2:44077 3:17513 3:79313 4:34408 4:86202
5:36283 5:85293 6:33522 6:81143 7:28260
Acknowledgments
. The author is indebted to Rong-Qing Jia for discussion on
computing smoothness of multivariate renable functions. The author thanks IMA
at University of Minnesota for their hospitality during his visit at IMA in 2001. The
author also thanks the referees for their helpful comments to improve the presentation
of this paper and for suggesting the references [1, 4, 7, 18].
--R
The IGPM Villemoes Machine
Nonseparable bidimensional wavelet bases
A new technique to estimate the regularity of re
Symmetric iterative interpolation processes
A butter y subdivision scheme for surface interpolation with tension control
Sobolev characterization of solutions of dilation equations
Spectral radius formulas for subdivision operators
Analysis and construction of optimal multivariate biorthogonal wavelets with compact support
Symmetry property and construction of wavelets with a general dilation matrix
Multivariate re
Optimal interpolatory subdivision schemes in multidimensional spaces
Quincunx fundamental re
Multivariate re
Dependence relations among the shifts of a multivariate re
Approximation properties of multivariate wavelets
Characterization of smoothness of multivariate re
Interpolatory subdivision schemes induced by box splines
Spectral analysis of the transition operators and its applications to smoothness analysis of wavelets
Smoothness of multiple re
Spectral properties of the transition operator associated to a multivariate re
On the regularity of matrix re
Multivariate matrix re
On the analysis of p 3-subdivision schemes
Stability and orthonormality of multivariate re
Multidimensional interpolatory subdivision schemes
Simple regularity criteria for subdivision schemes
The Sobolev regularity of re
Computing the Sobolev regularity of re
Wavelet analysis of re
Properties of re
--TR
--CTR
Peter Oswald, Designing composite triangular subdivision schemes, Computer Aided Geometric Design, v.22 n.7, p.659-679, October 2005
Avi Zulti , Adi Levin , David Levin , Mina Teicher, C2 subdivision over triangulations with one extraordinary point, Computer Aided Geometric Design, v.23 n.2, p.157-178, February 2006
Bin Han, Compactly supported tight wavelet frames and orthonormal wavelets of exponential decay with a general dilation matrix, Journal of Computational and Applied Mathematics, v.155 n.1, p.43-67, 01 June
Bin Dong , Zuowei Shen, Construction of biorthogonal wavelets from pseudo-splines, Journal of Approximation Theory, v.138 n.2, p.211-231, February 2006
Bin Han, Vector cascade algorithms and refinable function vectors in Sobolev spaces, Journal of Approximation Theory, v.124 n.1, p.44-88, September | multivariate refinable functions;interpolating functions;eigenvalues of matrices;quincunx dilation matrix;smoothness exponent;regularity;symmetry |
606717 | Sparse approximate inverse smoothers for geometric and algebraic multigrid. | Sparse approximate inverses are considered as smoothers for geometric and algebraic multigrid methods. They are based on the SPAI-Algorithm [MJ. Grote, T. Huckle, SIAM J. Sci. Comput. which constructs a sparse approximate inverse M of a matrix A, by minimizing I - MA in the Frobenius norm. This leads to a new hierarchy of inherently parallel smoothers: SPAI-0, SPAI-1, and SPAI(). For geometric multigrid, the performance of SPAI-1 is usually comparable to that of Gauss-Seidel smoothing. In more difficult situations, where neither Gauss-Seidel nor the simpler SPAI-0 or SPAI-1 smoothers are adequate, further reduction of automatically improves the SPAI() smoother where needed. When combined with an algebraic coarsening strategy [J.W. Ruge, K. Stben, in: S.F. McCormick (Ed.), Multigrid Methods, SIAM, 1987, pp. 73-130] the resulting method yields a robust, parallel, and algebraic multigrid iteration, easily adjusted even by the non-expert. Numerical examples demonstrate the usefulness of SPAI smoothers, both in a sequential and a parallel environment.Essential advantages of the SPAI-smoothers are: improved robustness, inherent parallelism, ordering independence, and possible local adaptivity. | Introduction
Multigrid methods rely on the subtle interplay of smoothing and coarse grid
correction. Only their careful combination yields an e-cient multigrid solver
for large linear systems, resulting from the discretization of partial dierential
equations [7,14,15,27]. Standard smoothers for multigrid usually consist of a
few steps of a basic iterative method. Here we shall consider smoothers that are
based on sparse approximate inverses. Hence, starting from the linear system
we let M denote a sparse approximation of A 1 . The corresponding basic
iterative method is
Since the approximate inverse M is known explicitly, each iteration step requires
only one additional M v matrix-vector multiply; therefore, it is easy
to parallelize and cheap to evaluate, because M is sparse.
Recently, various algorithms have been proposed, all of which attempt to compute
directly a sparse approximate inverse of A [5,9,17]. For a comparative
study of various approximate inverse preconditioners we refer to Benzi and
Tuma [6]. Approximate inverse techniques are also gaining in importance as
smoothers for multigrid methods. First introduced by Benson and Frederickson
[3,4], they were shown to be eective on various di-cult elliptic problems
on unstructured grids by Tang and Wan [25]. Advantages of sparse approximate
inverse smoothers over classical smoothers, such as damped Jacobi,
Gauss-Seidel, or ILU, are inherent parallelism, possible local adaptivity, and
improved robustness.
Here we shall consider sparse approximate inverse (SPAI) smoothers based on
the SPAI-Algorithm by Grote and Huckle [13]. The SPAI-Algorithm computes
an approximate inverse M explicitly by minimizing I MA in the Frobenius
norm. Both the computation of M and its application as a smoother are inherently
parallel. Since an eective sparsity pattern of M is in general unknown a
priori, the SPAI-Algorithm attempts to determine the most promising entries
dynamically. This strategy has proved eective in generating preconditioners
for many di-cult and ill-conditioned problems [1,13,24]. Moreover, it provides
the means for adjusting the smoother locally and automatically, if necessary.
Nevertheless, by choosing an a priori sparsity pattern for M , the computational
cost can be greatly reduced. Possible choices include powers of A or
A > A, as suggested by Huckle [16] or Chow [10]. Hence we shall investigate the
following hierarchy of sparse approximate inverse smoothers: SPAI-0, SPAI-1,
and SPAI("). For SPAI-0 and SPAI-1 the sparsity pattern of M is xed: M is
diagonal for SPAI-0, whereas for SPAI-1 the sparsity pattern of M is that of
A. For SPAI(") the sparsity pattern of M is determined automatically by the
SPAI-Algorithm ([13]); the parameter " controls the accuracy and the amount
of ll-in of M .
As structured geometric grids are di-cult to use with complex geometries,
application code designers often turn to very large unstructured grids. Yet the
lack of a natural grid hierarchy prevents the use of standard geometric multi-
grid. In this context, algebraic multigrid (AMG) is often seen as the most
promising method for solving large-scale problems. The original AMG algo-
rithm, rst introduced in the 1980's by Ruge and Stuben [22], uses the (simple)
Gauss-Seidel iteration as a smoother, but determines the coarse "grid" space
in a sophisticated way to improve robustness of the method. But if the iteration
fails to converge, there is no automatic way to improve on the smoother.
As an alternative, we investigate the usefulness of smoothers based on SParse
Approximate Inverses (SPAI). Not only inherently parallel, their performance
can also easily be adjusted, even by a non-expert. Thus we aim for a more
general and inherently parallel algebraic multigrid method.
In Section 2 we brie
y review the SPAI-Algorithm and show how sparse
approximate inverses are used as smoothers within a multigrid iteration. A
heuristic Green's function interpretation underpins their eectiveness as smoo-
thers. Rigorous results on the smoothing property of approximate inverses were
proved in [8]; they are summarized in Section 2.4. Next, we present in Section
3 a detailed description of the algebraic coarsening strategy used ([22]),
together with key components of the algorithm for e-cient implementation.
Finally, in Section 4, we compare the performance of SPAI smoothing to that
of Gauss-Seidel smoothing on various test problems, either within a geometric
or an algebraic multigrid setting.
smoothing
2.1 Classical smoothers
Consider a sequence of nested grids, T On the nest mesh,
we wish to solve the n n linear system
by a multigrid method { for further details on multigrid see Hackbusch [14]
and ([15], Section 10) or Wesseling [27]. A multigrid iteration results from the
recursive application of a two-grid method. A two-grid method consists of 1
pre-smoothing steps on level ', a coarse grid correction on level ' 1, and 2
post-smoothing steps again on level '. This leads to the iteration
for the error e (m)
and r are prolongation and restriction
operators, respectively, between T ' and T ' 1 , while S ' denotes the smoother. If
nested nite element spaces with Galerkin discretization are used, the Galerkin
product representation holds:
Otherwise, one can still use (4) to dene the coarse-grid problem for given r
and p.
We shall always use x (0)
our initial guess. The multigrid (V-cycle) iteration
proceeds until the relative residual drops below a prescribed tolerance,
kb A ' x (m)
kbk < tol: (5)
Then we calculate the average rate of convergence
1=m
The expected multigrid convergence behavior is achieved if the number of
multigrid iterations, m, necessary to achieve a xed tolerance, is essentially
independent of the number of grid levels '.
Typically, the smoother has the form
where W ' approximates A ' and is cheap to invert | of course, W 1
' is never
computed explicitly. Let with D the diagonal, L the lower
triangular part, and U the upper triangular part of A. Then, damped Jacobi
smoothing corresponds to
whereas Gauss-Seidel smoothing corresponds to
In (8) the parameter ! is chosen to maximize the reduction of the high frequency
components of the error. The optimal value, ! , is problem dependent
and usually unknown a priori. Although Gauss-Seidel typically leads to faster
convergence, it is more di-cult to implement in parallel, because each smoothing
step in (9) requires the solution of a lower triangular system. If neither
Jacobi nor Gauss-Seidel smoothing lead to satisfactory convergence, one can
either resort to more sophisticated (matrix dependent) prolongation and restriction
operators ([29]) or to more robust smoothers, based on incomplete LU
(ILU) factorizations of A ' ([28]). Unfortunately, ILU-smoothing is inherently
sequential and therefore di-cult to implement in parallel. It is also di-cult
to improve locally, say near the boundary or a singularity, without aecting
the ll-in everywhere in the LU factors.
2.2 SPAI-smoothers
As an alternative to inverting W ' in (7), we propose to compute explicitly a
sparse approximate inverse M of A, and to use it for smoothing { we drop the
grid index ' to simplify the notation. This yields the SPAI-smoother,
where M is computed by minimizing kI MAk in the Frobenius norm for
a given sparsity pattern. In contrast to W 1
' in (7), the matrix M in (10) is
computed explicitly. Therefore, the application of S SPAI requires only matrix-vector
multiplications, which are easy to parallelize; it does not require the
solution of any upper or lower triangular systems. Moreover, the Frobenius
norm naturally leads to inherent parallelism because the rows m >
of M can
be computed independently of one another. Indeed, since
the solution of (11) separates into the n independent least-squares problems
for the m >
Here e k denotes the k-th unit vector. Because A and M are sparse, these
least-squares problems are small.
Since an eective sparsity pattern of M is unknown a priori, the original SPAI-
Algorithm in [13] begins with a diagonal pattern. It then augments progressively
the sparsity pattern of M to further reduce each residual r
Each additional reduction of the 2-norm of r k involves two steps. First, the
algorithm identies a set of potential new candidates, based on the sparsity of
A and the current (sparse) residual r k . Second, the algorithm selects the most
protable entries, usually less than ve entries, by computing for each candidate
a cheap upper bound on the reduction in kr k k 2 . Once the new entries
have been selected and added to m k , the (small) least-squares problem (12) is
solved again with the augmented set of indices. The algorithm proceeds until
each row m >
of M satises
where " is a tolerance set by the user; it controls the ll-in and the quality of
the preconditioner M . A larger value of " leads to a sparser and less expensive
approximate inverse, but also to a less eective smoother with a higher number
of multigrid cycles. A lower value of " usually reduces the number of cycles,
but the cost of computing may become prohibitive; moreover,
a denser M results in a higher cost per smoothing step. The optimal value of
" minimizes the total time; it depends on the problem, the discretization, the
desired accuracy, and the computer architecture. Further details about the
original SPAI-Algorithm can be found in [13].
In addition to SPAI("), we shall also consider the following two greatly sim-
plied SPAI-smoothers with xed sparsity patterns: SPAI-0, where M is di-
agonal, and SPAI-1, where the sparsity pattern of M is that of A. Both solve
the least-squares problem (12), and thus minimize kI MAk in the Frobenius
norm for the sparsity pattern chosen a priori. This eliminates the search
for an eective sparsity pattern of M , and thus greatly reduces the cost of
computing the approximate inverse. The SPAI-1 smoother coincides with the
smoother of Tang and Wan [25].
For diagonal and can be calculated directly. It is
simply given by
a kk
where a >
k is the k-th row of A { note that M is always well-dened if A is
nonsingular. In contrast to damped Jacobi, SPAI-0 is parameter-free.
To summarize, we shall consider the following hierarchy of SPAI-smoothers,
which all minimize kI MAk in the Frobenius norm for a certain sparsity
pattern of M .
diagonal and given by (14).
SPAI-1: The sparsity pattern of M is that of A.
SPAI("): The sparsity pattern of M is determined automatically via the SPAI-
Algorithm [13]. Each row m >
of M satises (13) for a given ".
Any of these approximate inverses leads to the smoothing step
We have found that in many situations, SPAI-0 and SPAI-1 yield ample
smoothing. However, the added
exibility in providing an automatic criterion
for improving the smoother via the SPAI-Algorithm remains very useful.
Indeed, both SPAI-0 and SPAI-1 can be used as initial guess for SPAI("), and
thus be locally improved upon where needed by reducing ". For matrices with
inherent (small) block structure, typical from the discretization of systems of
partial dierential equations, the Block-SPAI-Algorithm ([2]) greatly reduces
the cost of computing M .
2.3 Green's function interpretation
Why do approximate inverses yield eective smoothers for problems which
come from partial dierential equations? As the mesh parameter h tends to
zero, the solution of the linear system,
A h u
tends to the solution of the underlying dierential equation,
with appropriate boundary conditions. Here the matrix A h corresponds to a
discrete version of the dierential operator L. Let y h
k denote the k-th row of
solves the linear system
with e h
k the k-th unit vector. As h ! 0, y h
k tends to the Green's function
Here L denotes the adjoint dierential operator and -(x x k ) the \delta-
centered about x k . To exhibit the correspondence between y h
k and
we recall that L is formally dened by the identity
for all u; v in appropriate function spaces. Equation (20) is the continuous
counterpart to the relation
From (17), (19) and (20) we conclude that
Similarly, the combination of (16), (18), and (21) leads to the discrete counterpart
of (22),
Comparison of (22) and (23) shows that y h
k corresponds to to G(x; x k ) as
The k-th row, (m h
of the approximate inverse, M h , solves (12), or equiva-
lently
min
Hence m h
k approximates the k-th column of A > , that is y h
k in (18), in the
(discrete) 2-norm for a xed sparsity pattern of m h
k . The nonzero entries of
usually lie in a neighborhood of x k : they correspond to mesh points x j close
to x k . Therefore, after an appropriate scaling in inverse powers of h, we see
that m h
approximates G(x; x k ) locally in the (continuous) L
1268 3 For
(partial) dierential operators, G(x; x k ) typically is singular at x k and decays
smoothly, but not necessarily rapidly, with increasing distance jx x k j. Clearly
the slower the decay, the denser M h must be to approximate well A 1
deciency of sparse approximate inverse preconditioners was also pointed out
by Tang [24]. At the same time, however, it suggests that sparse approximate
inverses, obtained by the minimization of kI MAk in the Frobenius norm,
naturally yield smoothers for multigrid. Indeed to be eective, a preconditioner
must approximate uniformly over the entire spectrum of L. In contrast,
an eective smoother only needs to capture the high-frequency behavior of
. Yet this high-frequency behavior corresponds to the singular, local
behavior of G(x; x k ), precisely that which is approximated by m h
k .
To illustrate this fact, we consider the standard ve-point stencil of the discrete
Laplacian on a 15 15 grid. In Figure 1 on the following page we compare
A 1 with the Gauss-Seidel approximate inverse, (L and two explicit
approximate inverses, SPAI-1 and SPAI(0:2). We recall that Gauss-Seidel, a
poor preconditioner for this problem, remains an excellent smoother, because
it captures the high-frequency behavior of A 1 . Similarly, SPAI-1 and SPAI(")
yield local operators with, as we shall see, good smoothing property. Despite
the resemblance between the Gauss-Seidel and the SPAI approximate inverses,
we note the one-sidedness of the former, in contrast to the symmetry of the
latter.
Gauss-Seidel,
SPAI (0.2)
Fig. 1. Row 112 of the following operators: A 1 (top left), the Gauss-Seidel inverse
computed with SPAI-1 (bottom left), and M computed
with SPAI(0:2) (bottom right).
2.4 Theoretical Properties
In contrast to the heuristic interpretation of the previous section, we shall
now summarize some rigorous results ([8]) on the smoothing property of the
simplest smoother: SPAI-0.
Multigrid convergence theory rests on two fundamental conditions: the smoothing
property ([15], Denition 10.6.3):
any function with lim
and the approximation property ([15], Section 10.6.3). In general, the smoothing
and approximation properties together imply convergence of the two-grid
method and of the multigrid W-cycle, with a contraction number independent
of the level number '. Moreover, for symmetric positive denite prob-
lems, both conditions also imply multigrid V-cycle convergence independent
of ' { see Hackbusch ([15], Sect. 10.6) for details. The approximation property
is independent of the smoother, S ' ; it depends only on the discretization
the prolongation operator p, and the restriction operator r. In
[15] the approximation property is shown to hold for a large class of discrete
elliptic boundary value problems. For symmetric positive denite problems
the smoothing property usually holds for classical smoothers, such as damped
Jacobi, (symmetric) Gauss-Seidel, and incomplete Cholesky.
In [8] the smoothing property (25) was shown to hold for the SPAI-0 smoother
under reasonable assumptions on the matrix A. More precisely, for A symmetric
and positive denite, the SPAI-0 smoother satises the smoothing property,
either if A is weakly diagonally dominant, or if A has at most seven nonzero
o-diagonal entries per row.
Furthermore, the two diagonal smoothers SPAI-0 and damped Jacobi, with
optimal relaxation parameter ! , lead to identical smoothers for the discrete
Laplacian with periodic boundary conditions in any space dimension
[8]. In this special situation, the parameter-free SPAI-0 smoother automatically
yields a scaling of diag(A), which minimizes the smoothing factor; in
that sense it is optimal. In more general situations, however, both smoothers
dier because of boundary conditions, even with constant coe-cients on an
equispaced mesh. Comparison of these two diagonal smoothers via numerical
experiments showed that SPAI-0 is an attractive alternative to damped Jacobi
[8]. Indeed, SPAI-0 is parameter-free and typically leads to slightly better
convergence rates than damped Jacobi.
3 Algebraic Multigrid
Multigrid (MG) methods are sensitive to the subtle interplay between smoothing
and coarse-grid correction. When a standard geometric multigrid method
is applied to di-cult problems, say with strong anisotropy, this interplay is
disturbed because the error is no longer smoothed equally well in all direc-
tions. Although manual intervention and selection of coarse grids can sometimes
overcome this di-culty, it remains cumbersome to apply in practice to
unstructured grids and complex geometry. In contrast, an algebraic multigrid
approach compensates for the decient smoothing by a sophisticated
choice of the coarser grids and the interpolation operators, which is only based
on the matrix A ' . Many AMG variants exist, which dier in the coarsening
strategy or the interpolation used { an introduction to various AMG methods
can be found in ([26]).
Following Ruge and Stuben [22], we now describe the algebraic coarsening
strategy and interpolation operators, which we shall combine with the SPAI
smoothers from Section 2.2 and use for our numerical experiments.
3.1 Coarsening strategy1020301020300.20.6
x
y
Anisotropic stencil:h 26 6 6 6 6 4
Fig. 2. The error after ve Gauss-Seidel smoothing steps for the problem described
in Section 4.2 on a with 0:01. The smooth error component is aligned with the
anisotropy, which can be read from the stencils.
The fundamental principle underlying the coarsening strategy is based on the
observation that interpolation should only be performed along smooth error
components. For symmetric M-matrices, the error is smoothed well along large
(negative) o-diagonal entries in the matrix A ([23]). Therefore, at each grid
point p, we may identify among neighboring points q good candidates for
interpolation, by comparing the magnitude of the corresponding entries a pq .
This leads to the following relations between the point p and its neighbors q
in the connectivity graph of the matrix A:
Condition Notation Interpretation
a pq max apr <0 ja pr j p ( q p (strongly) depends on q
and a pq 6= 0 or
q (strongly) in
uences p
a pq < weakly depends on q
and a pq 6= 0 or
q weakly in
uences p
The parameter controls the threshold, which discriminates between strong
and weak connections; typically 0:25. With this denition, all positive
o-diagonal entries are necessarily weak. The relations p ( q and p q are
symmetric only if A is symmetric.
Next, we dene the set of dependencies of a point p as
the set of in
uences of a point p as
I
and the set of weak dependencies of a point p as
On every level, the coarsening strategy must divide P , the set of all points
on that level, into two disjoint sets: C, the \coarse points", also present on
the coarser level, and F , the \ne points", which are absent on the coarser
level. The choice of C and F induces the C/F{splitting of
Coarse grid correction heavily depends on accurate interpolation. Accurate
interpolation is guaranteed if every F point is surrounded by su-ciently many
strongly dependent C points. A typical conguration is shown in Figure 3.
q3
strong
dependency
dependency
C point
F point
Strong dependencies are indicated with
solid arrows, while weak dependencies are
represented by dashed arrows. C points
are represented by solid circles, whereas F
points are represented by dashed circles.
Hence all the strong dependencies of point
are C points; therefore q 2 and
q 4 are good candidates for interpolating p.
Fig. 3. Ideal coarsening conguration for interpolation
The coarsening algorithm attempts to determine a C/F{splitting, which maximizes
the F {to{C dependency for all F points (Coarsening Goal 1), with
a minimal set C (Coarsening Goal 2). It is important to strike a good balance
between these two con
icting goals, as the overall computational eort
depends not only on the convergence rate, but also on the amount of work
per multigrid cycle. Clearly the optimal C=F -splitting minimizes total execution
time. However, since the convergence rate is generally unpredictable, the
coarsening algorithm merely attempts to meet Coarsening Goals 1 and 2 in
a heuristic fashion. In doing so, its complexity must not exceed O(n log n) to
retain the overall complexity of the multigrid iteration.
3.2 Coarse grid selection: a greedy heuristic
To split P into C and F , every step of a greedy heuristic moves the most
promising candidate from P into C, while forcing neighboring points into F .
This procedure is repeated until all points are distributed. If every step requires
at most O(log n) operations, and the complexity of all other computations does
not exceed O(n log n), the desired overall complexity of O(n log n) is reached.
The greedy heuristic described in [23] is based on the following two principles,
which correspond to Coarsening Goals 1 and 2:
(1) The most promising candidate, p, for becoming a C point, is that with
the highest number of in
uences jI p j. Then all in
uences of p are added
to F . This choice supports Coarsening Goal 1 because all F points will
eventually have at least one strong C dependency.
(2) To keep the number of C points low (Coarsening Goal 2), the algorithm
should prefer C points near recently chosen F points; hence, these in
u-
ences are given a higher priority.
Starting with all points as \undecided points", that is the algorithm
proceeds by selecting from U the most promising C point with highest priority.
The priority of any point p is dened by
Equation re
ects the preference in choosing the next C point for a point
which in
uences many previously selected F points. The key advantage
of (26) is the possibility to update the priority locally and in O(1) time, which
results in the desired overall complexity of O(n log n). We now summarize the
Coarse Grid Selection algorithm:
Algorithm 1 Coarse Grid Selection
procedure
for all
set Priority(p) := jI p j
end for all
U :=
while U
(1) select p 2 U with maximal P riority(p)
for all q 2 D p (all dependencies of p)
end for all
for all q 2 I p (all in
uences of p)
for all r 2 D q (all dependencies of q)
end for all
end for all
while
procedure
To implement steps (1), (2), and (3) e-ciently in O(1) time, we maintain a
list Q of all points sorted by priority, together with the list I of point indices
of Q. Moreover, a list of boundaries B of all priorities occurring in Q enables
the immediate update of the sorted list Q. Figure 4 shows a possible segment
of the lists Q, B, and I.
5 611 15index of Q
position in Q
priority(p)
position in Q3141312
I
index of B
Fig. 4. The three lists Q, I, and B enable the e-cient implementation of the coarsening
algorithm.
During the set{up phase of the coarsening algorithm, B is computed and
sorted by priority. Step (1) simply chooses the last element of Q. Steps (2)
and (3) are implemented by exchanging a point, whose priority must be either
incremented or decremented, with its left{ or rightmost neighbor in Q with
that same priority; then its priority is adjusted, while Q remains sorted. Both
B and I are updated accordingly. Following a suggestion of K. Stuben, we
shall skip the second pass of the original coarsening algorithm in [22], which
enforces even stronger F {to{C dependency, because of the high computational
cost involved.
3.3 Interpolation
The grid function u h , dened on the ner grid, is interpolated from the grid
function uH , dened on the coarser grid C, as follows:
Hence, values at C points are simply transfered from the coarser level, whereas
values at F points are interpolated from C neighbors. The four dierent dependencies
possible between any F point and its neighbors are shown in Figure 5.
For \standard interpolation" (see [23]), the choice of the weights, w pq , for
interpolating p, is based on the equation
a pp e
a pq e
Indeed, if Ae ' 0, the smoothing eect is minimal, and the error e is declared
\algebraically smooth" { see [23] for details. Clearly, we cannot interpolate p
from surrounding F points, whereas weakly dependent C points are not included
either, because of the rough nature of the error in that direction. Thus
connections (q 1 and q 2 in Figure 5) are always ignored in the interpolation
and the corresponding interpolation weights set to zero (w p;q 1
cancellation of the weak dependencies, the neglected entries of the weakly
dependent neighbors are added to the diagonal. Hence equation (28) becomes
~ a pp e
a pq e
a
Strong C dependencies, such as q 3 in Figure 5, cause no di-culty because the
value of uH is available at that coarse grid location. Division of (29) by ~ a pp
yields the weight
a pq
~ a pp
However, strong F dependencies, such as q 4 in Figure 5, are not available for
interpolation and must rst be interpolated from C points, on which they
strongly depend. To do so, we replace a qq by ~
a qq for every q 2 D q \ F , with
~ a
For every point r 2 D q this yields the weight
a pq
~ a pp
a qr
~ a qq
If a a point q is both a direct and an indirect neighbor of p, so that both (30)
and (32) apply, the two weights are calculated separately and then added to
each other.
q4
strong
dependency
dependency
C point
F point
q3
indirect interpolation
direct interpolation
ignored ignored
Fig. 5. The four dierent dependencies possible between p and its neighbors.
The algorithm described above determines the coarse grid levels only on the
basis of A, and not on that of the approximate inverse M . In fact, the information
contained in M can be used to determine coarse grid levels and
interpolation weights, as suggested by Meurant [19,20].
3.4 Measuring computational costs and memory requirements
When comparing the performance of various smoothers, we cannot limit ourselves
to comparing the number of multigrid iterations, but also need to estimate
the additional amount of work due to the smoother. To do so, we
calculate the total density ratio, M , of nonzero entries in M to those in A on
all grid levels, 1 i ', where smoothing is applied:
The additional amount of work due to the smoother is proportional to M .
While rapidly reducing the number of points from one level to the next, the
matrices A i must also remain reasonably sparse, as measured by
For instance, as Galerkin coarse grid approximation enlarges the standard ve{
point stencil on the nest grid to nine{point stencils on subsequent levels, the
resulting value of A for geometric multigrid is about 1.6. If semi{coarsening
together with one-dimensional interpolation is used, A increases up to two.
All the results presented in the following section were computed with a MATLAB
implementation. We shall evaluate the e-ciency of the various approaches
by comparing their respective values for M and A .
4 Numerical results
To illustrate the usefulness and versatility of SPAI smoothing, we shall now
consider various standard test problems. In all cases, the dierential equation
considered is discretized on the nest level with standard nite dierences
on an equispaced mesh. For geometric multigrid, we use a regularly rened
sequence of equispaced grids, with a single unknown remaining at the center
of the domain. For algebraic multigrid, the coarser levels are obtained by the
Coarse Grid Selection Algorithm described in Section 3.2. With
the denition of strong dependency from section 3.1, the algorithm proceeds
until the number of grid points drops below twenty. The coarse grid operators
are obtained via the Galerkin product formula (4), with . For geometric
multigrid, p correspond to standard linear interpolation, whereas for AMG p
is obtained as described in Section 3.3. We use a multigrid V-cycle iteration,
with two pre- and two post-smoothing steps 2). The multigrid
iteration proceeds until the relative residual satises the prescribed tolerance
in (5), with
4.1 Rotating
ow problem
We rst consider the convection{diusion problem,
in (0; 1)(0; 1), with u(x; on the boundary. Here u represents any scalar
quantity advected by the rotating
ow eld. For convection dominated
<< h, the linear systems cease to be symmetric and positive denite, so
that these problems lie outside of classical multigrid theory. We use centered
second-order nite dierences for the diusion, but discretize the convection
with rst-order upwinding to ensure numerical stability.
Table
Geometric MG convergence rates for the rotating
ow problem on a 128128 grid,
for dierent values of . The symbol y indicates that the multigrid iteration diverges.
Smoother
Gauss-Seidel SPAI-0 SPAI-1 SPAI(0.3) SPAI(0.2)
Table
convergence rates q obtained with standard MG. All
smoothers yield acceptable convergence rates in the diusion dominated case,
with . For however, the multigrid iteration diverges with
Gauss{Seidel or SPAI-0 smoothing. In contrast, the SPAI-1 smoother still
yields a convergent method. The use of SPAI(0.3) smoothing accelerates convergence
even further, while M increases up to 1.4 only.
As we reduce the diusion even further down to only the SPAI(0.2)
smoother yields a convergent iteration. Although the resulting value of M
is quite high, the construction of SPAI(0.2) remains parallel and fully auto-
matic. We remark that symmetric Gauss-Seidel smoothing ([27]) leads to a
convergent multigrid iteration, yet this approach does not generalize easily to
unstructured grids.
Parallel results
Since the SPAI-1 smoother is inherently parallel, it is straightforward to apply
within a parallel version of geometric MG. The data is distributed among
processors via domain decomposition, which is well{known to work e-ciently
for a number of multigrid applications ([18]). The platform we shall use is
the ETH{Beowulf cluster, which consists of 192 dual CPU Pentium III (500
processors. All nodes are connected via a
100 MBit/s and 1 GB/s switched network, while communication is done with
MPI.
We now apply our parallel multigrid implementation to the rotating Flow
Problem (35) with On 128 nodes, the total execution{time is 156
seconds on the 40964096 grid. The time includes the set{up for the construction
of the SPAI-1 smoother, which requires the solution of about sixteen
million small (259) and independent least{squares problems. As shown
in table 1, the use of a coarsest level, which consists only of a single mesh
point, leads to a divergent multigrid iteration for increasing
the resolution of the coarsest level up to 3232 mesh points, one obtains a
convergent multigrid iteration.
Table
Scalability of parallel MG using SPAI-1. The problem size and the number of processors
is increased by a factor of 4, while total time increases by 30% only.
Gridsize 512512 4 10231023
Number of processors
Total time (sec) 20 26
To obtain good speed{up with a parallel MG code, it is important to perform
coarse grid agglomeration (see [21]) because of the loss of e-ciency on coarser
grid levels. Although we have not implemented such an agglomeration strategy,
our computations scale reasonably well as long as the problem size matches
the size of the parallel architecture { see Table 2.
Rotating
ow problem: algebraic
coarsening is clearly aligned
with the
ow direction. Larger dots
correspond to C points on coarser
levels.
(b) Locally anisotropic diusion:
semi-coarsening is apparent in the
center of the domain.
Fig. 6. Examples of algebraic coarsening for the two model problems considered.
AMG results
None of these approaches, however, is entirely satisfactory for vanishing vis-
cosity. To overcome the lack of robustness for small , we now apply the
algebraic coarsening strategy described in Section 3. Figure 6(a) displays the
coarse levels selected by the algorithm. In Table 3 both SPAI-0 and SPAI-1
yield convergence without any particular tuning. 0:5, the SPAI(")
Table
AMG convergence results for the rotating
ow problem for varying on a 128128
grid.
A 2:8 3:4 4:2
Smoother q M q M q M
Gauss{Seidel 0.14 | 0.38 | 0.81 |
Table
AMG convergence rates for the rotating
ow problem with
Smoother
Gauss{Seidel SPAI-0 SPAI-1 SPAI(0.5)
Gridsize A q M q M q M q M
128128 4.2 0.81 | 0.36 (0.1) 0.21 (1.0) 0.27 (0.4)
4.3 0.96 | 0.38 (0.1) 0.24 (1.0) 0.34 (0.4)
smoother yields a compromise between the SPAI-0 and SPAI-1 smoothers:
both the storage requirement and the convergence rate lie between those obtained
with the xed sparsity patterns of SPAI-0 and SPAI-1. Lower values
of " reduce the convergence rate even further. The poor convergence rates
obtained with Gauss-Seidel could probably be improved, either by smoothing
C points before F points ([23]) or by using symmetric Gauss{Seidel.
The results in Table 4 demonstrate the robustness of SPAI smoothing. In-
deed, as ! 0 all convergence rates obtained with the combined SPAI-AMG
approach remain bounded.
4.2 Locally anisotropic diusion
In this section we consider the locally anisotropic problem,
with u(x; on the boundary. The diusion coe-cient (x;
except inside the square [1=4; 3=4] [1=4; 3=4], where (x; y) is
constant. In Table 5 we observe that geometric multigrid has di-culties for
small values . Because of the unidirectional smoothing of the error, aligned
with the strong anisotropy, standard (isotropic) interpolation fails.
Table
Locally anisotropic diusion: geometric MG convergence rates q for varying on a
128 128 grid.
Smoother q M q M q M
Gauss{Seidel
AMG results
AMG overcomes these di-culties by performing automatic semi{coarsening
and operator dependent interpolation only in the direction of strong couplings,
which correspond to smooth error components. It is well{known (e.g. [23])
that AMG solves such problems with little di-culty. The results in Table 6
verify this fact for acceptable densities A and M . Both
densities could be lowered even further by dropping the smallest entries in the
interpolation operators ([23]); we do not consider such truncated grid transfer
operators here.
Table
AMG convergence results for locally anisotropic diusion on a 128 128 grid. Note
that q, A , and M remain bounded as ! 0.
A 2.84 2.94 2.94
Smoother q M q M q M
Gauss{Seidel 0.14 | 0.18 | 0.18 |
The convergence rates obtained with Gauss-Seidel and SPAI-1 are comparable
and both below 0.2, while SPAI-0 results in slightly slower convergence.
Overall the SPAI-1 smoother is the most e-cient smoother for this problem.
Although further reduction of " results in even faster convergence, the approximate
inverses become too dense and thus too expensive. Again, the results in
Tables
6 and 7 demonstrate robust multigrid behavior, either as h ! 0, or as
Table
AMG convergence rates q for locally anisotropic diusion, with . Note that
q; A , and M remain bounded as
Gridsize 6464 128128 256256
A 2.89 2.94 2.95
Gauss-Seidel 0.12 | 0.18 | 0.22 |
Concluding remarks
Our results show that sparse approximate inverses, based on the minimization
of the Frobenius norm, provide an attractive alternative to classical Jacobi
or Gauss-Seidel smoothing. The simpler smoothers, SPAI-0 and SPAI-1, often
provide ample smoothing, comparable to damped Jacobi or Gauss-Seidel. Nev-
ertheless, situations such as convection dominated rotating
ow, where SPAI-1
leads to a convergent multigrid iteration, unlike Gauss-Seidel, demonstrate the
improved robustness. Our implementation of geometric multigrid combined
with SPAI-1 smoothing enables the fast solution of very large convection-
diusion problems on massively parallel architectures. By incorporating the
SPAI smoothers into AMG ([22]), we obtain a
exible, parallel, and algebraic
multigrid method, easily adjusted to the underlying problem and computer
architecture, even by the non-expert.
It is very interesting to incorporate information available from the approximate
inverses into the coarsening strategy and grid transfer operators, as suggested
in [19,20]. The expected benet would include improved robustness and local
adaptivity for these multigrid components as well. The authors are currently
pursuing these issues and will report on them elsewhere in the near future.
Acknowledgment
We thank Klaus Stuben for useful comments and suggestions.
--R
An MPI implementation of the SPAI preconditioner on the T3E
A block version of the SPAI preconditioner
Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems
Frequency domain behavior of a set of parallel multigrid smoothing operators
A sparse approximate inverse preconditioner for the conjugate gradient method
A comparative study of sparse approximate inverse preconditioners
Approximate inverse preconditioners via sparse-sparse iterations
A priori sparsity patterns for parallel sparse approximate inverse preconditioners
Robustness and scalability of algebraic multigrid
Parallel preconditioning with sparse approximate inverses
Iterative Solution of Large Sparse Systems of Equations
Approximate sparsity patterns for the inverse of a matrix and preconditioning
Factorized sparse approximate inverse preconditionings: I.
Numerical experiments with algebraic multilevel preconditioners
A multilevel AINV preconditioner
Parallel adaptive multigrid
Toward an e
Sparse approximate inverse smoother for multi- grid
Introduction to algebraic multigrid
An Introduction to Multigrid Methods
On the robustness of ILU-smoothing
Matrix prolongations and restrictions in a black-box multigrid solver
--TR
On the robustness of Ilu smoothing
Matrix-dependent prolongations and restrictions in a blackbox multigrid solver
Multigrid methods on parallel computersMYAMPERSANDmdash;a survey of recent developments
Factorized sparse approximate inverse preconditionings I
A Sparse Approximate Inverse Preconditioner for the Conjugate Gradient Method
Parallel Preconditioning with Sparse Approximate Inverses
Approximate Inverse Preconditioners via Sparse-Sparse Iterations
Approximate sparsity patterns for the inverse of a matrix and preconditioning
A comparative study of sparse approximate inverse preconditioners
Toward an Effective Sparse Approximate Inverse Preconditioner
A Priori Sparsity Patterns for Parallel Sparse Approximate Inverse Preconditioners
Robustness and Scalability of Algebraic Multigrid
Sparse Approximate Inverse Smoother for Multigrid
Robust Parallel Smoothing for Multigrid Via Sparse Approximate Inverses
Coarse-Grid Selection for Parallel Algebraic Multigrid
--CTR
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | robust smoothing;algebraic multigrid;approximate inverses;parallel multigrid |
606722 | A parallel solver for large-scale Markov chains. | We consider the parallel computation of the stationary probability distribution vector of ergodic Markov chains with large state spaces by preconditioned Krylov subspace methods. The parallel preconditioner is obtained as an explicit approximation, in factorized form, of a particular generalized inverse of the generator matrix of the Markov process. Graph partitioning is used to parallelize the whole algorithm, resulting in a two-level method.Conditions that guarantee the existence of the preconditioner are given, and the results of a parallel implementation are presented. Our results indicate that this method is well suited for problems in which the generator matrix can be explicitly formed and stored. | Introduction
Discrete Markov chains with large state spaces arise in many applications,
including for instance reliability modeling, queueing network analysis, large
scale economic modeling and computer system performance evaluation. The
stationary probability distribution vector of an ergodic Markov process with
n \Theta n transition probability matrix P is the unique 1 \Theta n vector - which
satisfies
Letting the computation of the stationary vector
reduces to finding a nontrivial solution to the homogeneous linear system
0. The ergodicity assumption means that P (and therefore A) is irreducible.
Perron-Frobenius theory [9] guarantees that A has rank n \Gamma 1, and that the
(one-dimensional) null space N (A) of A is spanned by a vector x with positive
entries. Upon normalization in the ' 1 -norm, this is the stationary distribution
vector of the Markov process.
The coefficient matrix A is a singular M-matrix, called the generator of the
Markov process. 3 The matrix A is nonsymmetric, although it is sometimes
structurally symmetric. See [26] for a good introduction to Markov chains and
their numerical solution.
Due to the very large number n of states typical of many real-world applica-
tions, there has been increasing interest in recent years in developing parallel
algorithms for Markov chain computations; see [2], [5], [10], [17], [19], [24].
Most of the attention so far has focused on (linear) stationary iterative meth-
ods, including block versions of Jacobi and Gauss-Seidel [10], [19], [24], and on
iterative aggregation/disaggregation schemes specifically tailored
to stochastic matrices [10], [17]. In contrast, little work has been done with
parallel preconditioned Krylov subspace methods. Partial exceptions are [5],
where a symmetrizable stationary iteration (Cimmino's method) was accelerated
using conjugate gradients on a Cray T3D, and [19], where an out-of-core,
parallel implementation of Conjugate Gradient Squared (with no precondi-
tioning) was used to solve very large Markov models with up to 50 million
states. The suitability of preconditioned Krylov subspace methods for solving
Markov models has been demonstrated, e.g., in [25], although no discussion
of parallelization issues is given there.
In this paper we investigate the use of a parallel preconditioned iterative
method for large, sparse linear systems in the context of Markov chain com-
Strictly speaking, the generator matrix is We work with A
instead of Q to conform to the familiar notation of numerical linear algebra.
putations. The preconditioning strategy is a two-level method based on sparse
approximate inverses, first introduced in [3]. However, due to the singularity
of the generator matrix A, the applicability of approximate inverse techniques
in this context is not obvious. That this is indeed possible is a consequence of
the fact that A is a (singular) M-matrix.
The paper is organized as follows. In section 2 we discuss the problem of pre-conditioning
singular equations in general, and we establish a link between
some standard preconditioners and generalized inverses. Sections 3-5 are devoted
to AINV preconditioning for Markov chain problems, including a discussion
of the parallel implementation and a theoretical analysis of the existence
of the preconditioner. Numerical tests are reported in section 6, and some
conclusions are presented in section 7.
Preconditioning Markov chain problems
In the Markov chain context, preconditioning typically amounts to finding
an easily invertible nonsingular matrix M (the preconditioner) which is a
good approximation to A; a Krylov subspace method is then used to solve
preconditioning). Notice that even if A itself is singular, the preconditioner
must be nonsingular so as not to change the solution set, i.e., the null space
(A) of A. Preconditioners can be generated by means of splittings
N , such as those used in stationary iterative methods including Jacobi, Gauss-
Seidel, SOR and block versions of these schemes; see [26]. Also in this class
are the popular incomplete LU (ILU) factorization preconditioners. ILU-type
methods have been successfully applied to Markov chain problems by Saad
[25] in a sequential environment. The existence of incomplete factorizations
for nonsingular M-matrices was already proved in [20]; an investigation of the
existence of ILU factorizations for singular M-matrices can be found in [11].
Incomplete factorization methods work quite well on a wide range of problems,
but they are not easily implemented on parallel computers. For this and other
reasons, much effort has been put in recent years into developing alternative
preconditioning strategies that have natural parallelism while being comparable
to ILU methods in terms of robustness and convergence rates. This work
has resulted in several new techniques known as sparse approximate inverse
preconditioners; see [7] for a recent survey and extensive references. Sparse
approximate inverse preconditioners are based on directly approximating the
inverse of the coefficient matrix A with a sparse matrix G - A \Gamma1 . The application
of the preconditioner only requires matrix-vector products, which
are easily parallelized. Until now, these techniques have been applied almost
exclusively to nonsingular systems of equations b. The only exception
seems to be [13], where the SPAI preconditioner [16] was used in connection
with Fast Wavelet Transform techniques on singular systems stemming from
discretizations of the Neumann problem for Poisson's equation.
The application of approximate inverse techniques in the singular case raises
several interesting theoretical and practical questions. Because the inverse of
A does not exist, it is not clear what matrix G is an approximation of. It
should presumably be some generalized inverse of A, but which one? Note
that this question can be asked of M \Gamma1 for any preconditioner M - A. In
[26], page 143, it is stated that M \Gamma1 should be an approximation of the group
generalized inverse A ] , and that an ILU factorization A -
U implicitly yields
such an approximation: ( -
. As we will see, this interpretation is not
entirely correct and is somewhat misleading. The group inverse (see [12]) is
only one of many possible generalized inverses. It is well known [21] that the
group inverse plays an important role in the modern theory of finite Markov
chains. However, it is seldom used as a computational tool, in part because
its computation requires knowledge of the stationary distribution vector -.
As it turns out, different preconditioners result (implicitly or explicitly) in
approximations M \Gamma1 to different generalized inverses of A, which are typically
not the group inverse A ] . Let us consider ILU preconditioning first. If A is a n\Theta
irreducible, singular M-matrix, then A has the LDU factorization
where L and U are unit lower and upper triangular matrices (respectively) and
D is a diagonal matrix of rank
(see [26]). Notice that L and U are nonsingular M-matrices; in particular, L \Gamma1
and U \Gamma1 have nonnegative entries. Define the matrix
It can be easily verified that A \Gamma satisfies the first two of Penrose's four conditions
[12]:
The first identity states that A \Gamma is an inner inverse of A and the second
that A \Gamma is an outer inverse of A. A generalized inverse satisfying these two
conditions is called a (1; 2)-inverse of A or an inner-outer inverse. Another
term that is found in the literature is reflexive inverse; see [12]. Because A \Gamma
does not necessarily satisfy the third and fourth Penrose conditions, it is not
the Moore-Penrose pseudoinverse A y of A in general. Because A y is obviously
a (1; 2)-inverse, this kind of generalized inverse is non-unique. Indeed, there
are infinitely many such (1; 2)-inverses in general. Each pair R, N of subspaces
of IR n that are complements of the null space and range of A (respectively)
uniquely determines a (1; 2)-inverse GN;R of A with null space N (GN;R
and range R(GN;R see [12]. In the case of A \Gamma it is readily verified that
denotes the i-th unit
basis vector in IR n . It is easy to see that R is complementary to N (A) and N
is complementary to R(A). The pseudoinverse A y corresponds to
The (1; 2)-inverse A \Gamma is also different from the group inverse A ] , in general.
This can be seen from the fact that in general AA \Gamma 6= A \Gamma A, whereas the group
inverse always satisfies AA A. Also notice that for a singular irreducible
M-matrix A the (1; 2)-inverse A nonnegative ma-
trix, which is not true in general for either the group or the Moore-Penrose
Let now A -
U be an incomplete LDU factorization of A, with -
unit lower triangular, -
U - U unit upper triangular and -
diagonal matrix with positive entries on the main diagonal. Then clearly
Hence, an ILU factorization of A yields an implicit approximation to A \Gamma rather
than to A ] . This can also be seen from the fact that ( -
always nonnegative, which is not true of A ] .
It is straightfoward to check that A \Gamma A is the oblique projector onto
along N (A) and that AA \Gamma is the oblique projector onto
along g. Therefore A \Gamma A has eigenvalues 0 with multiplicity
1, and 1 with multiplicity likewise for AA \Gamma . Hence, it makes good sense
to construct preconditioners based on approximating A \Gamma (either implicitly or
explicitly), since in this case most eigenvalues of the preconditioned matrix
will be clustered around 1.
Next we consider the approximate inverse preconditioner AINV; see [4], [6].
This method is based on the observation that if Z and W are matrices whose
columns are A-biorthogonal, then W T diagonal matrix. When all
the leading principal minors of A (except possibly the last one) are nonzeros,
Z and W can be obtained by applying a generalized Gram-Schmidt process
to the unit basis vectors e 1 . In this case Z and W are unit upper
triangular. It follows from the uniqueness of the LDU factorization that
U \Gamma1 and LDU is the LDU factorization of A. The
diagonal matrix D is the same in both factorizations. An approximate inverse
in factorized form
W T can be obtained by dropping small entries
in the course of the generalized Gram-Schmidt process. Similar to ILU, this
incomplete inverse factorization is guaranteed to exist for nonsingular M -
matrices [4]; see the next section for the singular M-matrix case. In either
case,
W T is a nonnegative matrix. Hence, this preconditioner can
be interpreted as a direct (explicit) approximation to the (1; 2)-inverse A \Gamma of
Lastly, we take a look at sparse approximate inverse techniques based on
Frobenius norm minimization; see, e.g., [16] and [14]. With this class of meth-
ods, an approximate inverse G is computed by minimizing the functional
subject to some sparsity constraints. Here jj \Delta jj F denotes
the Frobenius matrix norm. The sparsity constraints could be imposed
a priori, or dynamically in the course of the algorithm. In either case, it is natural
to ask what kind of generalized inverse is being approximated by G when
A is a singular matrix. It can be shown that the Moore-Penrose pseudoinverse
A y is the matrix of smallest Frobenius norm that minimizes jjI \Gamma AXjj F ;
see for instance [23], page 428. Hence, in the singular case the SPAI preconditioner
can be seen as a sparse approximate Moore-Penrose inverse of A.
This is generally very different from the approximate (1; 2)-inverses obtained
by either ILU or AINV. For instance, SPAI will not produce a nonnegative
preconditioner in general.
In the next section we restrict our attention to the AINV preconditioner and
its application to Markov chain problems.
3 The AINV method for singular matrices
The AINV preconditioner [4], [6] is based on A-biorthogonalization. This is
a generalized Gram-Schmidt process applied to the unit basis vectors e i ,
n. In this generalization the standard inner product is replaced by
the bilinear form h(x; Ay. This process is well defined, in exact arith-
metic, if the leading principal minors of A are nonzero, otherwise some form
of pivoting (row and/or column interchanges) may be needed. If A is a nonsingular
M-matrix, all the leading principal minors are positive and the process
is well defined with no need for pivoting. This is perfectly analogous to the LU
factorization of A, and indeed in exact arithmetic the A-biorthogonalization
process computes the inverses of the triangular factors of A. When A is a
singular irreducible M-matrix, all the leading principal minors of A except
the n-th one (the determinant of are positive, and the process can still be
completed.
In order to obtain a sparse preconditioner, entries (fill-ins) in the inverse factors
Z and W less than a given drop tolerance in magnitude are dropped in the
course of the computation, resulting in an incomplete process. The stability
of the incomplete process for M-matrices was analyzed in [4]. In particular, if
d i denotes the i-th pivot, i.e., the i-th diagonal entry of -
D in the incomplete
process, then (Proposition 3.1 in [4]) -
Because
for an M-matrix, no pivot breakdown can occur.
Exactly the same argument applies to the case where A is an irreducible, singular M-matrix. In this case there can be no breakdown in the first n-1 steps of the incomplete A-biorthogonalization process, since the first n-1 leading principal minors are positive, and the pivots in the incomplete process cannot be smaller than the exact ones. And even if the n-th pivot d̄_n happened to be zero, it could simply be replaced by a positive number in order to have a nonsingular preconditioner. The argument in [4] shows that d̄_n must be a nonnegative number, and it is extremely unlikely that it will be exactly zero in the incomplete process.
Another way to guarantee the nonsingularity of the preconditioner is to perturb
the matrix A by adding a small positive quantity to the last diagonal
entry. This makes the matrix a nonsingular M-matrix, and the incomplete
A-biorthogonalization process can then be applied to this slightly perturbed
matrix to yield a well defined, nonsingular preconditioner. In practice, how-
ever, this perturbation is not necessary, since dropping in the factors typically
has an equivalent effect (see Theorem 3 below).
The AINV preconditioner has been extensively tested on a variety of symmetric
and nonsymmetric problems in conjunction with standard Krylov subspace
methods like conjugate gradients (for symmetric positive definite matrices)
and GMRES, Bi-CGSTAB and TFQMR (for unsymmetric problems). The
preconditioner has been found to be comparable to ILU methods in terms
of robustness and rates of convergence, with ILU methods being somewhat
faster on average on sequential computers. The main advantage of AINV over
the ILU-type methods is that its application within an iterative process only
requires matrix-vector multiplies, which are much easier to vectorize and to
parallelize than triangular solves.
Unfortunately, the computation of the preconditioner using the incomplete A-
biorthogonalization process is inherently sequential. One possible solution to
this problem, adopted in [8], is to compute the preconditioner sequentially on
one processor and then to distribute the approximate inverse factors among
processors in a way that minimizes communication costs while achieving good
load balancing. This approach is justified in applications, like those considered
in [8], in which the matrices are small enough to fit on the local memory of
one processor, and where the preconditioner can be reused a number of times.
In this case the time for computing the preconditioner is negligible relative
to the overall costs. In the Markov chain setting, however, the preconditioner
cannot be reused in general and it is imperative that set-up costs be minimized.
Furthermore, Markov chain problems can be very large, and it is desirable to
be able to compute the preconditioner in parallel.
4 The parallel preconditioner
In the present section we describe how to achieve a fully parallel precon-
ditioner. The strategy used to parallelize the preconditioner construction is
based on the use of graph partitioning. This approach was first proposed in
[3] in the context of solving nonsingular linear systems arising from the discretization
of partial differential equations.
The idea can be illustrated as follows. If p processors are available, graph partitioning can be used to decompose the adjacency graph associated with the sparse matrix A (or with A + A^T if A is not structurally symmetric) into p subgraphs of roughly equal size in such a way that the number of edge cuts is approximately minimized. Nodes which are connected by cut edges are removed from the subgraphs and put in the separator set. By numbering the nodes in the separator set last, a symmetric permutation Q^T A Q of A is obtained. The permuted matrix has the following structure:
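The block-bordered (arrow) form, referred to as (1) in the sequel, can be written as below; the names B_i and C_i for the coupling blocks between subdomain i and the separator set are introduced here for readability and are an assumption of this sketch.

```latex
Q^T A Q =
\begin{pmatrix}
A_1    &        &        & B_1    \\
       & \ddots &        & \vdots \\
       &        & A_p    & B_p    \\
C_1    & \cdots & C_p    & A_S
\end{pmatrix}.
\tag{1}
```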
The diagonal blocks A_i correspond to the interior nodes in the graph decomposition, and should have approximately the same order. The off-diagonal blocks B_i and C_i represent the connections between the subgraphs and the separator set, and the diagonal block A_S the connections between nodes in the separator set. The order of A_S is equal to the cardinality of the separator set and should be kept as small as possible. Note that because of the irreducibility assumption, each block A_i is a nonsingular M-matrix, and each of the LDU factorizations A_i = L_i D_i U_i exists. Let Q^T A Q = LDU be the LDU factorization of Q^T A Q. Then it is easy to see that L^{-1} and U^{-1} inherit the block structure of (1); in particular, the trailing diagonal block of L^{-1} is L_S^{-1}, the inverse of the unit lower triangular factor of the Schur complement matrix S = A_S - Σ_{i=1}^p C_i A_i^{-1} B_i. In the next section we show that S is a singular, irreducible M-matrix, hence it has a well defined LDU factorization S = L_S D_S U_S. Likewise, the trailing diagonal block of U^{-1} is U_S^{-1}, the inverse of the unit upper triangular factor of S. It is important to observe that L^{-1} and U^{-1} preserve a good deal of sparsity, since fill-in can occur only within the nonzero blocks.
The matrix D is simply defined as D = diag(D_1, ..., D_p, D_S); note that all diagonal entries of D are positive except for the last one, which is zero. The (1,2)-inverse D^- of D is defined in the obvious way, by inverting the nonzero diagonal entries and leaving the zero entry unchanged.
Hence, we can write the (generally dense) generalized inverse (Q^T A Q)^- of Q^T A Q as a product of sparse matrices, (Q^T A Q)^- = U^{-1} D^- L^{-1}. In practice, however, the inverse factors L^{-1} and U^{-1} contain too many nonzeros. Since we are only interested in computing a preconditioner, we just need to compute sparse approximations to L^{-1} and U^{-1}.
This is accomplished as follows. With graph partitioning, the matrix is distributed so that processor P_i holds A_i together with the corresponding coupling blocks B_i and C_i. One of the processors, marked as P_S, should also hold A_S. Each processor then computes sparse approximate inverse factors Z̄_i and W̄_i such that Z̄_i D̄_i^{-1} W̄_i^T ≈ A_i^{-1}, using the AINV algorithm. Once this is done, each processor computes the product C_i (Z̄_i D̄_i^{-1} W̄_i^T) B_i, its contribution to the approximate Schur complement. Until this point the computation proceeds in parallel with no communication. The next step is the accumulation of the approximate Schur complement S̄. This accumulation is done in steps with a fan-in across the processors. In the next section we show that although the exact Schur complement S is singular, the approximate Schur complement S̄ is a nonsingular M-matrix under rather mild conditions.
As soon as S̄ is computed, processor P_S computes a factorized sparse approximate inverse of S̄ using the AINV algorithm. This is a sequential bottleneck, and explains why the size of the separator set must be kept small. Once the approximate inverse factors of S̄ are computed, they are broadcast to all remaining processors. (Actually, the preconditioner application can be implemented in such a way that only part of this information needs to be broadcast.) Notice that because only matrix-vector products are required in the application of the preconditioner, there is no need to assemble these factors into a single matrix explicitly. In this way, a factorized sparse approximate (1,2)-inverse of Q^T A Q is obtained.
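To illustrate how such a factorized approximate (1,2)-inverse is applied using only matrix-vector products, here is a minimal serial sketch; it assumes the sparse factors have already been computed and assembled into global Z, W and a pivot vector d (the two-level block structure and the parallel distribution are abstracted away), so it is only an illustration of the preconditioning step, not the distributed implementation described above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator

def factored_preconditioner(Z, W, d, tol=1e-14):
    """Return a LinearOperator applying M = Z * pinv(diag(d)) * W^T,
    i.e. a factorized approximate (1,2)-inverse; zero (or tiny) pivots
    are skipped, which realizes the (1,2)-inverse of the diagonal part."""
    dinv = np.where(np.abs(d) > tol, 1.0 / d, 0.0)
    n = Z.shape[0]

    def apply(r):
        return Z @ (dinv * (W.T @ r))   # only mat-vecs and one diagonal scaling

    return LinearOperator((n, n), matvec=apply)
```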
This is a two-level preconditioner, in the sense that the computation of the
preconditioner involves two phases. In the first phase, sparse approximate inverses
of the diagonal blocks A i are computed. In the second phase, a sparse
approximate inverse of the approximate Schur complement S̄ is computed. Without this second step the preconditioner would reduce to a block Jacobi
method with inexact block solves (in the terminology of domain decomposition
methods, this is additive Schwarz with inexact solves and no overlap).
It is well known that for a fixed problem size, the rate of convergence of
this preconditioner tends to deteriorate as the number of blocks (subdomains)
grows. Hence, assuming that each block is assigned to a processor in a parallel
computer, this method would not be scalable. However, the approximate
Schur complement phase provides a global exchange of information across the
processors, acting as a "coarse grid" correction in which the "coarse grid"
nodes are interface nodes (i.e., they correspond to vertices in the separator
set). As we will see, this prevents the number of iterations from growing as
the number of processors grows. As long as the cardinality of the separator
set is small compared to the cardinality of the subdomains (subgraphs), the
algorithm is scalable in terms of parallel efficiency. Indeed, in this case the
application of the preconditioner at each step of a Krylov subspace method
like GMRES or Bi-CGSTAB is easily implemented in parallel with relatively
little communication needed.
5 The approximate Schur complement
In this section we investigate the existence of the approximate (1; 2)-inverse
of the generator matrix A. The key role is played by the (approximate) Schur
complement.
First we briefly review the situation for the case where A is a nonsingular
M-matrix. Assume A is partitioned as

A = ( A_11  A_12 ; A_21  A_22 ).

Then it is well-known that the Schur complement S = A_22 - A_21 A_11^{-1} A_12 is also a nonsingular M-matrix; see, e.g., [1]. Moreover, the same is true of any approximate Schur complement S̄ = A_22 - A_21 X_11 A_12, provided that O ≤ X_11 ≤ A_11^{-1}, where the inequalities hold componentwise; see [1], page 264.
In the singular case, the situation is slightly more complicated. In the following
we will examine some basic properties of the exact Schur complement of a
singular, irreducible M-matrix A corresponding to an ergodic Markov chain.
Recall that A = I - P^T, where P is the irreducible row-stochastic transition probability matrix.

Lemma 1 Let P be an irreducible row-stochastic matrix partitioned as P = ( P_11  P_12 ; P_21  P_22 ). Assume that A = I - P^T is partitioned conformally as

A = ( A_11  A_12 ; A_21  A_22 ).

Then the Schur complement S of A_11 in A is a singular, irreducible M-matrix with a one-dimensional null space.

Proof: Consider the stochastic complement [22] Σ of P_22 in P:

Σ = P_22 + P_21 (I - P_11)^{-1} P_12.

Note that I - P_11 is invertible since P is irreducible. From the theory developed in [22], we know that Σ is row stochastic and irreducible since P is. Consider now the Schur complement S of A_11 in A:

S = A_22 - A_21 A_11^{-1} A_12 = I - Σ^T.

Clearly, S is an irreducible singular M-matrix. It follows (Perron-Frobenius theorem) that S has a one-dimensional null space. 2
The previous lemma is especially useful in cases where the exact Schur complement
is used. In the context of preconditioning it is often important to
know properties of approximate Schur complements. As shown in the previous
section, graph partitioning induces a reordering and block partitioning of
the matrix A in the form (1), where A_11 = diag(A_1, ..., A_p) and A_22 = A_S. We are interested in properties of the approximate Schur complement S̄ obtained by approximating the inverses of the diagonal blocks A_i with AINV:

S̄ = A_S - Σ_{i=1}^p C_i (Z̄_i D̄_i^{-1} W̄_i^T) B_i.   (2)
In particular, we are interested in conditions that guarantee that S̄ is a nonsingular M-matrix, in which case the AINV algorithm can be safely applied to S̄, resulting in a well defined preconditioner. We begin with a lemma. Recall that a Z-matrix is a matrix with nonpositive off-diagonal entries [9].
Lemma 2 Let S be a singular, irreducible M-matrix. Let C ≥ O, C ≠ O, be such that S̄ := S + C is a Z-matrix. Then S̄ is a nonsingular M-matrix.

Proof: Since S is a singular M-matrix, we have S = ρ(B) I - B with B ≥ O, where ρ(B) denotes the spectral radius of B. Let D denote the diagonal part of C. Since S̄ is a Z-matrix, we have that B ≥ C - D. Therefore we can write the modified S as S̄ = ρ(B) I - B + C. We distinguish the two following simple cases. Assume first that D = O. Then S̄ is a nonsingular M-matrix since it can be written as ρ(B) I - (B - C) with ρ(B - C) < ρ(B). The last inequality follows from the irreducibility of B and properties of nonnegative matrices; see [9], page 27, Cor. 1.5 (b). Assume, on the other hand, that C = D, so that S̄ = ρ(B) I + D - B. Note that by assumption, at least one of the diagonal entries d_ii of D must be positive. Let δ denote the largest such positive diagonal entry. Since B + δI is irreducible, ρ(B + δI - D) < ρ(B + δI) = ρ(B) + δ, similar to the previous case. It follows that S̄ = (ρ(B) + δ) I - (B + δI - D) is a nonsingular M-matrix. Finally, if both D ≠ O and C ≠ D, the result follows by combining the two previous arguments. 2
In the context of our parallel preconditioner, this lemma says that if the inexactness in the approximate inverses of the diagonal blocks A_i results in an approximate Schur complement S̄ that is still a Z-matrix, and furthermore if the difference S̄ - S is nonnegative and nonzero, then S̄ is a nonsingular M-matrix.
The following theorem states sufficient conditions for the nonsingularity of the
approximate Schur complement. B_i and B_{*j} denote the i-th row and the j-th
column of matrix B, respectively.
Theorem 3 Let

A = ( A_11  A_12 ; A_21  A_22 )

be a singular irreducible M-matrix, with A_11 ∈ IR^{m×m}. Assume that Ã_11 is an approximation to A_11^{-1} such that O ≤ Ã_11 ≤ A_11^{-1}. Furthermore, assume that there exist indices i and j such that the following three conditions are satisfied:

(i) (Ã_11)_{ij} < (A_11^{-1})_{ij};
(ii) the i-th column of A_21 is nonzero, i.e., (A_21)_{*i} ≠ 0;
(iii) the j-th row of A_12 is nonzero, i.e., (A_12)_j ≠ 0.

Then the approximate Schur complement S̄ = A_22 - A_21 Ã_11 A_12 is a nonsingular M-matrix.
Proof: From Lemma 1 we know that the exact Schur complement S of A_11 in A is a singular, irreducible M-matrix. Note that the approximate Schur complement S̄ induced by an approximation Ã_11 of the block A_11^{-1} and the exact Schur complement S are related as follows:

S̄ = A_22 - A_21 Ã_11 A_12 = S + A_21 (A_11^{-1} - Ã_11) A_12 = S + C,

with C ≥ O since A_21 ≤ O, A_12 ≤ O, and A_11^{-1} - Ã_11 ≥ O. Moreover, S̄ is a Z-matrix by its definition. From the assumption that Ã_11 ≠ A_11^{-1} and that there exists at least one entry α_ij of Ã_11 strictly less than the corresponding entry of A_11^{-1}, we see that if the corresponding column i of A_21 and row j of A_12 are both nonzero, there is a nonzero entry in C. The result now follows from Lemma 2. 2
Let us now apply these results to the preconditioner described in the previous section. In this case, A_11 is block diagonal and the AINV algorithm is used to approximate the inverse of each diagonal block separately, in parallel. The approximate Schur complement (2) is the result of subtracting p terms of the form C_i (Z̄_i D̄_i^{-1} W̄_i^T) B_i from A_S. We refer to these terms as (Schur complement) updates. Each one of these updates is nonnegative and approximates the exact update C_i A_i^{-1} B_i from below in the (entrywise) nonnegative ordering, since O ≤ Z̄_i D̄_i^{-1} W̄_i^T ≤ A_i^{-1}; see [6]. Theorem 3 says that as long as at least one of these updates has an entry that is strictly less than the corresponding entry in C_i A_i^{-1} B_i, the approximate Schur complement S̄ is a nonsingular M-matrix.
In practice, these conditions are satisfied as a result of dropping in the approximate
inversion of the diagonal blocks A i . It is nevertheless desirable to have
rigorous conditions that ensure nonsingularity. The following theorem gives a
sufficient condition for having a nonsingular approximate Schur complement
as a consequence of dropping in AINV. Namely, it specifies conditions under which any dropping forces S̄ to be nonsingular. Note that the conditions of
this theorem do not apply to the global matrix (1), since A 11 is block diagonal
and therefore reducible. However, they can be applied to any individual
Schur complement update for which the corresponding diagonal block A i is
irreducible, making the result fairly realistic.
Theorem 4 Let A_11 ∈ IR^{m×m} and the singular M-matrix

A = ( A_11  A_12 ; A_21  A_22 )

be both irreducible. Let A_11 = L_11 U_11 be the LU factorization of A_11. Assume that in each column of L_11 (except the last one) and in each row of U_11 (except the last one) there is at least one nonzero entry in addition to the diagonal one:

for each j < m there is an i > j with (L_11)_{ij} ≠ 0,   (3)

and

for each i < m there is a j > i with (U_11)_{ij} ≠ 0.   (4)

Denote by Z̄_11, D̄_11, W̄_11 the factorized sparse approximate inverse of A_11 obtained with the AINV algorithm. Then the approximate Schur complement S̄ is a nonsingular M-matrix provided that Z̄_11 ≠ Z_11 and W̄_11 ≠ W_11.
Proof: First note that the two conditions for nonzero entries in L_11 and U_11 are implied (barring fortuitous cancellation) by similar conditions on the entries in the lower and upper triangular parts of A_11, i.e., on tril(A_11) and triu(A_11). Here tril(B) and triu(B) denote the lower and upper triangular part of matrix B, respectively. These conditions are easier to check than the weaker ones on the triangular factors of A_11. Conditions (3) and (4) imply that there is a path in the graph of L_11^T and a path in the graph of U_11 connecting every node to the last one. Because the matrix A is irreducible, A_12 ≠ O and A_21 ≠ O. This means that there exist indices r and s such that the r-th column of A_21 and the s-th row of A_12 are nonzero. The existence of the previously mentioned paths implies that the corresponding entries of W_11 and Z_11 are nonzero, where W_11 and Z_11 are the exact inverse factors of A_11. The approximate inverse factors from the AINV algorithm satisfy [6]

O ≤ Z̄_11 ≤ Z_11,   O ≤ W̄_11 ≤ W_11.

Hence, the conditions of Theorem 3 are satisfied and the result is proved. 2
It is instructive to consider two extreme cases. If A 11 is diagonal, then the
approximate Schur complement is necessarily equal to the exact one, and is
therefore singular. In this case, of course, the conditions of the last theorem
are violated. On the other hand if A 11 is irreducible and tridiagonal, its inverse
factors are completely dense and by the last theorem it is enough to drop a
single entry in each inverse factor to obtain a nonsingular approximate Schur
complement.
The purpose of the theory developed here is to shed light on the observed
robustness of the proposed preconditioner rather than to serve as a practical
tool. In other words, it does not seem to be necessary to check these conditions
in advance. Indeed, thanks to dropping, the approximate Schur complement was always found to be a nonsingular M-matrix in actual computations.
6 Numerical experiments
In this section we report on results obtained with a parallel implementation of
the preconditioner on several Markov chain problems. The underlying Krylov
subspace method was Bi-CGSTAB [27], which was found to perform well for
Markov chains in [15]. Our FORTRAN implementation uses MPI and dynamic
memory allocation. The package METIS [18] was used for the graph
partitioning, working with the graph of A + A T whenever A was not structurally
symmetric.
The test problems arise from real Markov chain applications and were provided
by T. Dayar. These matrices have been used in [15] to compare different
methods in a sequential environment. A description of the test problems is
provided in Table 1 below. Here n is the problem size and nnz the number of
nonzeros in the matrix. All the test problems are structurally nonsymmetric
except ncd and mutex. Most matrices are unstructured.
Tables 2-11 contain the test results. All runs were performed on an SGI Origin
2000 at Los Alamos National Laboratory (using up to 64 processors), except
for those with matrices leaky, ncd and 2d which were performed on an Origin
2000 at the Helsinki University of Technology (using up to 8 processors). In
all cases, the initial guess was a constant nonzero vector; similar results were
obtained with a randomly generated initial guess. In the tables, "P-time"
denotes the time to compute the preconditioner, "P-density" the ratio of the
number of nonzeros in the preconditioner to the number of nonzeros in the
matrix A, "Its" denotes the number of iterations needed to reduce the ' 2 -
norm of the initial residual by eight orders of magnitude, "It-time" the time
to perform the iterations, and "Tot-time" the sum of "P-time" and "It-time."
All timings are in seconds. Furthermore, "Sep-size" is the cardinality of the
Table 1. Information on test problems.

Matrix    n       nnz     Application
hard      20301   140504  Complete buffer sharing in ATM networks
leaky     --      --      Multiplexing model of a leaky bucket
2d        16641   66049   A two-dimensional Markov chain model
telecom   20491   101041  A telecommunication model
ncd       23426   156026  NCD queueing network
mutex     39203   563491  Resource sharing model
qn        104625  593115  A queueing network
separator set (i.e., the order of the Schur complement matrix) and "Avg-
dom" the average number of vertices in a subdomain (subgraph) in the graph
partitioning of the problem. The drop tolerance τ in the AINV algorithm was the same at both levels of the preconditioner (approximate inversion of A_i for i = 1, ..., p, and approximate inversion of the approximate Schur complement S̄), except for the mutex problem (see below).
Tables 2-4 present results for the matrix hard, using three different values of the drop tolerance in the AINV algorithm. It can be seen that changing the value of τ changes the density of the preconditioner and the number of itera-
tions. However, the total timings are scarcely affected, especially if at least 8
processors are being used. See [8] for a similar observation in a different con-
text. It is also clear from these runs that good speed-ups are obtained so long as
the size of the separator set is small compared to the average subdomain size.
As soon as the separator set is comparable in size to the average subdomain or
larger, the sequential bottleneck represented by the Schur complement part of
the computation begins to dominate and performance deteriorates. The number
of iterations remains roughly constant (with a slight downward trend) as
the number of processors grows. This is due to the influence of the approximate
Schur complement.
The same problem was also solved using Bi-CGSTAB with diagonal precondi-
tioning. This required approximately 700 iterations and 16.4 seconds on one
processor. If implemented in parallel, this method would probably give results
only slightly worse than those obtained with AINV. A similar observation
applies to matrices qn and mutex. On the other hand, diagonally preconditioned
Bi-CGSTAB did not converge on the telecom problem. Hence, AINV
is a more robust approach. Furthermore, the ability to reduce the number of
iterations, and therefore the total number of inner products, is an advantage
on distributed memory machines, on which inner products incur an additional
penalty due to the need for global communication.
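For reference, a diagonally preconditioned Bi-CGSTAB run of the kind mentioned above can be sketched as follows; this uses SciPy's bicgstab (not the FORTRAN/MPI code of the experiments) together with a Jacobi preconditioner, so it is only an illustrative serial analogue.

```python
import numpy as np
from scipy.sparse.linalg import bicgstab, LinearOperator

def diag_preconditioned_bicgstab(A, b, maxiter=1000):
    """Solve A x = b with Bi-CGSTAB and a Jacobi (diagonal) preconditioner."""
    d = A.diagonal()
    d = np.where(np.abs(d) > 0, d, 1.0)           # guard against zero diagonal
    M = LinearOperator(A.shape, matvec=lambda r: r / d)
    x, info = bicgstab(A, b, M=M, maxiter=maxiter)
    return x, info                                # info == 0 means convergence
```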
Table 2. Matrix hard.
P-time 2.35 1.19 0.65 0.40 0.34
P-density 6.21 5.90 5.67 5.14 4.52
Its
It-time 9.43 2.91 1.42 0.99 0.88
Tot-time 11.8 4.10 2.07 1.39 1.22
Sep-size 156 321 540 900 1346
Avg-dom 10073 5155 2470 1213 592
Table 3. Matrix hard.
P-time 1.19 0.60 0.35 0.22 0.21
P-density 3.10 3.02 2.95 2.78 2.52
Its 106 109 99 98 97
It-time 6.85 2.90 1.59 1.05 1.19
Tot-time 8.04 3.50 1.94 1.27 1.40
Table 4. Matrix hard.
P-time 0.73 0.38 0.22 0.15 0.14
P-density 1.39 1.36 1.34 1.28 1.19
Its 170 167 159 151 153
It-time 6.03 2.52 1.50 1.30 1.64
Tot-time 6.76 2.90 1.72 1.45 1.78
Results for matrices leaky and 2d are reported in Tables 5 and 6. These two
matrices are rather small, so only up to 8 processors were used. Note that the
speed-ups are better for 2d than for leaky. Also notice that the preconditioner
is very sparse for leaky, but rather dense for 2d.
Tables 7 and 8 refer to the telecom test problem. Here we found that very small values of τ (and, consequently, very dense preconditioners) are necessary
in order to achieve convergence in a reasonable number of iterations. This
problem is completely different from the matrices arising from the solution of
elliptic partial differential equations. Notice the fairly small size of the sep-
Table 5. Matrix leaky.
P-time 0.26 0.17 0.09
P-density
No. its 134 134 132
It-time 1.39 0.84 0.62
Tot-time 1.65 1.01 0.71
Sep-size 48 144 335
Avg-dom 4105 2028 990
Table 6. Matrix 2d.
P-time 0.38 0.16 0.09
P-density 8.60 7.12 6.54
No. its 33 36 37
It-time 1.46 0.56 0.38
Tot-time 1.84 0.72 0.47
Sep-size 129 308 491
Avg-dom 8256 4083 2018
Table 7. Matrix telecom.
P-time 24.9 10.0 3.35 1.21 1.05 1.37
P-density 167 116 70 44 28
Its 11 12 14 13 12 12
It-time 36.5 14.0 6.04 0.96 0.38 0.43
Tot-time 61.4 24.0 9.39 2.17 1.43 1.80
Sep-size 34 97 220 471 989 1603
Avg-dom 10229 5099 2534 1251 609 295
arator set, which causes the density of the preconditioner to decrease very
fast as the number of processors (and corresponding subdomains) grows. As
a result, speed-ups are quite good (even superlinear) up to 32 processors. For
a sufficiently high number of processors, the density of the preconditioner be-
Table 8. Matrix telecom.
P-time 7.04 3.78 1.37 0.61 0.49 0.56
P-density
No. its
It-time 59.4 27.0 12.8 2.18 1.36 1.71
Tot-time 66.4 30.8 14.2 2.79 1.86 2.27
Table 9. Matrix ncd.
P-time 1.42 0.69 0.31
P-density 4.13 2.65 1.90
No. its 292 288 285
It-time 17.0 8.45 6.38
Tot-time 18.4 9.14 6.69
Sep-size 3911 6521 12932
Avg-dom 9758 4226 1379
Table 10. Matrix mutex.
P-time 1.19 0.40 0.10
P-density 0.14 0.14 0.14
No. its
It-time 1.64 1.19 1.51
Tot-time 2.83 1.59 1.61
Sep-size 13476 17749 20654
Avg-dom 12864 5363 2319
comes acceptable, and the convergence rate is the same or comparable to that
obtained with a very dense preconditioner on a small number of processors.
Tables 9 and 10 report results for matrices ncd and mutex, respectively. For the
first matrix we see that the separator set is larger than the average subdomain
already for small values of p; nevertheless, it is possible to use effectively up
to 8 processors. Matrix mutex exhibits a behavior that is radically different
Table 11. Matrix qn.
P-time 4.12 2.37 2.26 2.82
P-density 1.27 1.23 1.18 1.14
No. its
It-time 13.4 6.33 4.58 5.68
Tot-time 17.5 8.70 6.84 8.50
Sep-size 2879 6579 13316 20261
Avg-dom 50873 24511 11414 5273
from that of matrices arising from PDE's in two or three space dimensions.
The separator set is huge already for p = 2. This is due to the fact that the
problem has a state space (graph) of high dimensionality, leading to a very
unfavorable surface-to-volume ratio in the graph partitioning. In order to solve
this problem, we had to use two different values of τ in the two levels of AINV; at the subdomain level we used the usual dropping strategy, while when forming the approximate Schur complement we dropped everything outside the main diagonal, resulting in a diagonal S̄. In spite of this, convergence was very rapid. Nevertheless, it does not pay to use a large number of processors.
In Table 11 we report results with the largest example in our data set, qn.
This model consists of a network of three queues, and is analogous to a three-dimensional
problem. Because of the fairly rapid growth of the separator
set, it does not pay to use a large number of processors.
The test problems considered so far, although realistic, are relatively small.
Hence, it is difficult to make efficient use of more than 16 processors, with
the partial exceptions of matrices hard and telecom. To test the scalability of
the proposed solver on larger problems, we generated some simple reliability
problems analogous to those used in [2] and [5]; see also [26], page 135. These
problems have a closed form solution. In Table 12 we show timing results for
running 100 preconditioned Bi-CGSTAB iterations on a reliability problem whose order and number of nonzero entries are considerably larger than those of the previous test problems. This problem is sufficiently large to show the good scalability of the algorithm up to 64 processors.
We conclude this section on numerical experiments by noting that in virtually
all the runs, the preconditioner construction time has been quite modest and
the total solution time has been dominated by the cost of the iterative phase.
Table 12. Reliability model.
P-time 9.53 4.86 2.49 1.31 0.90 0.90
P-density 4.05 4.02 3.97 3.90 3.80 3.70
It-time 138.8 70.5 37.2 16.8 9.47 7.96
Tot-time 148.3 75.4 39.7 18.1 10.4 8.86
Sep-size 542 1229 2186 3268 4919 7336
Avg-dom 124729 62193 30977 15421 7659 3792
7 Conclusions
We have investigated the use of a parallel preconditioner for Krylov subspace
methods in the context of Markov chain problems. The preconditioner is a
direct approximation, in factorized form, of a (1; 2)-inverse of the generator
matrix A, and is based on an A-biorthogonalization process. Parallelization
is achieved through graph partitioning, although other approaches are also
possible. The existence of the preconditioner has been justified theoretically,
and numerical experiments on a parallel computer have been carried out in
order to assess the effectiveness and scalability of the proposed technique.
The numerical tests indicate that the preconditioner construction costs are
modest, and that good scalability is possible provided that the amount of
work per processor is sufficiently large compared to the size of the separator
set.
The method appears to be well suited for problems in which the generator matrix A can be explicitly formed and stored. Parallelization based
on graph partitioning is usually effective, with the possible exception of problems
with a state space of high dimensionality (i.e., a large state descriptor
set). For such problems, a different parallelization strategy is needed in order
to achieve scalability of the implementation.
Acknowledgements. We would like to thank Professors M. Gutknecht and W. Schönauer for their kind invitation to take part in this commemoration of our friend and colleague Rüdiger Weiss. We are indebted to Tuğrul Dayar for providing the test matrices used in the numerical experiments and for useful information about these problems, as well as for his comments on an early version of the paper. Thanks also to Carl Meyer for his valuable input on generalized inverses.
--R
The arithmetic mean method for finding the stationary vector of Markov chains
A sparse approximate inverse preconditioner for the conjugate gradient method
A parallel block projection method of the Cimmino type for finite Markov chains
A sparse approximate inverse preconditioner for nonsymmetric linear systems
A comparative study of sparse approximate inverse preconditioners
Approximate inverse preconditioning in the parallel solution of sparse eigenproblems
Nonnegative Matrices in the Mathematical Sciences (Academic Press
Distributed steady state analysis using Kronecker algebra
Incomplete factorization of singular M-matrices
Generalized Inverses of Linear Transformations (Pitman Publishing Ltd.
Fast wavelet iterative solvers applied to the Neumann problem
A priori sparsity patterns for parallel sparse approximate inverse preconditioners
Comparison of partitioning techniques for two-level iterative solvers on large
Parallel preconditioning with sparse approximate inverses
Asynchronous iterations for the solution of Markov systems
A fast and high quality multilevel scheme for partitioning irregular graphs
Distributed disk-based solution techniques for large Markov models
An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix
The role of the group generalized inverse in the theory of finite Markov chains
Stochastic complementation, uncoupling Markov chains, and the theory of nearly reducible systems
Matrix Analysis and Applied Linear Algebra (SIAM
Experimental studies of parallel iterative solutions of Markov chains with block partitions
Preconditioned Krylov subspace methods for the numerical solution of Markov chains
Introduction to the Numerical Solution of Markov Chains (Princeton University Press
BiCGSTAB: A fast and smoothly converging variant of Bi-CG for the solution of nonsymmetric linear systems
--TR
Incomplete factorization of singular M-matrices
Stochastic complementation, uncoupling Markov chains, and the theory of nearly reducible systems
BI-CGSTAB: a fast and smoothly converging variant of BI-CG for the solution of nonsymmetric linear systems
Iterative solution methods
A Sparse Approximate Inverse Preconditioner for the Conjugate Gradient Method
Parallel Preconditioning with Sparse Approximate Inverses
A Sparse Approximate Inverse Preconditioner for Nonsymmetric Linear Systems
A Fast and High Quality Multilevel Scheme for Partitioning Irregular Graphs
A comparative study of sparse approximate inverse preconditioners
Matrix analysis and applied linear algebra
Comparison of Partitioning Techniques for Two-Level Iterative Solvers on Large, Sparse Markov Chains
A Priori Sparsity Patterns for Parallel Sparse Approximate Inverse Preconditioners
--CTR
Aliakbar Montazer Haghighi , Dimitar P. Mishev, A parallel priority queueing system with finite buffers, Journal of Parallel and Distributed Computing, v.66 n.3, p.379-392, March 2006
Ilias G. Maglogiannis , Elias P. Zafiropoulos , Agapios N. Platis , George A. Gravvanis, Computing the success factors in consistent acquisition and recognition of objects in color digital images by explicit preconditioning, The Journal of Supercomputing, v.30 n.2, p.179-198, November 2004
Nicholas J. Dingle , Peter G. Harrison , William J. Knottenbelt, Uniformization and hypergraph partitioning for the distributed computation of response time densities in very large Markov models, Journal of Parallel and Distributed Computing, v.64 n.8, p.908-920, August 2004
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | singular matrices;generalized inverses;Bi-CGSTAB;graph partitioning;AINV;parallel preconditioning;discrete Markov chains;iterative methods |
606859 | On Two Applications of H-Differentiability to Optimization and Complementarity Problems. | In a recent paper, Gowda and Ravindran (Algebraic univalence theorems for nonsmooth functions, Research Report, Department of Mathematics and Statistics, University of Maryland, Baltimore, MD 21250, March 15, 1998) introduced the concepts of H-differentiability and H-differential for a function f : Rn Rn and showed that the Frchet derivative of a Frchet differentiable function, the Clarke generalized Jacobian of a locally Lipschitzian function, the Bouligand subdifferential of a semismooth function, and the C-differential of a C-differentiable function are particular instances of H-differentials.In this paper, we consider two applications of H-differentiability. In the first application, we derive a necessary optimality condition for a local minimum of an H-differentiable function. In the second application, we consider a nonlinear complementarity problem corresponding to an H-differentiable function f and show how, under appropriate conditions on an H-differential of f, minimizing a merit function corresponding to f leads to a solution of the nonlinear complementarity problem. These two applications were motivated by numerous studies carried out for C1, convex, locally Lipschitzian, and semismooth function by various researchers. | Introduction
In a recent paper [10], Gowda and Ravindran introduced the concepts of H-differentiability and H-differential for a function f : R^n → R^n. They showed that Fréchet differentiable (locally Lipschitzian, semismooth, C-differentiable) functions are H-differentiable (at a given x) with an H-differential given by {∇f(x)} (respectively, ∂f(x), ∂_B f(x), the C-differential). In their paper, Gowda and Ravindran investigated the injectivity of an H-differentiable function based on conditions on H-differentials. Also, in [25], H-differentials were used to characterize P (P_0)-functions.
In this paper, we consider two applications of H-differentiability. In the first
application, we derive a necessary optimality condition for a local minimum of an
H-differentiable real valued function. Specifically, we show in Theorem 3 that if x* is a local minimum of such a function f, then 0 ∈ co T(x*), where T(x*) is an H-differential of f at x*.
In the second application, we consider a nonlinear complementarity problem NCP(f) corresponding to an H-differentiable function f : R^n → R^n, i.e., the problem of finding an x* ∈ R^n such that x* ≥ 0, f(x*) ≥ 0, and ⟨x*, f(x*)⟩ = 0. By considering an NCP function Φ associated with NCP(f) so that Φ(x) = 0 if and only if x solves NCP(f), and the corresponding merit function Ψ(x) := (1/2)||Φ(x)||^2 in this paper (see Sections 6, 7, and 8), we show how, under appropriate P_0 (P, regularity)-conditions on an H-differential of f, finding a local/global minimum of Ψ (or a 'stationary point' of Ψ) leads to a solution of the given nonlinear complementarity problem. Our results unify/extend various similar results proved in the literature for C^1, locally Lipschitzian, and semismooth functions [1], [5], [6], [7], [8],
[9], [11], [12], [13], [14].
2. Preliminaries
We regard vectors in R n as column vectors. We denote the inner-product between
two vectors x and y in R^n by either x^T y or ⟨x, y⟩. Vector inequalities are interpreted componentwise. For a set E ⊆ R^n, co E denotes the convex hull of E and \overline{co} E denotes the closure of co E. For a differentiable function f : R^n → R^n, ∇f(x) denotes the Jacobian matrix of f at x. For a matrix A, A_i denotes the i-th row of A.
A function φ : R^2 → R is called an NCP function if φ(a, b) = 0 ⟺ a ≥ 0, b ≥ 0, ab = 0. For the problem NCP(f), we define

Φ(x) := ( φ(x_1, f_1(x)), ..., φ(x_n, f_n(x)) )^T   (2)

and, by abuse of language, call Φ(x) an NCP function for NCP(f).
We now recall the following definition and examples from Gowda and Ravindran
[10].
Definition 1. Given a function f : Ω ⊆ R^n → R^m, where Ω is an open set in R^n and x* ∈ Ω, we say that a nonempty subset T(x*) (also denoted by T_f(x*)) of R^{m×n} is an H-differential of f at x* if for every sequence {x_k} ⊆ Ω converging to x*, there exist a subsequence {x_{k_j}} and a matrix A ∈ T(x*) such that

f(x_{k_j}) - f(x*) - A(x_{k_j} - x*) = o(||x_{k_j} - x*||).

We say that f is H-differentiable at x* if f has an H-differential at x*.
A useful equivalent definition of an H-differential T(x*) is: for any sequence x_k := x* + t_k d_k with t_k ↓ 0 and ||d_k|| = 1 for all k, there exist convergent subsequences t_{k_j} ↓ 0 and d_{k_j} → d, and a matrix A ∈ T(x*), such that

lim_{j→∞} [ f(x* + t_{k_j} d_{k_j}) - f(x*) ] / t_{k_j} = A d.
Remarks As noted by a referee, it is easily seen that if a function f : R^n → R^m is H-differentiable at a point x̄, then there exist a constant L > 0 and a neighbourhood B(x̄, δ) of x̄ with

||f(x) - f(x̄)|| ≤ L ||x - x̄||   for all x ∈ B(x̄, δ).   (4)

Conversely, if condition (4) holds, then T(x̄) := R^{m×n} can be taken as an H-differential of f at x̄. We thus have, in (4), an alternate description of H-differentiability. But, as we see in the sequel, it is the identification of an appropriate H-differential that becomes important and relevant. Clearly, any function locally Lipschitzian at x̄ will satisfy (4). For real valued functions, condition (4) is known as the 'calmness' of f at x̄. This concept has been well studied in the literature of nonsmooth analysis (see [24], Chapter 8).
As noted in [10], (i) any superset of an H-differential is an H-differential, (ii)
H-differentiability implies continuity, and (iii) H-differentials enjoy simple sum,
product and chain rules.
We include the following examples from [10].
Example 1 If f : R^n → R^m is Fréchet differentiable at x* ∈ R^n, then f is H-differentiable at x* with {∇f(x*)} as an H-differential.
Example 2 Let f : R^n → R^m be locally Lipschitzian at each point of an open set Ω ⊆ R^n. Let Ω_f be the set of all points in Ω where f is Fréchet differentiable. For x* ∈ Ω, let

∂_B f(x*) := { lim ∇f(x_k) : x_k → x*, x_k ∈ Ω_f }

denote the Bouligand subdifferential of f at x*. Then the (Clarke) generalized Jacobian [2] ∂f(x*) = co ∂_B f(x*) is an H-differential of f at x*.
Example 3 Consider a locally Lipschitzian function f : R^n → R^m that is semismooth at x* ∈ Ω [17], [20], [22]. This means that for any sequence x_k → x* and for any V_k ∈ ∂f(x_k),

f(x_k) - f(x*) - V_k(x_k - x*) = o(||x_k - x*||).

Then the Bouligand subdifferential ∂_B f(x*) is an H-differential of f at x*. In particular, this holds if f is piecewise smooth, i.e., there exist continuously differentiable functions f_1, ..., f_N such that f(x) ∈ { f_1(x), ..., f_N(x) } for every x.
Example 4 Suppose f : R^n → R^n is C-differentiable in a neighborhood D of x*. This means that there is a compact upper semicontinuous multivalued mapping x ↦ T(x) ⊆ R^{n×n} satisfying the following condition at any a ∈ D: for x ∈ D and A ∈ T(a),

f(x) - f(a) - A(x - a) = o(||x - a||).

Then f is H-differentiable at x* with T(x*) as an H-differential. See [21] for further details on C-differentiability.
We recall the definitions of P 0 and P-functions (matrices).
Definition 2. For a function f : R^n → R^n, we say that f is a P_0 (P)-function if, for any x ≠ y in R^n,

max_{i : x_i ≠ y_i} (x_i - y_i)( f_i(x) - f_i(y) ) ≥ 0   (> 0).

A matrix M ∈ R^{n×n} is said to be a P_0 (P)-matrix if the function f(x) := Mx is a P_0 (P)-function or, equivalently, every principal minor of M is nonnegative (respectively, positive [3]).
We note that every monotone (strictly monotone) function is a P 0 (P)-function.
The following result is from [18] and [25].
Theorem 1 Under each of the following conditions, f : R^n → R^n is a P_0 (P)-function.
(a) f is Fréchet differentiable on R^n and for every x ∈ R^n, the Jacobian matrix ∇f(x) is a P_0 (P)-matrix.
(b) f is locally Lipschitzian on R^n and for every x ∈ R^n, the generalized Jacobian ∂f(x) consists of P_0 (P)-matrices.
(c) f is semismooth on R^n (in particular, piecewise affine or piecewise smooth) and for every x ∈ R^n, the Bouligand subdifferential ∂_B f(x) consists of P_0 (P)-matrices.
(d) f is H-differentiable on R^n and for every x ∈ R^n, an H-differential T_f(x) consists of P_0 (P)-matrices.
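As a small computational aid (not part of the original paper), the following sketch checks the P (or P_0) property of a given matrix by enumerating all principal minors; the test is exponential in n and is meant only for small examples.

```python
import itertools
import numpy as np

def is_P_matrix(M, strict=True, tol=1e-12):
    """Return True if every principal minor of M is positive (P-matrix),
    or nonnegative when strict=False (P_0-matrix)."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in itertools.combinations(range(n), k):
            minor = np.linalg.det(M[np.ix_(idx, idx)])
            if strict and minor <= tol:
                return False
            if not strict and minor < -tol:
                return False
    return True
```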
3. Necessary optimality conditions in H-differentiable optimization
In this section, we derive necessary optimality conditions for optimization problems
involving H-differentiable functions. We first consider the H-differentiability of
minimum/maximum of several H-differentiable functions.
Theorem 2 For j = 1, ..., m, let f_j : R^n → R be H-differentiable at x* with an H-differential T_{f_j}(x*). Let f : R^n → R be defined by

f(x) := min_{1 ≤ j ≤ m} f_j(x),   (6)

and let I(x*) := { j : f_j(x*) = f(x*) }. Then f is H-differentiable at x* with

T_f(x*) := ∪_{j ∈ I(x*)} T_{f_j}(x*)   (7)

as an H-differential. Also, a similar statement holds if 'min' in (6) is replaced by 'max'.
Proof. We prove the result for the min-function; the proof for the max-function is similar. Consider a sequence {x_k} converging to x* in R^n. Then there exist l ∈ {1, ..., m} and a subsequence {x_{k_j}} such that f(x_{k_j}) = f_l(x_{k_j}) for all j. We have f(x*) = f_l(x*) (by the continuity of f_l and f), so that l ∈ I(x*). Now, because of the H-differentiability of f_l at x*, there is a subsequence of {x_{k_j}}, which we continue to write as {x_{k_j}} for simplicity, and a matrix A_l ∈ T_{f_l}(x*) such that

f_l(x_{k_j}) - f_l(x*) - A_l(x_{k_j} - x*) = o(||x_{k_j} - x*||),

which leads to f(x_{k_j}) - f(x*) - A_l(x_{k_j} - x*) = o(||x_{k_j} - x*||). We conclude that f is H-differentiable at x* with T_f(x*) (defined in (7)) as an H-differential. This completes the proof.
Remark In the above theorem, we considered real valued functions. With obvious
modifications, one can consider vector valued functions. See Example 8 for an illustration.
Theorem 3 Suppose f : R^n → R and x* is a local optimal solution of the problem

min_{x ∈ R^n} f(x).

If f is H-differentiable at x* and T(x*) is any H-differential, then 0 ∈ co T(x*).
Proof. Suppose, if possible, that 0 ∉ co T(x*). Since co T(x*) is closed and convex, by the strict separation theorem (see p. 50, [15]), there exists a nonzero vector d in R^n such that Ad < 0 for all A ∈ co T(x*). From the H-differentiability of f, for the sequence {x* + (1/k) d} there exist a subsequence {x* + (1/k_j) d} and a matrix A ∈ T(x*) such that

lim_{j→∞} [ f(x* + (1/k_j) d) - f(x*) ] / (1/k_j) = Ad.

Since f(x) ≥ f(x*) for all x near x*, we see that Ad ≥ 0, reaching a contradiction. Hence 0 ∈ co T(x*).
Remarks When f is differentiable at x* with T(x*) = {∇f(x*)}, the above optimality condition reduces to the familiar condition ∇f(x*) = 0. When f is locally Lipschitzian at x̄, the above result reduces to Proposition 2.3.2 in [2], namely that 0 ∈ ∂f(x̄); see also Theorem 7 in [17].
The above theorem motivates us to define a stationary point of the problem min f(x)
as a point x̄ such that 0 ∈ co T(x̄), where T(x̄) is an H-differential of f at x̄.
By weakening this condition, we may call a point x a quasi-stationary point (semi-
stationary point) of the problem min f(x) if
While local/global minimizers of min f(x) are stationary points, it is not clear how
to get or describe semi- and quasi- stationary points. However, as we shall see in
Sections 6, 7, and 8, they are used in formulating conditions for a point x to be a
solution of a nonlinear complementarity problem.
We now describe a necessary optimality condition for inequality constrained optimization
problems.
Theorem 4 Suppose that f and g_i (i = 1, ..., m) are real valued functions defined on R^n and x* is a local optimal solution of the problem

minimize f(x)
subject to g_i(x) ≤ 0 for i = 1, ..., m.

Suppose that f and the g_i are H-differentiable at x* with H-differentials denoted, respectively, by T_f(x*) and T_{g_i}(x*), and let I(x*) := { i : g_i(x*) = 0 }. Then

0 ∈ co ( T_f(x*) ∪ ∪_{i ∈ I(x*)} T_{g_i}(x*) ).
Proof. We see that x* is a local optimal solution of the problem

minimize f(x)
subject to g(x) ≤ 0,   (10)

where g(x) := max_{1 ≤ i ≤ m} g_i(x). From Theorem 2, we see that g is H-differentiable with T_g(x*) := ∪_{i ∈ I(x*)} T_{g_i}(x*) as an H-differential. We have to show that 0 ∈ co ( T_f(x*) ∪ T_g(x*) ). Suppose the statement is false. Then by the strict separation theorem (see p. 50, [15]), there exists a nonzero vector d in R^n such that Ad < 0 for all A ∈ T_f(x*) ∪ T_g(x*). From the H-differentiability of f and g, for the sequence {x* + (1/k) d} there exist a subsequence {x* + (1/k_j) d} and matrices A ∈ T_f(x*), B ∈ T_g(x*) such that

lim_{j→∞} [ f(x* + (1/k_j) d) - f(x*) ] / (1/k_j) = Ad   and   lim_{j→∞} [ g(x* + (1/k_j) d) - g(x*) ] / (1/k_j) = Bd.

From Ad < 0 and Bd < 0 we see that, for all large j, f(x* + (1/k_j) d) < f(x*) and g(x* + (1/k_j) d) < g(x*) ≤ 0. We reach a contradiction since x* is assumed to be locally optimal for the given problem. Thus we have the stated conclusion.
4. H-differentials of some NCP functions associated with H-differentiable
functions
In this section, we describe the H-differentials of some well known NCP functions.
Example 5 Suppose f : R^n → R^n has an H-differential T(x) at x ∈ R^n. Consider the associated Fischer-Burmeister function [7], obtained by applying the Fischer-Burmeister NCP function to each pair (x_i, f_i(x)); all operations below are performed componentwise. Let J(x) := { i : x_i = 0 = f_i(x) }. Consider the set Γ of all quadruples (A, V, W, d) with A ∈ T(x), d ∈ R^n, where V and W are diagonal matrices satisfying the conditions
and
when i 62 J(x)
when
arbitrary when i 2 J(x) and d 2
when i 62 J(x)
when
arbitrary when i 2 J(x) and d 2
We now claim that Φ_F (or Φ for simplicity) has an H-differential at x given by S(x) := { V A + W : (A, V, W, d) ∈ Γ }. To see this claim, let x_k := x + t_k d_k with t_k ↓ 0 and ||d_k|| = 1. By the H-
differentiability of f , there exist a subsequence ft k j
g of ft k g, d k j ! d, and A 2 T (x)
such that f(x
Let, for ease of notation,
d k j . With A and d, define V and W satisfying (11) and (12); let
We claim that \Phi(y j
To see this, we
fix an index i and show that \Phi i (y
loss
of generality, let 1. We consider two cases:
Case
In this case we have
T is the first row of the identity matrix and
Case
Subcase
In this case,
and an easy calculation shows
In this case d
These arguments prove that \Phi i (y
holds for all i.
Thus we have the H-differentiability of \Phi with S(x) as an H-differential.
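For readers who want to experiment numerically, here is a small sketch of the Fischer-Burmeister NCP function and the associated merit function Ψ = (1/2)||Φ||²; it uses the common sign convention φ(a, b) = sqrt(a² + b²) − a − b (the paper's convention may differ by a sign, which does not change the zero set), and the map f below is only a placeholder.

```python
import numpy as np

def fischer_burmeister(x, fx):
    """Componentwise Fischer-Burmeister NCP function:
    phi(a, b) = sqrt(a^2 + b^2) - a - b, which vanishes iff
    a >= 0, b >= 0 and a*b = 0."""
    return np.sqrt(x**2 + fx**2) - x - fx

def merit(x, f):
    """Merit function Psi(x) = 0.5 * ||Phi(x)||^2 for NCP(f)."""
    phi = fischer_burmeister(x, f(x))
    return 0.5 * np.dot(phi, phi)

# Example with a simple affine map f(x) = M x + q (placeholder data).
M = np.array([[2.0, 1.0], [0.0, 1.0]])
q = np.array([-1.0, -1.0])
f = lambda x: M @ x + q
print(merit(np.array([0.5, 1.0]), f))
```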
Remarks We observe that in the above example, if T (x) consists of P-matrices
then S(x) consists of P-matrices. To see this, suppose that every A 2 T (x) is a P-matrix
and consider any A is a P-matrix, there exists an
index j with x j 6= 0 such that x j in (12) are nonnegative
and their sum is positive, x j [Bx]
It follows that B is a P-matrix.
This observation together with Theorem 1 says that if T (x) consists of P-matrices
then the function \Phi F is a P-function. (In fact, \Phi F is a P-function
whenever f is a continuous P-function, see [23].)
We note that S(x) may not consist of P-matrices if f is merely a P-function on
R n . This can be seen by the following example. Let
P-function and \Phi F
. By a simple calculation, we see that
the f2; 0g is an H-differential of \Phi F at zero and that it contains a singular object.
Example 6 In the previous example, we described the H-differential of Fischer-
Burmeister function. A similar analysis can be carried out for the NCP function
[13]

Φ(x) := sqrt( (x - f(x))^2 + λ x f(x) ) - x - f(x),   (15)

where λ is a fixed parameter in (0, 4) and the operations are performed componentwise. We note that when λ = 2, Φ reduces to the Fischer-Burmeister function, while as λ → 0, Φ(x) becomes -2 min{ x, f(x) }. Let J(x) := { i : x_i = 0 = f_i(x) }. An H-differential of Φ in (15) is given by
is the set of all quadruples (A; V; W; d) with A 2 T (x),
are diagonal matrices satisfying the conditions
and
when i 62 J(x)
when
arbitrary when i 2 J(x) and (d
when i 62 J(x)
when
arbitrary when i 2 J(x) and (d
Example 7 The following NCP function is called the penalized Fischer-Burmeister
function [1]:

Φ(x) := λ [ sqrt( x^2 + f(x)^2 ) - ( x + f(x) ) ] - (1 - λ) x_+ f(x)_+,   (18)

where λ ∈ (0, 1) is a fixed parameter and t_+ := max{t, 0}. Let J(x) := { i : x_i = 0 = f_i(x) }. For Φ in (18), a straightforward calculation shows that an H-differential is given
by
is the set of all quadruples (A; V; W; d) with A 2 T (x),
are diagonal matrices with
when
when
arbitrary when i 2 J(x) and d 2
when
when
arbitrary when i 2 J(x) and d 2
The above calculation relies on the observation that the following is an H-differential of the one variable function t ↦ t_+ at any t:

Δ(t) = {1} if t > 0,   Δ(t) = {0} if t < 0,   Δ(0) = {0, 1}.
Example 8 For an H-differentiable function f : R^n → R^n, consider the NCP function

Φ(x) := min{ x, f(x) },

where the minimum is taken componentwise. We claim that an H-differential of Φ is given by
To see this claim, let x k ! x: By the H-differentiability of f , there exist a sub-sequence
of fx k g, which we continue to write as fx k g for simplicity, and a matrix
By considering a
suitable subsequence, if necessary, we may write ng as a disjoint union of
sets ff and fi where
Put
We show that \Phi(x k
To see this, we fix an
index j and show that \Phi j
simplicity). We have two cases:
Case
\Theta \Phi(x k
\Theta f(x k
Case It is easy to verify that \Phi 1
This proves the above claim.
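A sketch of the min-based NCP function of this example, for experimentation; as before, f is a placeholder, and the property Φ(x) = 0 ⟺ x solves NCP(f) is what makes it usable as a residual.

```python
import numpy as np

def min_ncp(x, fx):
    """Componentwise min NCP function: Phi(x) = min(x, f(x)).
    Phi(x) = 0 exactly when x >= 0, f(x) >= 0 and x_i * f_i(x) = 0 for all i."""
    return np.minimum(x, fx)

def ncp_residual(x, f):
    """Infinity norm of the min NCP function, a standard measure of
    how far x is from solving NCP(f)."""
    return np.linalg.norm(min_ncp(x, f(x)), ord=np.inf)
```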
5. The H-differentiability of the merit function
In this section, we consider an NCP function Φ corresponding to NCP(f) and let Ψ(x) := (1/2)||Φ(x)||^2.

Theorem 5 Suppose Φ is H-differentiable at x with S(x) as an H-differential. Then Ψ := (1/2)||Φ||^2 is H-differentiable at x with an H-differential given by

T_Ψ(x) := { Φ(x)^T B : B ∈ S(x) }.

Proof. Consider a sequence {x_k} := {x + t_k d_k} converging to x, with t_k ↓ 0 and ||d_k|| = 1. By the H-differentiability of Φ, there exist d_{k_j} → d and B ∈ S(x) such that Φ(x_{k_j}) - Φ(x) - B(x_{k_j} - x) = o(t_{k_j}). We have

Ψ(x_{k_j}) - Ψ(x) = (1/2) ⟨ Φ(x_{k_j}) - Φ(x), Φ(x_{k_j}) + Φ(x) ⟩.

This gives us

lim_{j→∞} [ Ψ(x_{k_j}) - Ψ(x) ] / t_{k_j} = ⟨ Bd, Φ(x) ⟩ = Φ(x)^T B d.

This completes the proof.
6. Minimizing the merit function under P_0-conditions

For a given function f : R^n → R^n, consider the associated NCP function Φ and the corresponding merit function Ψ(x) := (1/2)||Φ(x)||^2. It should be recalled that

Φ(x) = 0 ⟺ Ψ(x) = 0 ⟺ x solves NCP(f).

One very popular method of finding zeros of Φ is to find the local/global minimum points or 'stationary' points of Ψ. Various researchers have shown, under certain conditions, that when f is continuously differentiable or more generally locally Lipschitzian, 'stationary' points of Ψ are the zeros of Ψ. In what follows, starting with an H-differentiable function f, we show that under appropriate conditions, a vector x̄ is a solution of NCP(f) if and only if zero belongs to one of the sets T_Ψ(x̄) or co T_Ψ(x̄).
Theorem 6 Suppose f : R^n → R^n is H-differentiable at x̄ with an H-differential T(x̄), and suppose Φ is an NCP function of f. Assume that Ψ := (1/2)||Φ||^2 is H-differentiable at x̄ with an H-differential given by

T_Ψ(x̄) := { Φ(x̄)^T (V A + W) : A ∈ T(x̄) },

where V and W are diagonal matrices of the kind appearing in Section 4, with v_ii > 0 and w_ii > 0 whenever Φ(x̄)_i ≠ 0. Further suppose that T(x̄) consists of P_0-matrices. Then

0 ∈ T_Ψ(x̄) ⟺ Φ(x̄) = 0 ⟺ x̄ solves NCP(f).

Proof. Clearly, Φ(x̄) = 0 implies that 0 ∈ T_Ψ(x̄), by the description of T_Ψ(x̄). Conversely, suppose that 0 ∈ T_Ψ(x̄), so that for some A ∈ T(x̄) and diagonal V, W as above, Φ(x̄)^T [V A + W] = 0, yielding A^T y = -W Φ(x̄), where y := V Φ(x̄). Note that for any index i with y_i ≠ 0 we have Φ(x̄)_i ≠ 0, in which case y_i (A^T y)_i = -v_ii w_ii Φ(x̄)_i^2 < 0; if y were nonzero this would contradict the P_0-property of A. Hence y = 0 and, since v_ii > 0 whenever Φ(x̄)_i ≠ 0, we conclude that Φ(x̄) = 0.
In the next two theorems, we replace the condition 0 ∈ T_Ψ(x̄) by weaker conditions involving the convex hull co T_Ψ(x̄); these relaxations come at the expense of imposing either stronger or different conditions on the H-differential of f.
First we recall a definition from [26].
Definition 3. Consider a nonempty set C in R^{n×n}. We say that a matrix A is a row representative of C if for each index i ∈ {1, ..., n}, the i-th row of A is the i-th row of some matrix C ∈ C. We say that C has the row-P_0-property (row-P-property) if every row representative of C is a P_0-matrix (P-matrix). We say that C has the column-P_0-property (column-P-property) if C^T := { C^T : C ∈ C } has the row-P_0-property (row-P-property).
We have the following result from [26].
Proposition 1 A set C has the row-P_0-property (row-P-property) if and only if for each nonzero x in R^n there is an index i such that x_i ≠ 0 and x_i (Cx)_i ≥ 0 (respectively, x_i (Cx)_i > 0) for all C ∈ C.
A simple consequence of this proposition is the following.
Corollary 1 The following statements hold:
(i) Suppose the set of matrices {A_1, ..., A_k} has the row-P_0-property. Then for any collection {V_1, ..., V_k} of nonnegative diagonal matrices, the sum Σ_{j=1}^k V_j A_j is a P_0-matrix. In particular, any convex combination of the A_j's is a P_0-matrix.
(ii) Suppose the set of matrices {A_1, ..., A_k} has the row-P-property. Then for any collections {Y_1, ..., Y_k} and {Z_1, ..., Z_k} of nonnegative diagonal matrices with Σ_{j=1}^k (Y_j + Z_j) positive diagonal, the sum Σ_{j=1}^k (Y_j A_j + Z_j) is a P-matrix.
Proof. (i) Let x ≠ 0 in R^n. By the above proposition, there exists an index i such that x_i ≠ 0 and x_i (A_j x)_i ≥ 0 for all j; hence x_i [ (Σ_j V_j A_j) x ]_i = Σ_j (V_j)_ii x_i (A_j x)_i ≥ 0. This proves the P_0-property of Σ_j V_j A_j. By specializing V_j := λ_j I with λ_j ≥ 0 and Σ_j λ_j = 1, we get the additional statement.
(ii) Let x ≠ 0. By Proposition 1, there exists an index i such that x_i ≠ 0 and x_i (A_j x)_i > 0 for all j. Now we have x_i [ (Σ_j (Y_j A_j + Z_j)) x ]_i = Σ_j (Y_j)_ii x_i (A_j x)_i + Σ_j (Z_j)_ii x_i^2, and the terms of the above sum are nonnegative. If (Z_j)_ii > 0 for some j, the second sum is positive; otherwise (Z_j)_ii = 0 for all j, which means that (Y_j)_ii > 0 for some j, and then the first sum is positive. In either case we see that x_i [ (Σ_j (Y_j A_j + Z_j)) x ]_i > 0, so Σ_j (Y_j A_j + Z_j) is a P-matrix.
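A small, brute-force check of the row-P (row-P_0) property of a finite set of matrices, following Definition 3 directly; it enumerates all row representatives and reuses the principal-minor test from the earlier sketch, so it is only practical for very small n.

```python
import itertools
import numpy as np

def is_P_matrix(M, strict=True, tol=1e-12):
    """Principal-minor test (same as the earlier sketch)."""
    n = M.shape[0]
    for k in range(1, n + 1):
        for idx in itertools.combinations(range(n), k):
            d = np.linalg.det(M[np.ix_(idx, idx)])
            if (strict and d <= tol) or (not strict and d < -tol):
                return False
    return True

def has_row_P_property(mats, strict=True):
    """Check the row-P (row-P_0 when strict=False) property of a finite set
    of n x n matrices by testing every row representative, i.e. every matrix
    whose i-th row is the i-th row of some member of the set."""
    n = mats[0].shape[0]
    for choice in itertools.product(range(len(mats)), repeat=n):
        rep = np.vstack([mats[c][i, :] for i, c in enumerate(choice)])
        if not is_P_matrix(rep, strict=strict):
            return False
    return True
```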
Remark We note that the implications in the above corollary can be reversed: if
every matrix of the form in (i) (respectively, (ii)) is a P_0-matrix (respectively, P-matrix), then {A_1, ..., A_k} has the row-P_0-property (respectively, row-P-property). Peng [19] proves results
similar to Corollary 1 under additional/different hypotheses.
Theorem 7 Suppose f : R^n → R^n is H-differentiable at x̄ with an H-differential T(x̄). Suppose that Ψ is H-differentiable at x̄ with an H-differential T_Ψ(x̄) given as in Theorem 6. Further suppose that T(x̄) has the row-P_0-property. Then

0 ∈ co T_Ψ(x̄) ⟺ Φ(x̄) = 0 ⟺ x̄ solves NCP(f).
Proof. Suppose Φ(x̄) = 0; then 0 ∈ T_Ψ(x̄) and hence 0 ∈ co T_Ψ(x̄). Conversely, suppose 0 ∈ co T_Ψ(x̄). Then by Carathéodory's theorem [15], there exist λ_j ≥ 0 with Σ_{j=1}^L λ_j = 1, matrices A_j ∈ T(x̄), and diagonal matrices V_j, W_j (as in the description of T_Ψ(x̄)) such that

Σ_{j=1}^L λ_j Φ(x̄)^T ( V_j A_j + W_j ) = 0.   (22)

We rewrite (22) as

u^T ( M + Z ) = 0,   i.e.,   M^T u = -Z u,   (23)

where u := Φ(x̄), M := Σ_j Y_j A_j, Z := Σ_j Z_j, with the diagonal matrices Y_j := λ_j V_j and Z_j := λ_j W_j. Since the equality is left unchanged if we replace Y_j by |Y_j| and Z_j by |Z_j|, we may assume that Y_j and Z_j are nonnegative for all j. Now suppose, if possible, that u ≠ 0. By the above corollary, the matrices M and M^T are P_0-matrices. Therefore, there exists an index i such that u_i ≠ 0 and u_i (M^T u)_i ≥ 0. From Φ(x̄)_i ≠ 0, we see that (W_j)_ii > 0 for all j, and so Z_ii > 0. But then, as in the proof of Theorem 6, u_i (M^T u)_i = -u_i (Z u)_i = -Z_ii u_i^2 < 0, which is clearly a contradiction since u_i ≠ 0. This proves that Φ(x̄) = 0.
Remarks We note that Theorems 6 and 7 are applicable to the Fischer-Burmeister function of Example 5. This is because the set T_Ψ(x) described in Theorems 6 and 7 is a superset of the H-differential T_Ψ(x) described in Example 5. (Note that [Φ_F(x)]_i ≠ 0 for i ∉ J(x), and hence from (12) the corresponding diagonal entries of V and W are positive.) Similarly, we see that Theorems 6 and 7 are applicable to the NCP functions of Examples 6 and 7.
We state the next result for the Fischer-Burmeister function \Phi. However, as in
Theorems 6 and 7, it is possible to state a very general result for any NCP function
\Phi. For simplicity, we avoid dealing in such a generality.
Theorem 8 Suppose f : R^n → R^n is H-differentiable at x̄ with an H-differential T(x̄) which is compact and has the row-P_0-property. Let Φ be the Fischer-Burmeister function as in Example 5 and Ψ := (1/2)||Φ||^2. Let S(x̄) and T_Ψ(x̄) be as in Example 5 and Theorem 5. Then the following are equivalent:
(a) x̄ is a local minimizer of Ψ.
(b) 0 ∈ co T_Ψ(x̄).
(c) x̄ solves NCP(f).
Proof. The implication (a) ) (b) follows from Theorem 3. The implication
(c) ) (a) is obvious. We now prove that (b) ) (c). Suppose 0 2 co T \Psi (x) and
assume that there exists a sequence fC k g of matrices in
co S(x) such that Now each C k is a convex combination of at most
matrices of the form V A+W 2 S(x) where A 2 T (x), V and W satisfy (11)
and (12). Since T (x) is compact and the entries of V and W vary over bounded
sets in R, we may assume that C k ! C where C is a convex combination of at most
matrices of the form V A +W where A 2 T (x), V and W are nonnegative
diagonal matrices satisfying a condition like (11) with
and
when
g. From
an equation similar to (22) but now with V i , A i , and W i in place of V i , A i , and
respectively. By repeating the argument given in the proof of the previous
theorem, we arrive at a contradiction. Hence proving (b) ) (c) .
We now state two consequences of the above theorems for the Fischer-Burmeister
function (for the sake of simplicity).
Corollary 2 Let f : R^n → R^n be Fréchet differentiable with ∇f(x̄) a P_0-matrix, let Φ(x) be the Fischer-Burmeister function, and let Ψ := (1/2)||Φ||^2. Then x̄ is a local minimizer of Ψ if and only if x̄ solves NCP(f).
This corollary is seen from the above theorem by taking T frf(x)g. If we
assume the continuous differentiability of f in the above corollary, we get a result
of Facchinei and Soares [5]: For a continuously differentiable P 0 -function f , every
stationary point of \Psi solves NCP(f ). (This is because, when f is C 1 , \Psi becomes
continuously differentiable, see Prop. 3.4 in [5].) See [9] for the monotone case.
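To connect these results with practice, here is a minimal sketch of solving an NCP by (locally) minimizing the Fischer-Burmeister merit function with a general-purpose optimizer; the use of scipy.optimize.minimize with Nelder-Mead and the particular test map f are illustrative choices, not the algorithms analyzed in the cited references.

```python
import numpy as np
from scipy.optimize import minimize

def fb(x, fx):
    return np.sqrt(x**2 + fx**2) - x - fx

def solve_ncp_via_merit(f, x0):
    """Minimize Psi(x) = 0.5*||Phi_FB(x)||^2; under P_0-type conditions
    on the (H-)differential of f, minimizers/stationary points of Psi
    solve NCP(f)."""
    psi = lambda x: 0.5 * np.sum(fb(x, f(x))**2)
    res = minimize(psi, x0, method="Nelder-Mead")
    return res.x, psi(res.x)

# Tiny monotone example: f(x) = M x + q with M positive semidefinite.
M = np.array([[2.0, 1.0], [1.0, 2.0]])
q = np.array([-1.0, -1.0])
x, val = solve_ncp_via_merit(lambda x: M @ x + q, np.zeros(2))
```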
Corollary 3 Let f : R^n → R^n be locally Lipschitzian. Let Φ be the Fischer-Burmeister function and Ψ := (1/2)||Φ||^2. Then the equivalence

0 ∈ ∂Ψ(x̄) ⟺ x̄ solves NCP(f)

holds under each of the following conditions:
(a) ∂f(x̄) consists of P_0-matrices;
(b) ∂_B f(x̄) has the row-P_0-property.
Proof. The stated equivalence under (a) has already been established by Fischer [8]. In fact, by applying Theorem 6 with T_f(x) := ∂f(x) and using his result that ∂Ψ(x) ⊆ T_Ψ(x) for all x, we get the equivalence in (a). Now to see the equivalence under (b), assume (b) holds. Then by Corollary 1, every matrix in ∂f(x̄) = co ∂_B f(x̄) is a P_0-matrix. Now we have condition (a) and hence the stated equivalence.
Remark The condition (b) in the above corollary might be especially useful when
the function f is piecewise smooth in which case @B f(x) consists of a finite number
of matrices.
7. Minimizing the merit function under P -conditions
The following theorem is similar to Theorem 6.
Theorem 9 Suppose f : R^n → R^n is H-differentiable at x̄ with an H-differential T(x̄), and suppose Φ is an NCP function of f. Assume that Ψ := (1/2)||Φ||^2 is H-differentiable at x̄ with an H-differential T_Ψ(x̄) given as in Theorem 6. Further suppose that T(x̄) consists of P-matrices. Then

0 ∈ T_Ψ(x̄) ⟺ Φ(x̄) = 0 ⟺ x̄ solves NCP(f).
Proof. Suppose Φ(x̄) = 0. Then, by the description of T_Ψ(x̄), we have 0 ∈ T_Ψ(x̄). Conversely, suppose that 0 ∈ T_Ψ(x̄), so that for some A ∈ T(x̄) and diagonal V, W, Φ(x̄)^T [V A + W] = 0. We claim that Φ(x̄) = 0. Suppose, if possible, that Φ(x̄) ≠ 0. If y := V Φ(x̄) = 0, then W Φ(x̄) = 0, which leads to a contradiction since v_ii > 0 or w_ii > 0 for some index i with Φ(x̄)_i ≠ 0. Hence y ≠ 0, and for every index i with y_i ≠ 0 we have y_i (A^T y)_i = -v_ii w_ii Φ(x̄)_i^2 ≤ 0, contradicting the P-property of A. Hence Φ(x̄) = 0.
Theorem 10 Suppose f : R^n → R^n is H-differentiable at x̄ with an H-differential T(x̄). Suppose that Ψ is H-differentiable at x̄ with an H-differential T_Ψ(x̄) given as in Theorem 6. Further suppose that T(x̄) has the row-P-property. Then

0 ∈ co T_Ψ(x̄) ⟺ Φ(x̄) = 0 ⟺ x̄ solves NCP(f).
Proof. The proof is similar to that of Theorem 7. To show that 0 ∈ co T_Ψ(x̄) implies Φ(x̄) = 0, proceed as in the proof of Theorem 7. We have statements (22) and (23) in our new setting, where we may assume (as before) that the Y_j and Z_j are nonnegative for all j. Since T(x̄) has the row-P-property, Corollary 1(ii) applies, and we see that the matrix in (23) is nonsingular. It follows that Φ(x̄) = 0.
Remark We note that Theorems 9 and 10 are applicable to the min-function \Phi of
Example 8.
8. Minimizing the merit function under regularity (strict regularity)
conditions
We now generalize the concept of a regular (strictly regular) point [14] in order to
weaken the hypotheses in the Theorems 6 and 7.
For a given H-differentiable function f and
we define the following
subsets of I = ng.
Definition 4. Consider f ,
x, and the index sets as above. Let T (x) be an H-
differential of f at
x. Then the vector x 2 R n is called a regular (strictly regular)
point of f with respect to T (x) if for every nonzero vector z 2 R n such that
z
there exists a vector s 2 R n such that
Theorem is H-differentiable at
x with an H-differential
\Phi be an NCP function satisfying the following conditions:
Suppose \Psi is H-differentiable with an H-differential given by
x is a regular point if and only if x solves NCP(f).
Proof. Suppose that 0 2 T \Psi (x) and
x is a regular point. Then for some
We claim that 0: Assume the contrary that
x is not a solution of NCP(f ).
x is a regular point, and y and
z have the same sign, by taking a vector s 2 R n satisfying (25) and (26), we have
and
contradict (30). Hence x is a solution to NCP(f ). The 'if'
part of the theorem follows easily from the definitions.
Remark Theorem 11 is applicable to the NCP functions of Examples 5, 6 and 7.
A slight modification of the above theorem leads to the following result.
Theorem 12. Suppose f : R^n \to R^n is H-differentiable at \bar x with an H-differential T(\bar x), and let \Phi be an NCP function satisfying the following conditions:
Suppose \Psi is H-differentiable with an H-differential given by T_\Psi(\bar x). Then, for \bar x with 0 \in T_\Psi(\bar x), \bar x is a strictly regular point if and only if \bar x solves NCP(f).
Proof. The proof is similar to that of Theorem 11.
Concluding Remarks
In this paper, we considered two applications of H-differentiability. The first application dealt with the necessary optimality condition in H-differentiable optimization. In the second application, for a nonlinear complementarity problem NCP(f) corresponding to an H-differentiable function f, with an associated NCP function \Phi and a merit function \Psi := \frac{1}{2}\|\Phi\|^2, we described conditions under which every global/local minimum or stationary point of \Psi is a solution of NCP(f). We would like to note here that similar methodologies can be carried out for other merit functions. For example, we can consider the Implicit Lagrangian function of Mangasarian and Solodov [16]:
M_\alpha(x) = x^T f(x) + \frac{1}{2\alpha}\Big( \|(x - \alpha f(x))_+\|^2 - \|x\|^2 + \|(f(x) - \alpha x)_+\|^2 - \|f(x)\|^2 \Big),
where \alpha > 1 is a fixed parameter and x \circ y is the Hadamard (= componentwise) product of vectors x and y. (In [16], it is shown that M_\alpha(x) \ge 0 for all x and that M_\alpha(x) = 0 if and only if x solves NCP(f).) By defining the merit function \Psi := M_\alpha and formulating the concept of strictly regular point, we can extend the results of [4] for H-differentiable functions.
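A companion numerical sketch of the implicit Lagrangian, written in the form transcribed above (our transcription; the test map f and the parameter \alpha > 1 are illustrative assumptions):

```python
import numpy as np

def f(x):
    # Hypothetical test map; replace with the f of interest.
    M = np.array([[2.0, 1.0], [1.0, 3.0]])
    q = np.array([-1.0, -2.0])
    return M @ x + q

def implicit_lagrangian(x, alpha=2.0):
    # Mangasarian-Solodov implicit Lagrangian with fixed parameter alpha > 1.
    fx = f(x)
    plus = lambda v: np.maximum(v, 0.0)
    return (x @ fx
            + (np.sum(plus(x - alpha * fx) ** 2) - np.sum(x ** 2)
               + np.sum(plus(fx - alpha * x) ** 2) - np.sum(fx ** 2)) / (2.0 * alpha))

print(implicit_lagrangian(np.array([0.5, 0.5])))  # nonnegative; zero exactly at solutions of NCP(f)
```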
Our results recover/extend various well known results stated for continuously
differentiable (locally Lipschitzian, semismooth, C-differentiable) functions.
Acknowledgements
We thank the referees for their constructive comments.
References
"A Penalized Fischer-Burmeister NCP-Function: Theoretical Investigation and Numerical Results,"
Optimization and Nonsmooth Analysis
The Linear Complementarity Problem
"On Unconstrained and Constrained Stationary Points of the Implicit Lagrangian,"
"A New Merit Function for Nonlinear Complementarity Problems and Related Algorithm,"
"Regularity Properties of a Semismooth Reformulation of Variational Inequalities,"
"A Special Newton-Type Optimization Method,"
"Solution of Monotone Complementarity Problems with Locally Lipschitzian Functions,"
"On the Resolution of Monotone Complementarity Problems,"
"Algebraic Univalence Theorems for Nonsmooth Functions,"
"A New Nonsmooth Equations Approach to Nonlinear Complementarity Problems,"
"Unconstrained Minimization Approaches to Nonlinear Complementarity Prob- lems,"
"A New Class of Semismooth Newton-Type Methods for Nonlinear Complementarity Problems,"
"A Semismooth Equation Approach to the Solution of Nonlinear Complementarity Problems,"
"Nonlinear Complementarity as Unconstrained and Constrained Minimization,"
"Semismooth and Semiconvex Functions in Constrained Optimization,"
"On P- and S- Functions and Related Classes of N-Dimensional Nonlinear Mappings,"
"A Smoothing Function and its Applications,"
"Convergence Analysis of Some Algorithms for Solving Nonsmooth Equations,"
"C-differentiability, C-differential Operators and Generalized Newton Methods,"
"Regularization of P0 -functions in Box Variational Inequality Problems,"
Variational Analysis
"On Characterizations of P- and P 0 - Properties in Nonsmooth Functions,"
"On Some Properties of P-matrix Sets,"
Robust Optimal Service Analysis of Single-Server Re-Entrant Queues

Abstract. We generalize the analysis of J.A. Ball, M.V. Day, and P. Kachroo (Mathematics of Control, Signals, and Systems, vol. 12, pp. 307-345, 1999) to a fluid model of a single-server re-entrant queue. The approach is to solve the Hamilton-Jacobi-Isaacs equation associated with optimal robust control of the system. The method of staged characteristics is generalized from Ball et al. (1999) to construct the solution explicitly. Formulas are developed allowing explicit calculations for the Skorokhod problem involved in the system equations. Such formulas are particularly important for numerical verification of conditions on the boundary of the nonnegative orthant. The optimal control (server) strategy is shown to be of linear-index type. Dai-type stability properties are discussed. A modification of the model in which new customers are allowed only at a specified entry queue is considered in 2 dimensions. The same optimal strategy is found in that case as well.
Figure 1. Re-entrant Server
There is much current interest in developing optimal service strategies for queueing systems. The volume
by Kelly and Williams [9]includes several articles addressing this. Although queueing models are generally
integer-valued and stochastic, Dai [4]and others have developed connections between the stability of
Date: November 30, 2001.
Research supported by the ASPIRES program of Research and Graduate Studies, and the Millennium program of the College
of Artsand Sciences, both of Virginia Tech.
stochastic queuing systems and their deterministic fluid limits. Thus optimal strategies for fluid models
are recognized as significant for stochastic models. Fluid models for a large class of queueing systems can
be described by equations of the general form (5) introduced in Section 2 below. We pursue the same robust
control approach as in [2]for such models. Much of what we present here is a further development
of ideas from that paper. In particular, Section 2.1 gives explicit representations of the velocity projection
map (x, v) of the Skorokhod reflection mechanism which comes into play when one or more queues are
empty. Section 3 develops the construction of the value function for our control problem. Here, as in [2],
the construction proceeds without regard to the Skorokhod dynamics on the boundary of the nonnegative
orthant. (In more general multiple server examples the Skorokhod dynamics will play a more decisive role.)
Even so, the solution we construct must be shown to satisfy various inequalities associated with optimality
with respect to the Skorokhod dynamics on the boundary. We do not provide a deductive proof of these
inequalities, but rely instead on a system of numerical confirmation for individual test cases in Section 4. The
explicit representations of (x, v) are important for this, and for the optimality argument of Section 5. The
version of that argument given here improves on the one in [2]in that it applies to all admissible strategies,
not just those of state feedback form.
Our model allows new arrivals and unserved departures in the form of an exogenous load qi(t)foreach
xi; see (1) below. In some queueing applications this feature would be inappropriate. For instance in typical
re-entrant lines, new arrivals only occur at a specified entry queue and departures only as service is completed
at a designated final queue. In Section 6 we will look at the the 2-dimensional case of our model under the
more restrictive assumption that exogenous arrivals are only allowed in the entry queue x1. This requires a
number changes in our calculations. But we find that this change to the model does not eect the resulting
optimal service policy.
2. The Model and Approach of Optimal Control
We describe in this section the general model formulation and performance criteria that we will use. Fluid
models for a large class of queueing systems can be described by equations of the nominal form
(1) \dot{x}(t) = q(t) - Gu(t).
The state variable is n-dimensional: x =(x1,.,xn) Rn. For queueing models x(t) must remain in the
nonnegative orthant, K in (4) below. For that purpose we will couple (1) with Skorokhod problem dynamics,
resulting in (5) below. The term q(t)istheload on the system due to new arrivals (or unserved departures
if qi(t) < 0). The service allocation is specified by the control function u(t) whose values are taken from
a finite set U0 Rm of possible service control settings. For purposes of an adequate existence theory for
solutions to (5) we relax this to allow u(t) to be taken from the convex hull
(2) U =convU0.
For our single-server examples we will simply take U_0 to be the standard unit vectors in R^n; thus U = \mathrm{conv}\, U_0 = \{u \in R^n : u_i \ge 0, \ \sum_{i=1}^n u_i = 1\}. The matrix G converts u(t) to the appropriate vector of contributions to \dot{x}. For the case to be considered here (Figure 1) G will be the lower triangular matrix
G = \begin{bmatrix} s_1 & & & \\ -s_1 & s_2 & & \\ & \ddots & \ddots & \\ & & -s_{n-1} & s_n \end{bmatrix}.
The s_i > 0 are parameters which specify the service rates for the respective queues. Thus when u(t) = e_k (k < n), the effect of Gu(t) in (1) is to drain queue x_k at rate s_k, with the served customers entering x_{k+1} at the same rate:
\dot{x}_k = q_k - s_k, \qquad \dot{x}_{k+1} = q_{k+1} + s_k.
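A minimal sketch (ours) of this control matrix for given service rates, illustrating the drain-and-feed effect of the vertex controls just described:

```python
import numpy as np

def control_matrix(s):
    """Lower-bidiagonal G for the single-server re-entrant line: G[k,k]=s_k, G[k+1,k]=-s_k."""
    s = np.asarray(s, dtype=float)
    G = np.diag(s)
    for k in range(len(s) - 1):
        G[k + 1, k] = -s[k]
    return G

G = control_matrix([4.0, 1.0])
print(G @ np.array([1.0, 0.0]))   # serving queue 1 drains x1 at rate 4 and feeds x2: [ 4. -4.]
```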
Multiple server examples are easily modeled by (1) and (5) as well. Consider for example the 2-server
re-entrant line in Figure 2. It would be natural to use
The first two columns in G correspond to the service allocation at server A and the second two at server
B. The u U0 correspond to the 4 dierent combinations in which server A chooses between x1 or x3 and
server B chooses between x2 or x4.
Figure 2. 2-Server re-entrant line
2.1. Skorokhod Problem Dynamics. We denote by K the nonnegative orthant of R^n:
(4) K = \{x \in R^n : x_i \ge 0, \ i = 1, \ldots, n\}.
The faces of K are
\partial_i K = \{x \in K : x_i = 0\}, \quad i = 1, \ldots, n.
The interior normal to \partial_i K is the standard unit vector n_i = e_i. We will use N = \{1, \ldots, n\} to denote the set of all coordinate indices. For x \in K,
I(x) = \{i \in N : x_i = 0\}
will denote the set of indices with zero coordinate values.
An essential feature of queueing models is that x(t) remains in K for all t. One could simply impose
this as a constraint on the control functions u(t) and loads q(t) which are considered admissible. Although some constraints on the load are reasonable, we find it much more natural in general to couple (1) with the dynamics of a Skorokhod problem. On each face \partial_i K we specify a constraint vector d_i. If the solution of (1) attempts to exit K through \partial_i K, then the idea is to add some positive multiple of d_i to the right side of (1) to prevent the exit. A precise formulation is the following: given x(0) \in K and q(t), u(t) (which we will assume to be locally integrable), let
y(t) = x(0) + \int_0^t (q(s) - Gu(s))\, ds.
The Skorokhod Problem is to find a continuous function x(t) \in K, a measurable function r(t) \in R^n and a nondecreasing function \ell(t) \ge 0 which satisfy the following for t \ge 0:
x(t) = y(t) + \int_{(0,t]} r(s)\, d\ell(s);
for each t, r(t) = \sum_{i \in I(x(t))} \lambda_i d_i for some \lambda_i \ge 0;
\ell(t) = \int_{(0,t]} 1_{x(s) \in \partial K}\, d\ell(s).
By imposing a normalization there will be a unique solution, provided K and di satisfy certain
conditions. Dupuis and Ishii [5]and Dupuis and Ramanan [7]provide a substantial body of theory of
Skorokhod problems in general. In particular they show, using a velocity projection map \pi(x, v), that a Skorokhod problem can be expressed as a differential system. The velocity projection map is of the form
\pi(x, v) = v + \sum_{i \in I(x)} \lambda_i d_i
for an appropriate choice of \lambda_i \ge 0. (See (6) below.) The result of coupling our (1) with the appropriate Skorokhod problem is expressed as
(5) \dot{x}(t) = \pi(x(t), q(t) - Gu(t)),
holding almost surely.
The appropriate constraint vectors di are determined by the structure of the system in Figure 1. If
x iK and server i is active (ui > 0) but the applied service rate siui exceeds the inflow qi to xi, then
according to (1) x would exit K through iK. In an actual network the system could not really use the
full service capacity siui allocated to xi. Instead, service would take place at a lower level which exactly
balances the inflow and outflow of queue xi. Mathematically this is achieved by adding a positive multiple of
the column Gei of G to the right side of the system (1), bringing xi to 0 and producing the correct reduction
of the throughput xi xi+1 to the next queue. So we take di = s1 Gei (the normalization being so that
1). The same prescription is appropriate for the example of Figure 2: take di to be the unique
column of G having a positive entry in row i, normalized so that ni di =1.
At this point we wish to highlight the fact that no restrictions on u(t)andq(t) are needed to keep x(t)
in the nonnegative orthant; (5) will determine a state trajectory with x(t) K regardless. Thus we always
instruct the server to work at full capacity ( and the Skorokhod dynamics can be viewed as
automatically reducing the service rates to the levels that can actually be implemented. The model allows
a separate load term qi(t) for each queue. For re-entrant queues, one typically would only want to allow
for the entry queue in each re-entrant sequence. In Figure 1 for instance it would be natural to assume
nice feature of single servers with respect to the L2 performance criteria of Section 2.4
is that the the optimal strategy is the same for all loads, regardless of which coordinates might be zero. In
queueing applications one also naturally assumes that qi 0. However it was argued in [2]that, for purposes
of vehicular trac for instance, it is reasonable to consider qi < 0. This would correspond to customers that
leave the system without waiting to receive service all the way through. This is a reasonable consideration
in some applications. However it is hard to conceive of a realistic interpretation for qi < 0 when xi =0.
Even so, (5) will still yield a mathematical solution. The eect of the additional +idi terms in (x, q Gu)
might then be seen not as reductions to the service rates but as a transference of the reducing influence of
qi < 0 from the empty xi to the queues xj further along in sequence; fluid at xj would be drawn backwards
through the system to satisfy to external demand due to qi < 0atpreviousxi.
With di defined, we face the important technical issue of existence and regularity properties of the Skorokhod
problem. This issue is treated in detail in [5]and [7.] Those treatments consider a more general
convex polyhedron in place of our K. Our particular choice of the nonnegative orthant falls within the scope
of the earlier work [12]. Let D =[di]be the matrix with the constraint vectors as columns. In our case,
I Q where Q is the subdiagonal matrix with entries
Assuming that Q has nonnegative entries and spectral radius less than 1, both clearly satisfied for us, [12]
provided a direct construction of the solution of the Skorokhod problem. In [7]it is shown that these
conditions from [12]fall within the scope of a more general set of sucient conditions for existence and
Lipschitz continuity of the Skorokhod map y() x().
Drawing on the ideas of [12], we can give a direct construction of the \pi(x, v) appearing in the differential formulation (5) of the Skorokhod problem. For any x \in K and v \in R^n, we will show that \pi(x, v) can be characterized using a linear complementarity problem:
(6) w = v + \sum_{i \in I(x)} \lambda_i d_i,
subject to the following constraints for each i \in I(x):
(7) \lambda_i \ge 0, \quad (8) w_i \ge 0, \quad (9) \lambda_i w_i = 0;
then \pi(x, v) is the resulting w. For i \notin I(x) there is no constraint on w_i, and we consider \lambda_i = 0 to be implicit. Let \lambda = [\lambda_i]_{i=1}^n. Using D = I - Q, and rewriting (6) as
w = v + (I - Q)\lambda,
it is easy to see that the complementarity problem is equivalent to saying \lambda is a fixed point \lambda = \Lambda_x(\lambda) of the map defined coordinate-wise by
\Lambda_x(\lambda)_i = ((Q\lambda)_i - v_i)^+ \ \text{for } i \in I(x), \qquad \Lambda_x(\lambda)_i = 0 \ \text{for } i \notin I(x).
The notation y^+ refers to the usual positive part: y^+ = \max(y, 0). (This fixed point representation is a particular case of the general fixed point representation of variational inequalities in Chapter 1 of [10].) We first observe the existence of a unique fixed point. The argument of [12] is to observe that (after a linear change of variables) \Lambda_x is a contraction, under the nonnegativity and spectral radius assumption mentioned above. However this is even simpler for our particular Q; \lambda = \Lambda_x(\lambda) reduces to
\lambda_i = (\lambda_{i-1} - v_i)^+ \ \text{for } i \in I(x) \ (\text{with } \lambda_0 := 0), \qquad \lambda_i = 0 \ \text{otherwise},
which determines the \lambda_i sequentially. Iteration of \Lambda_x from any initial \lambda will converge to the fixed point after at most n steps. This makes it particularly simple to see that \lambda, and thus v \mapsto \pi(x, v), are continuous, and to evaluate \pi(x, v) numerically. Indeed \pi(x, v) is Lipschitz in v for a fixed x, and is jointly continuous in (x, v) if x is restricted so that I(x) is constant.
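The sequential formula makes the numerical evaluation essentially one pass through the coordinates. A minimal sketch (ours), assuming the single-server constraint directions d_i = e_i - e_{i+1} (d_n = e_n) reconstructed above:

```python
import numpy as np

def velocity_projection(x, v, tol=1e-12):
    """pi(x, v) for the single-server model with d_i = e_i - e_{i+1} (d_n = e_n).

    Uses the sequential fixed point lambda_i = (lambda_{i-1} - v_i)^+ on I(x), lambda_i = 0 otherwise.
    """
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    n = len(x)
    lam = np.zeros(n)
    prev = 0.0
    for i in range(n):
        if x[i] <= tol:                       # i in I(x)
            lam[i] = max(prev - v[i], 0.0)
        prev = lam[i]
    # pi(x, v) = v + D lam with D = I - Q, i.e. add lam_i * (e_i - e_{i+1})
    w = v + lam
    w[1:] -= lam[:-1]
    return w

print(velocity_projection([0.0, 1.0], [-1.0, 0.5]))   # -> [0. , -0.5]
```

In the example, queue 1 is empty and would be drawn negative by v; the reflection adds d_1 = e_1 - e_2, holding x_1 at zero and reducing the feed to x_2 accordingly.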
We can easily check that (x, v) as defined by the above complementarity problem is indeed the velocity
projection map as identified in [5]. First, following an observation of [3], we can check that (y)=(0,y)is
the discrete projection map of Assumption 3.1 of [5]. Indeed, if y K then clearly
the complementarity problem: (0,y)=y.Fory/K,
y (0,y)=y
which is of the form for some 0and d(w), where d(x) is the set of reflection directions as defined
in [5, 3]. Next, suppose x K, v Rn, and let w,solve the complementarity problem for
above. We claim that
suciently small.
This will imply that
which is the characterization of (x, v)in[5,5.3]. Since w,solve (6), it follows that
di.We want to see that satify the complementarity conditions (7) -
associated with First consider i I(x). Since xi =0,wehave
For the prodiuct (9) we have
Next consider i/I(x). Provided h>0 is suciently small we have
which also confirms the product condition. This verifies (11), as desired.
For purposes of our calculations, the characterizations of \pi(x, v) in Lemma 1 below will be useful. If J \subseteq N we will use
N_J = [n_j]_{j \in J} \quad \text{and} \quad D_J = [d_j]_{j \in J}
to denote the matrices whose columns are the normal vectors n_j and constraint directions d_j for the j \in J. Given v, and the corresponding \lambda_i as described above, let
F_0 = \{i \in I(x) : \lambda_i > 0\} \quad \text{and} \quad L = \{i \in I(x) : w_i = 0\}.
From the complementarity problem we know F_0 \subseteq L \subseteq I(x). Using any F with F_0 \subseteq F \subseteq L, the values of \lambda_i, i \in F, are determined by setting w_i = 0, i \in F, in (6). In other words we can solve for \lambda_F = [\lambda_i]_{i \in F} directly in
(12) N_F^T (v + D_F \lambda_F) = 0, \quad \text{i.e.} \quad \lambda_F = -(N_F^T D_F)^{-1} N_F^T v,
and consequently
(13) \pi(x, v) = R_F v,
where R_F is the reflection matrix
R_F = I - D_F (N_F^T D_F)^{-1} N_F^T.
For F = \emptyset simply take R_\emptyset = I. More precisely, \lambda_F = B_F v with B_F = -(N_F^T D_F)^{-1} N_F^T. The fact that w_i \ge 0 for i \notin F is equivalent to N_{I(x) \setminus F}^T R_F v \ge 0.
Suppose that we don't know F_0 or L at the outset, but just take an arbitrary F \subseteq I(x), calculate \lambda_F = B_F v and w = R_F v. By construction w_i = 0 for i \in F and \lambda_i = 0 for i \notin F. So item 3 of the complementarity problem is satisfied. If item 1 is satisfied for i \in F, which is to say B_F v \ge 0, and item 2 holds for i \in I(x) \setminus F, which is to say N_{I(x)\setminus F}^T R_F v \ge 0, then we can say that R_F v is in fact \pi(x, v). This discussion proves the following lemma.
Lemma 1. Given x \in K, v \in R^n, and F \subseteq I(x), the following are equivalent:
1. \pi(x, v) = R_F v;
2. both of the following hold:
(a) B_F v \ge 0 (when F \ne \emptyset);
(b) N_{I(x)\setminus F}^T R_F v \ge 0 (when F \ne I(x));
3. for some L with F \subseteq L \subseteq I(x) all of the following hold:
(a) B_F v > 0 (when F \ne \emptyset);
(b) N_{L \setminus F}^T R_F v = 0 (when F \ne L);
(c) N_{I(x) \setminus L}^T R_F v > 0 (when L \ne I(x)).
Notice that the strict inequality in 3 (a) simply identifies F as F_0.
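Under the reconstruction R_F = I - D_F (N_F^T D_F)^{-1} N_F^T and B_F = -(N_F^T D_F)^{-1} N_F^T used above, the following sketch (ours; function names are not from the paper) assembles these matrices for a given index set F, which is what the boundary checks of Section 4 require:

```python
import numpy as np

def reflection_data(D, F):
    """Return (R_F, B_F) for constraint-direction matrix D and index set F (a list of indices)."""
    n = D.shape[0]
    if not F:
        return np.eye(n), np.zeros((0, n))
    NF = np.eye(n)[:, F]                       # columns e_i, i in F
    DF = D[:, F]
    BF = -np.linalg.inv(NF.T @ DF) @ NF.T      # lambda_F = B_F v
    RF = np.eye(n) + DF @ BF                   # pi(x, v) = R_F v when F is admissible
    return RF, BF
```

For the single-server model one would take D = I - Q as above, and check the inequalities of Lemma 1 numerically with these matrices.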
2.2. The Optimal Control Policy. Our goal is to design a feedback control strategy \phi^*(x), prescribing a value in the extended control set U for each x \in K, so that using u(t) = \phi^*(x(t)) produces optimal performance of the system. The criterion used to determine optimality is based on
(14) \int_0^T \Big( \tfrac{1}{2}|x(t)|^2 - \tfrac{\gamma^2}{2}|q(t)|^2 \Big)\, dt,
where \gamma > 0 is a parameter. Roughly speaking, the control should keep the integrated cost (14) small, so that x(t) remains small compared to the load q(t) in a time-averaged sense. We will give this a more precise formulation in Section 2.4 below. The running cost \tfrac{1}{2}|x|^2 - \tfrac{\gamma^2}{2}|q|^2 of (14) has its roots in classical
Other choices might be more appropriate for particular queueing applications, such as those associated with
optimal draining and time-to-empty criteria; see [18]and [1.] There are however considerations that favor
L2 in the trac setting. For a given total customer population, the L2 norm favors balanced queue lengths
over a situation in which some queues are empty and others are full. When each customer is a person who
has to wait in a queue, a cost structure that can be minimized by using excessive waits for some small class
of customers would be considered unacceptable.
The optimal policy itself is easy to identify by naive considerations at this point. In order to minimize (14) for a given q(t) one would minimize \tfrac{1}{2}|x(t)|^2, for which one would naturally try to choose u(t) to minimize \tfrac{d}{dt}\tfrac{1}{2}|x(t)|^2 = x(t)\cdot\dot{x}(t). In the interior of K, where (1) applies, this suggests that the optimal u is that which maximizes x \cdot Gu over u \in U. On \partial K, \dot{x} is given by (5), which makes finding the u to minimize x\cdot\dot{x} potentially more difficult. However if we assume that all q_i \ge 0 then the Skorokhod dynamics do not affect the minimizing u. To see this, consider x \in \partial K with x \ne 0, and suppose u^* \in U maximizes x \cdot Gu. It is easy to see that \sup_{u\in U} x \cdot Gu > 0. Observe that for any i \in I(x), since the off-diagonal entries of G are nonpositive, x \cdot Ge_i \le 0. Thus the i-coordinate of u^* must be 0, from which we can conclude that n_i \cdot Gu^* \le 0. Since q_i \ge 0 by hypothesis, we see that
n_i \cdot (q - Gu^*) \ge 0, \quad \text{for all } i \in I(x).
This means that \pi(x, q - Gu^*) = q - Gu^*. Also, for any i \in I(x) we have x \cdot d_i \le 0, because x_i = 0 and the only positive coordinate of d_i is the i-th. Therefore, for any u \in U we can say that
x \cdot \pi(x, q - Gu) \le x \cdot (q - Gu) \le x \cdot (q - Gu^*) = x \cdot \pi(x, q - Gu^*).
Thus the policy
(15) \phi^*(x) = \arg\max_{u \in U} x \cdot Gu
is an obvious candidate for the optimal service policy. We will see below that it is indeed optimal in the sense to be made precise in Section 2.4.
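Over the vertices of U the inner product x \cdot Ge_k equals s_k(x_k - x_{k+1}) (with x_{n+1} := 0), so the candidate policy is of linear-index type: serve the queue with the largest weighted backlog difference, with ties corresponding to the set-valued points of \phi^*. A minimal sketch (ours):

```python
import numpy as np

def optimal_vertices(x, G, tol=1e-12):
    """Indices k whose vertices e_k attain max_u x.Gu; phi*(x) is their convex hull."""
    scores = np.asarray(x, dtype=float) @ G   # scores[k] = x . G e_k = s_k (x_k - x_{k+1})
    return [k for k, sc in enumerate(scores) if sc >= scores.max() - tol]

G = np.array([[4.0, 0.0], [-4.0, 1.0]])       # s = (4, 1)
print(optimal_vertices([1.0, 2.0], G))        # -> [1]: serve queue 2, since 4*(1-2) < 1*2
```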
Several comments should be made at this point. First observe that (x) is set-valued. There are inherent
discontinuities in (x), as the optimum u jumps among the extreme points of U. When we replace u(t)
by (x(t)) in (5), we will want the resulting feedback system to have good existence properties. This is
addressed using the Filippov theory of dierential inclusions. For that it is important that (x) have closed
graph and be convex set-valued. (The notion of closed graph is often called lower semi-continuity for
set-valued functions.) It is easy to see that (15) has these properties.
Another important point to make is that the naive reasoning which suggests (x(t)) to us does not
actually imply that it achieves the smallest possible value of 1 x(T)2 for a given q() and target time T.Itis conceivable that it might be better to forgo pointwise minimization of d 1 x(t)2 in order to drive x(t)
into a dierent region (or section of the boundary) where larger reductions of x(t) could be achieved. Some
sort of dynamic programming argument, such as the Hamilton-Jacobi equation developed in Section 3, is
needed to adequately address such global optimality issues.
Although it may not have much practical import, one might ask whether allowing qi < 0 when xi =0
might aect the choice of u Uwhich minimizes x(x, q Gu). Indeed it can. If qi < 0 is large enough its
eect through the Skorokhod dynamics can produce cases in which a u/(x) minimizes x (x, q Gu).
This consideration would lead to an enhanced optimal policy which agrees with (x) on the interior of K,
but depends on both q and x when x K. Although no longer state-feedback, this enhanced control would
(we expect) produce lower values of (14), but only for those negative loads q(t) which, as we described above,
have the eect of drawing customers backwards through the system. Even so, this enhanced control would
not improve the performance of the system in the worst case sense of the dierential game formulated in
Section 2.4, as Theorem 1 below will assert.
2.3. Minimum Performance Criteria. Our strategy is expressed in state feedback form. Given a load
q(), the associated control function u(t) would be what results from solving the system
This system is a combination of a dierential inclusion, in the sense of Filippov, and a Skorokhod problem
as described above. The discussion in [2, Section 1.4]outlined how the arguments of [6]can be adapted to
establish the existence of a solution. A proof of uniqueness is more elusive. The usual Filippov uniqueness
condition would be that for some L
(xa xb) [(q Gua) (q Gub)]= ( xa xb) (Gub Gua) Lxa xb2.
This is immediate (using since by definition of (),
xa Gub xb Gua and xb Gua xb Gub
for all ua (xa), ub (xb). However, as noted in [2], when coupled with Skorokhod dynamics (16)
we are unable to conclude uniqueness based on existing results in the literature. Until that issue can be
addressed, we must allow the possibility of multiple solutions to (16). The uniqueness question is not
essential to our main result Theorem 1, however. We simply need to formulate its statement in such a way
that strategies are allowed to produce more than one control function u(t) for a given load q(t).
In general a service strategy () maps a pair x(0),q() to one or more control functions u(). We will
write u(t)=[x(0),q()](t), although this notation is not quite proper if there are actually more than one
u() associated with x(0),q()by. Rather than formulating a cumbersome notation to accommodate this,
we will simply use phrases like for any u(t)=[x(0),q()](t) to refer to all possible u(t). A strategy
should produce one or more control functions for any x(0) K and load function q() which is locally
square-integrable. We insist that a strategy be nonanticipating, in the sense that if q(s)=q(s) for all s t,
then for any u(t)=[x(0),q()](s) there is a u(t)=[x(0), q()](s) with u(s)=u(s) for all s t.Given
any such x(0), q() and a resulting u(t), the general existence and uniqueness properties of the Skorokhod
problem (e.g. [5]) provide a unique state trajectory x(t) K.
We will call a strategy () non-idling if for any nonnegative load qi(t) 0 for all i and all t 0, any
any u(t)=[x(0),q()](t), the resulting state trajectory x(t) has the property that ui(t) > 0
and occur simultaneously for some i only if In other words, all service eort is allocated
to nonempty queues, unless all queues are empty. In particular, our strategy (15) is non-idling, because if
x K and is the index of the largest nonzero coordinate of x.
One of the features of single servers as in Figure 1 is that for nonnegative loads, a non-idling strategy
will never invoke the Skorokhod dynamics on K, until it reaches Indeed if x(t) K \{0} but
the non-idling property means that 0, from which the structure of G implies that
ni Gu(t) 0.
Since qi(t) 0, we conclude that
ni (q(t) Gu(t)) 0.
Thus unless Multiple servers do not have this property. In the
case of Figure 2 for instance, if both then the service eort at B is wasted and the Skorokohd
dynamics will definitely come into play, regardless of x1 and x3. The Skorokhod dynamics will thus have a
stronger influence on the design of optimal strategies for multiple server models.
When considering those fluid models that arise as limits of discrete/stochastic queueing systems, the
stability criterion of Dai [4]is important for purposes of positive recurrence of the stochastic model. In that
setting the load q(t) is typically constant, with 1/qi equal to the mean time between new arrivals in queue
x_i. The stability property of [4] is simply that for any x(0) \in K, the state x(t) reaches 0 at a finite T \ge 0. For our single-server model, all non-idling strategies are equivalent in this respect. To see why, consider the vector
(17) \xi := (G^T)^{-1}(1, 1, \ldots, 1)^T, \quad \text{with components } \xi_i = \sum_{j=i}^{n} 1/s_j.
Observe that \xi^T G = (1, 1, \ldots, 1), so that for all u \in U we have \xi^T Gu = 1. For any nonnegative load q(t) and the u(t) resulting from any non-idling strategy, we have (on any interval prior to the first time T when x(T) = 0)
\frac{d}{dt}\, \xi \cdot x(t) = \xi \cdot \pi(x(t), q(t) - Gu(t)) = \xi \cdot (q(t) - Gu(t)) = \xi \cdot q(t) - 1.
Said another way, W(x) = \xi \cdot x is a sort of universal Lyapunov function for all non-idling controls. Thus, the first time T for which x(T) = 0 does not depend on the choice of non-idling control; it only depends on the load q(t). For constant nonnegative loads q(t) \equiv q, the Dai stability property simply boils down to
(18) \xi \cdot q < 1.
Moreover if q = (q_1, 0, \ldots, 0)^T then this reduces to the familiar load condition of [4, (1.9)]:
q_1 \sum_{j=1}^{n} 1/s_j < 1.
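Assuming the reconstruction \xi_i = \sum_{j \ge i} 1/s_j above, the load condition (18) is immediate to check numerically; a minimal sketch (ours):

```python
import numpy as np

def xi_vector(s):
    # xi_i = sum_{j >= i} 1/s_j, i.e. the solution of G^T xi = (1,...,1)^T.
    inv = 1.0 / np.asarray(s, dtype=float)
    return inv[::-1].cumsum()[::-1]

def dai_stable(s, q):
    # Stability for a constant nonnegative load q: xi . q < 1.
    return float(xi_vector(s) @ np.asarray(q, dtype=float)) < 1.0

print(xi_vector([1.0, 1.0]))              # [2. 1.]
print(dai_stable([1.0, 1.0], [0.4, 0.0])) # xi.q = 0.8 < 1 -> True
```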
Figure 3 illustrates this stability property. We have taken the optimal strategy \phi^*(x) for our model with n = 2 and s_1 = s_2 = 1, and subjected the system to the constant disturbance q(t) \equiv (.4, 0). For these parameters we find \xi = (2, 1) and \xi \cdot q = 0.8, so that the load condition (18) is indeed satisfied. The figure illustrates the resulting trajectories of (16). When x(t) reaches the ray from the origin in the direction of \xi, the solution of (16) in the Filippov sense uses an averaged control value, which takes x(t) to the origin in finite time directly along the ray. One may check that this ray consists of those x for which \phi^*(x) = U is multiple-valued.
Theorem 1 below considers the optimality of with respect to all strategies that satisfy the following
minimum performance criterion: given x(0) K with
x(0) < 1,
there exists <1 so that whenever q() is a nonnegative load satisfying
q(t) 1 for all t,
and any u(t)=[x(0),q()](t) the resulting state trajectory satisfies
x(t) <for all t 0.
It is clear from our discussion above that every non-idling control satisfies the minimum performance criterion;
simply take
2.4. The Robust Control Problem. We now want to define more carefully the sense in which our service strategy \phi^*(x) is optimal. We follow the general approach of Soravia [14] to formulate a differential game based on (14). The focus is on a value function of the form
(19) V_\gamma(x) = \inf \sup \int_0^T \Big( \tfrac{1}{2}|x(t)|^2 - \tfrac{\gamma^2}{2}|q(t)|^2 \Big)\, dt.
Here x \in K, the outer infimum is over strategies, the inner supremum is over locally square integrable loads q(\cdot), all control functions u(t) produced by the strategy in response to q(\cdot), and bounded time intervals [0,T], with x(t) the resulting solution of (1) for x(0) = x.
Figure 3. Controlled Trajectories for q(t) \equiv (.4, 0).
The gain parameter \gamma > 0 is customary in robust control formulations. However, for the structure of our problems \gamma scales out of the game (19) in a natural way. To see this, consider a particular load q(t), control u(t), and solution x(t) of (5). Make the change of variables
\tilde{x}(s) = \gamma^{-1} x(\gamma s), \quad \tilde{q}(s) = q(\gamma s), \quad \tilde{u}(s) = u(\gamma s).
Then \frac{d}{ds}\tilde{x}(s) = \dot{x}(\gamma s), and because K is a cone, \pi(\gamma \tilde{x}, \tilde{q} - G\tilde{u}) = \pi(\tilde{x}, \tilde{q} - G\tilde{u}). Thus \tilde{x}(s) solves (5) on the new time scale. With this substitution,
\int_0^T \Big( \tfrac{1}{2}|x(t)|^2 - \tfrac{\gamma^2}{2}|q(t)|^2 \Big)\, dt = \gamma^3 \int_0^{T/\gamma} \Big( \tfrac{1}{2}|\tilde{x}(s)|^2 - \tfrac{1}{2}|\tilde{q}(s)|^2 \Big)\, ds.
If V(\cdot) = V_1(\cdot) is the value (19) for \gamma = 1, then the above implies that
V_\gamma(x) = \gamma^3 V(\gamma^{-1} x).
From this point forward we simply take \gamma = 1 and write V instead of V_\gamma.
We can only expect V (x) < to hold in a bounded region. To see why, imagine a load q(t) which is large
on some initial interval 0 t s so as to drive the state out to a large value X, and then q(t)ischosenfor
t>sso as to maintain x(t)=X for t>s: q(t)=Gu(t). If X > supuU Gu, the integral in (19) grows
without bound as T , producing infinite value. We must exclude such scenarios from the definition of
the game. It turns out that the region in which V (x) will be finite is described using the vector of (17)
above:
We restrict the T in (19) to those for which x(t) remains in for all 0 t T.
This qualification on the state in turn requires us to place some limitations on the strategies () considered
as well. We need to exclude controls that cheat by encouraging x(t) to run quickly to the outer boundary
of to force an early truncation of the integral in (19). Such controls could achieve an artificially low value
by having actually destabilized the system. To exclude such policies we insist that all control strategies ()
satisfy the minimum performance criterion stated at the end of Section 2.2. With these qualifications, we
can now state precisely the optimality properties of the feedback strategy (x).
Theorem 1. Let and (x) be as defined above and suppose the boundary verifications of Section 4 have
been successfully completed. Using the control (x),forx ,define
where the supremum is over all loads q(t), all resulting control functions u(t)=[x, q()](t), and those
<T< such that the controlled state from
other control strategy () satisfying the minimum performance criterion,
with the same qualifications on the supremum.
The proof of these assertions will be discussed in Section 5 below. The qualification regarding the boundary
verifications of Section 4 will be explained in the last paragraph before Section 3.1.
3. Construction of the Value Function by Staged Characteristics
The proof in Section 5 of Theorem 1 is based on showing that the function V(x) of (21) solves the Hamilton-Jacobi-Isaacs equation associated with the game (19):
(23) H(x, DV(x)) = 0.
The Hamiltonian function is complicated by the special reflection effects on \partial K:
(24) H(x, p) = \sup_{q} \inf_{u \in U} \Big\{ p \cdot \pi(x, q - Gu) + \tfrac{1}{2}|x|^2 - \tfrac{1}{2}|q|^2 \Big\}.
The essential property of our strategy \phi^*(x) for the proof is that, given x and p = DV(x), a saddle point for the \sup_q \inf_U defining H(x, DV(x)) in (24) is given by q = p and any u \in \phi^*(x). To be specific, the minimum value of
(25) p \cdot \pi(x, p - Gu) + \tfrac{1}{2}|x|^2 - \tfrac{1}{2}|p|^2
over u \in U is 0, achieved at u \in \phi^*(x), and the maximum value of
(26) p \cdot \pi(x, q - Gu^*) + \tfrac{1}{2}|x|^2 - \tfrac{1}{2}|q|^2
over q \in R^n is 0, achieved at q = p. Together these imply (23). Our primary task is to produce V(x) and establish this property of \phi^*.
(24), the viscosity sense solutions are described using only the interior form of the Hamiltonian (27), together
with special viscosity sense boundary conditions on K. In our case it will turn out that the solution V is
actually a classical one. We find the direct formulation in terms of H more natural for our development.
We will construct the desired solution V(x) by working in the interior of K, where the complicating effects of \pi(x, v) are not present: \pi(x, v) = v, so
(27) H(x, p) = \inf_{u \in U} H_u(x, p).
Here H_u refers to the individual Hamiltonian for u \in U:
(28) H_u(x, p) = \sup_{q} \Big\{ p \cdot (q - Gu) + \tfrac{1}{2}|x|^2 - \tfrac{1}{2}|q|^2 \Big\} = \tfrac{1}{2}|x|^2 + \tfrac{1}{2}|p|^2 - p \cdot Gu.
The supremum is achieved for q = p. Also observe that for u \in U to achieve the infimum in (27) means simply that u maximizes p \cdot Gu. So for x in the interior of K, (23) and the saddle point conditions (25) and (26) simply reduce to the statement that for any u^* \in \phi^*(x),
(29) H_{u^*}(x, DV(x)) = 0, \quad \text{with } DV(x) \cdot Gu^* = \max_{u \in U} DV(x) \cdot Gu.
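Using the interior form (28) as reconstructed above, the Hamiltonian is straightforward to evaluate: since H_u is affine in u, the infimum over U is attained at a vertex. A minimal sketch (ours):

```python
import numpy as np

def H_u(x, p, G, u):
    # Individual Hamiltonian (28): (1/2)|x|^2 + (1/2)|p|^2 - p . Gu.
    x, p, u = (np.asarray(v, dtype=float) for v in (x, p, u))
    return 0.5 * x @ x + 0.5 * p @ p - p @ (G @ u)

def H_interior(x, p, G):
    # Interior Hamiltonian (27): inf over U, attained at some vertex e_k.
    n = G.shape[1]
    return min(H_u(x, p, G, np.eye(n)[k]) for k in range(n))
```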
We turn now to the construction of V (x) of by a generalized method of characteristics. We cover with
a family of paths x(t) as described below. The idea is that at a point the gradient of DV (x(t))
should be given by the costate trajectory p(t) that accompanies x(t). Thus a simple covering of by a
family of such paths will determine the values of DV (x) in . Knowing V
in the region. We itemize the essential features of this family of x(t),p(t) in (30)-(33), and then explain
their relation to the Hamilton-Jacobi-Isaacs equation and saddle point property above. To begin, the paths
must solve the system of ODEs
for some piecewise constant u(t) U. The value of u(t) may change from one time interval to another,
but at each time t we require the optimality condition
uU
Given an initial condition (depending on
x(0)) at which both x and p reach the origin:
Lastly 0 t<T we require
Observe that (30) is the Hamiltonian system p). This is intimately
connected with the propertyt that p(t)=DV (x(t)) for a solution of Hu (x, DV We will return to
this issue, near the end of Section 3.2 explaining why the manifold of (x, p) formed by our solution family is
truly the graph of a gradient also that for 0 t T we have
The formula (28) for Hu shows that (34) is indeed satisfied at
(32). It is a general property of Hamiltonian systems as in (30) that the value of Hu (x(t),p(t)) is constant
with respect to t. Property (31) implies that the jumps in u(t) do not produce discontinuities with respect
to t in (34). Therefore (34) follows as a consequence of (30) - (32). Thus (34) and (31) give us (29) for
u(t) in particular. The construction of x(t),p(t) in Section 3.2 will show that u(t) (x(t)) and that
extends to all u (x(t)). This will provide the saddle point conditions (29) on the interior.
The equation H(x, DV has many solutions, if it has any at all. One property of the particular
solution we want is that V be associated with the stable manifold of (30), in accord with the general approach
of van der Schaft [15, 16, 17]to robust nonlinear control. We see this in the convergence to the origin of
above. Another important property is that
To this end, notice that the formula (28) for an individual Hamiltonian, together with (34), impliess that
So for p(t)=DV (x(t)), (33) is the same as saying
d
dt
If we stipulate that V which is (35). One
may wonder why we have insisted on (32). Observe that (33) (in the limit as t 0) implies that
necessary if
A family of x(t),p(t) as described above will give us a function V (x) which has the desired saddle point
properties at interior points. However for x K both (25) and (26) are complicated by the nontrivial
structure of (x, v). We claim the V (x) so constructed does in fact satisfy the saddle point conditions (25)
and (26) at x K as well. We do not give a mathematical proof of this. Instead we have developed a
scheme of numerical confirmation that can be applied to test this claim for any specification of si. This is
described in Section 4. We also note the requirement in (32) above that x(t) for all 0 t<T, given
. This follows if we can verify that whenever x(t) iK then
ni (p Gu) 0.
We rely on numerical tests for this fact as well. (See the discussion of (52) in Section 4.) Based on the
success of these tests for numerous examples, we conjecture that (25) and (26) are true in general. The
reference to the boundary verifications of Section 4 in Theorem 1 indicates that the validity of that result
depends on the success of those tests.
3.1. Identification and Properties of the Invariant Control Vectors. We will construct the family
x(t),p(t) as above by generalizing the development of [2]. The key is to look for solutions that approach
the origin as in (32) using a constant control u. The solution of (30) with constant
conditions x(T)=0=p(T)is
Observe that for 0 t T /2 the values of both sin(t T) and 1 cos(t T) will be positive. Now
consider what (31) requires of (38):
uU
There are only a finite number of such they provide the key to the explicit representation of
the family of solutions x(t),p(t) that we desire.
We will call any GU satisfying (39) an invariant control vector. To simplify our discussion here, let
denote the columns of G. (In more general models, gi would be the extreme points of GU.) To say
GU means that is a convex combination of the gi: 1. For an as in
consider the set of indicies
It follows from (39) that every j J achieves the maximum value of gi over i N. Therefore
Our construction of V (x) depends on the fact that there is a unique such associated with every
nonempty subset J N. The existence of J depends on properties of our particular set of gi, but the
uniqueness does not. So we present the uniqueness argument separately as the following lemma.
Lemma 2. Suppose gi, i =1,. ,m are nonzero vectors in Rn and J {1,. ,m} is nonempty. If there
exists a vector J as described in (40) then it is unique. Suppose J and J exist for both J J.Then
Proof. We establish (41) first. Without assuming uniqueness, suppose exists as in (40). It
follows that
Now suppose both J and J exist for J J. Then the same reasoning implies that
which is (41). Regarding uniqueness, suppose for the
same J. In that case (41) implies
But then (42) implies gj J. This means is orthogonal to the span of {gj,
But since it is also in the span, we are forced to conclude that =.
It is not dicult to determine whether or not J exists for a given J. If it does, the values
must be a nonnegative solution of the linear system
From such a nonnegative solution we can recover j from
check (40). With this observation we can prove that J exists for all J N in our single-server model.
Theorem 2. Assume the specific G and U0 of our model (see Section 2). For every nonempty J N there
exists a unique invariant control vector J . Moreover the j, j J in (40) are strictly positive.
Proof. Let GJ =[gj]jJ be the matrix whose columns are the gj for just those j J. Observe that (43)
simply says J =[j]jJ must solve
GTJ GJ J =1J .
For the existence of nonnegative j in (43) it is enough to show that GTJ GJ is invertible and that all entries
of its inverse are nonnegative. Consider the diagonal matrix
and let
Note that MJ is nothing but GJ for the particular case of all
it is enough to show that (MJT MJ )1 exists and has nonnegative entries. Now observe that MJT MJ is block
diagonal
. Ak1 0
where the A and B are tridiagonal of the form
121 . 121 .
One may check by explicit calculation that denoting the size of A,
c +1
Since all entries are positive in both cases, it follows that all entries of (MJT MJ )1 and hence (GTJ GJ )1
are nonnegative, as desired. Since no rows are identically 0, all i are positive in (43) and therefore the
respective j > 0 can always be found.
Next, we need to show that gj J >gi J for i/J, j J. First observe that gj
constant over j J. Also note that for
We know that j > 0. So if i/J,
Therefore, gi J 0 for every i/J,andgj J >gi J . Lemma 2 gives the uniqueness.
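A sketch (ours) of the computation in the proof of Theorem 2: solve the linear system G_J^T G_J \mu = 1_J (our reading of (43)), normalize \mu to obtain the convex weights \lambda_J, and form \nu_J; the verification prints g_j \cdot \nu_J, which should be constant and maximal over j \in J.

```python
import numpy as np

def invariant_control_vector(G, J):
    """nu_J = sum_{j in J} lambda_j g_j with lambda_j > 0, sum lambda_j = 1."""
    GJ = G[:, J]
    mu = np.linalg.solve(GJ.T @ GJ, np.ones(len(J)))
    assert (mu > 0).all()          # Theorem 2: the solution is strictly positive
    lam = mu / mu.sum()            # convex weights
    return GJ @ lam, lam

G = np.array([[4.0, 0.0], [-4.0, 1.0]])        # single-server G with s = (4, 1)
nu, lam = invariant_control_vector(G, [0, 1])
print(nu, G.T @ nu)                            # g_j . nu_J takes the same value for every j in J
```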
Observe that of (17) is a scalar multiple of N . Indeed, in the notation of the above proof,
It follows that = N /N N . In particular the of (20) is alternately described as
Fundamental to our construction is the existence and uniqueness of the following representation of x
using a nested sequence of invariant control vectors.
Theorem 3. Assume the specific G and U0 of our model. Every nonzero x has a unique representation
of the form
for some 0 <aj, aj < 1 and J1 . Jk N. Moreover,
Proof. Consider any nonzero x . We first solve
G.
The reader can check that G1 has all nonnegative entries, which implies that all i 0. Therefore every
x K can be written as
Next, let and consider the invariant control vector
Now consider
(j aj)gj
Our choice of a implies
for all j J. However for one or more j J,j aj =0. By induction on the number of positive
coecients in (47) it is possible to write
J. Simply taking am = a and completes the induction argument.
Next notice that since Ji
2.
From the hypothesis that x we conclude that aj < 1.
Now consider J1 in (45) and any j, j J1. Then j, j Ji for all i, which from (40) tells us that
However if j J1 but j / J1 then
while for i>1, gj Ji gj Ji (depending on whether j Ji or not). We conclude that
gj x>gj x.
This proves that J1 is the set of j for which gj x takes its largest possible value, as claimed.
Regarding uniqueness, since G is nonsingular the in are uniquely determined, and then Jk from
the last term of (45) is necessarily the J above. Since Jk1 Jk,
k1
(j akj)gj
still must have nonnegative coecients j akj. But only those for j Jk1 can be positive. Hence
J. This implies that ak = a as well. Thus Jk and ak are uniquely determined.
Repeating the argument on
k1
ajJjgives the uniqueness of the other aj,Jj .
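A sketch (ours) of the peeling construction used in the proof of Theorem 3: write x = G\beta, take J to be the support of \beta, strip off \nu_J with the largest admissible coefficient, and recurse; the nesting of the index sets follows because the support can only shrink at each step.

```python
import numpy as np

def nested_representation(x, G, tol=1e-10):
    """Decompose x in K as sum_i a_i nu_{J_i} with J_1 subset ... subset J_k (Theorem 3)."""
    terms = []
    beta = np.linalg.solve(G, np.asarray(x, dtype=float))   # x = sum_j beta_j g_j, beta >= 0 on K
    while beta.max() > tol:
        J = [j for j in range(len(beta)) if beta[j] > tol]
        GJ = G[:, J]
        mu = np.linalg.solve(GJ.T @ GJ, np.ones(len(J)))
        lam = mu / mu.sum()                                  # convex weights of nu_J
        a = min(beta[j] / lam[i] for i, j in enumerate(J))   # largest coefficient keeping beta >= 0
        terms.append((a, J, GJ @ lam))
        for i, j in enumerate(J):
            beta[j] -= a * lam[i]
    return list(reversed(terms))    # ordered so that the index sets increase: J_1 subset ... subset J_k
```

Each pass zeroes at least one coefficient, so the loop terminates in at most n steps, mirroring the induction in the proof.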
The following lemma records two other facts that will be used below.
Lemma 3. Assuming the G and U0 of our model,
a) All coordinates of N are positive.
b) If x K is as in Theorem 3, and if xi =0, then i/J1.
Proof. We have already observed that is as in (17). This proves a). Next suppose
It follows that x gi 0. On the other hand, there does exist j with x gj > 0 (take j to be largest
with xj > 0 for instance). Thus the set J1 of those j with
does not include i.
It is important to realize that the single server re-entrant queue being studied here is special in that a
unique J is defined for every J N. This is not the case for more multiserver systems. For example,
consider the re-entrant line with two servers in Figure 2. The gi are Gui, ui U0 as in (3). Simple test
calculations reveal many J for which no J exists. Moreover it turns out that the gi are linearly dependent
so the points representable as in (45) can account for at most a 3-dimensional subset of
K.
3.2. Construction of the Characteristic Family. We can now exhibit the desired family of solutions to
(30). Consider a nested sequence J1 J2 Jk, and parameters
coecient functions ai(t)andi(t)acordingtotheformulas
(48) .
.
.
(Here again we use y+ to denote the positive part: k. For all 0 t T we
have
so that all ai(t)andi(t) are nonnegative. We claim that
provide a solution of of (30)-(33). The derivation of (48) is based on the calculations in [2]. Here we will
simply present as direct a calculation as possible. To that end, consider the partial sums appearing on the
left in (48):
Consider t in one of the intervals 1 <t<. Then for i<we have so that
For
Taking pairwise dierences we see that
Using this in (49), we find that for 1 <t<
Thus (30) is satisfied for 1 <t< using To confirm property (31) on that interval, observe
that since j(t)=0forj<,
we know from (46) that p(t) Gu is maximized over u Uat any u
for which only the j J coordinates are positive, in particular for
Implicit in this construction is a function defined by means of (49). Starting with x ,
express x as in (45). Then determine p(x)using
iJiwhere the partial sums of the i are determined from those of the ai according to
for 0 <i /2. This is the gradient map p(x)=DV (x) of our solution to (23). There are several facts to
record about p(x) before proceeding.
Theorem 4. The map p(x) described above is locally Lipschitz continuous in and satisfies the strict
inequality
for all x , x =0.
To see this, first consider what we will call a maximal sequence J1 J2 . Jn, i.e. and each Ji
has precisely i elements, with N. Consider the x representable as in (45) for this particular maximal
sequence. The maps x ai ai and i i p are linear. The maps ai i are simply
These are Lipschitz so long as ai remains bounded below 1. Since ai n in , we see that
p(x) is indeed Lipschitz in any compact subset of , constrained to those x associated with a fixed maximal
sequence of Ji. If in (45) we relax the positivity assumption to ai 0, then we can include additional Ji so
that every x is associated with one or more maximal sequence of Ji. The Lipschitz continuity argument
extends to the x associated with any given maximal sequence in this way. To finish the continuity assertions
of the theorem we need to consider x on the boundary between the regions associated with distinct maximal
sequences:
The uniqueness assertion of Theorem 3 means that the nonzero terms of both representations agree, which
implies that the corresponding terms of the expressions for p also agree:
Thus p(x) is continuous across the boundaries of the regions associated with dierent maximal sequences.
From this it follows that the (local) Lipschitz continuity assertion of the theorem is valid in all of .
The argument that p(x)2 < x2 is the same as [2, pg. 334, 335]. The strict inequality comes from the
fact that
Equality occurs only for n 1). Notice that for x , x
sin(n) < 1 implies that p(x)
The argument given in [2]that p(x) is indeed the gradient of a function V (x), x also generalizes to
the present context. In brief, the standard reasoning from the method of characteristics can be applied to
each of the individual Hamiltonians Hu where to see that DV (x)=p(x)
in the region associated with a given maximal sequence Ji. Continuity across the boundaries between such
regions allows us to conclude that there is indeed a C1 function V in with DV (x)=p(x). Taking V
implies that V (x) > by virtue of the discussion of (35) above.
Finally, we return to the connection of (31) with (x) and (29). Since U is convex,
and so (x) consists precisely of those u in the convex hull of ej, j J1, J1 being as in (45) for x. Because
in p(x)= iJi the i are positive when the corresponding ai are positive, we see that (x) has the
alternate description
U
In particular, the u(t) of (31) belongs to (x(t)). Moreover p(x)Gu has the same value for all u (x),
so
as desired in (29).
4. Verification of Conditions on the Boundary
We have completed the construction of V (x) satisfying (29) on the interior of . We now consider the
assertion of that for x K the resulting V remains a solution when the H of (27) is replaced by H as in
(27), and that any u (x) is a saddle point, as in (25) and (26). Specifically, we want
to confirm that for a given x K, its associated any u (x), the following hold:
Since we know Hu (x, imply (25), and (52) and (54) imply (26). Together these imply
(23).
Our validation of (52) - (54) consists of extensive numerical testing, as opposed to a deductive proof. We
will describe computational procedures below. Test calculations have been performed on numerous examples
(see Section 4.4), confirming (52) - (54) to within machine precision in each case. This gives us confidence
in the theoretical validity of (52) - (54), but until deductive arguments can be presented, their theoretical
validity must be considered conjectural.
4.1. Inactive Projection. Given x , the corresponding any u (x), (52) is
equivalent to the statement that
ni (p Gu) 0 for all i I(x).
This would be easy to check by direct calculation at a given x. However the second part of the following
lemma provides an equivalent condition which is even easier to check.
Lemma 4. The following are equivalent
1. ni (p(x) Gu) 0 for all x K with x =0,alu (x),andi I(x);
2. p(x)i 0 for all x K and i I(x);
3. p(x)i > 0 for all i and all x =0in the interior of .
Proof. Clearly (2) follows from (3) by continuity of p(x). To see that (2) implies (1), recall from our discussion
in Section 2.2 of the fact that is a nonidling policy that that ni Gu 0 for any u (x). Therefore
(2) implies
ni (p Gu) pi 0.
Finally, observe that (1) implies that the characteristic curves (49) do not exit K in forward time. From
any x(0) in the interior of , x(t) remains in K up to the time T at which
it follows that pi(x) > 0 for all i in x is in the interior of .
4.2. Control Optimality. Now we consider an approach to checking (53) at a given x K with its
associated any u (x). We want to check that u is the minimizer of
over u U. Observe that by virtue of (52)
Since pGu has the same value for all u (x) it suces to consider any single u (x) and to show that
it gives the minimum of p(x, pGu)overu U. Since this is a continuous function of u and U is compact,
we know that there does exist a minimizing u. Moreover for some F I(x), p(s, pGu)=pRF (pGu),
according to (13). So given F we can identify u as a maximizer of p RF Gu subject to the constraints of
Lemma 1 part 2). If (53) were false then an exception u would occur as a solution of such a constrained
minimization problem, for some F I(x).
There is no exception to (53) for because in that case
solves a standard linear programming problem:
subject to u U
BF (p Gu) 0, and
If u is an exception to (53) then so is any feasible maximizer uF to (55):
To verify (53) computationally we invoke a standard linear programming algorithm for (55) for each
nonempty subset F of I(x), and for each feasible maximizer so found, check that
We note that when I(x)={i} is a singleton we only need to check itself. In this case it is
sucient to check that
directly for each of j =1,. ,n. To see why, first observe that the last constraint in (55) is satisfied
vacuously. Since BF (p Gu) > 0, the same must hold for all u Usuciently close to u. It follows that u
gives a local maximum of p RF Gu over U. Since U is convex, it must be a global maximum. Therefore u
must be a convex combination of those ej for which p RF observe that since
the constraint
BF (p Gu)=ni (p Gu) > 0
is a scalar constraint. It must therefore be satisfied by one of the ej for which p RF
This means that this ej also solves (55). Hence when I(x) is a singleton it suces to check just the ej as
candidates for u, rather than invoking the linear programming algorithm.
4.3. Load Optimality. Once (52) is confirmed we know that for any u (x), (x, q Gu)=q Gu
and that
with respect to those q for which (x, q Gu)=q Gu, and that the maximal value is 0. To verify (54)
we need to be sure that there are not some other u (x) and q with (x, q Gu) =q Gu and for which
Since (x, v) is continuous and piecewise linear, and since (x) is a compact set, it follows that there does
exist a u (x) and q which maximizes (56) over q Rn and u (x). We derive necessary conditions
contingent on the specification of the subset F I(x) for which (x, q Gu)=RF (q Gu). Using part 3
of Lemma 1 we know that u =uand q =qsatisfy
Consider the ane set of all q satisfying (58). Since the inequalities are strict in (57), all q near q and
satisfying (58) must also have (x, q Gu)=RF (q Gu). Thus q =qis a local maximum of(59) p RF (q Gu) q2, subject to the constraint NLT\F RF (q Gu)=0.A simple calculation shows that this implies
where PL\F is the orthogonal projection onto the kernel of NLT\F the constraint (58) is vacuous
and we take Substituting this back into (59) and considering the result as a function of u,it
follows that u =uis a local (and hence global by convexity) solution of the quadratic programming problem:
subject to u (x),
BF PL\F (RFT p Gu) 0, and
To verify (54) computationally, we consider all pairs of subsets F L I(x). For each, we invoke a
standard quadratic programming algorithm to find a feasible maximizer u, if any exists. If such a u is found,
we take
and then check by direct calculation whether this is an exception to (54), as in (56). If we consider all
F L I(x) but find no such exceptions, then (54) is confirmed for this x, p.
Again we note that the quadratic programming calculation can be skipped in some cases. If
then (x, q Gu)=q Gu and we know there are no such exceptions to (54). Thus only need be
considered. Secondly, suppose I(x)={i} is a singleton. Then the only case to check is In
that case if there is an exception to (54), q must maximize
and satisfy
It follows that
But for the latter inequality simplifies to
ni (p Gu) <di p.
Moreover since di = ni ni+1 (with this is equivalent to
But i I(x) means i/J1,soni Gu 0 for all u (x). So (61) would imply pj < 0 for some j.If
we have already checked that p 0 in accord with Lemma 4 and our confirmation of (52), then we can be
sure no exceptions to (54) occur when I(x) is a singleton. Thus we only need to appeal to the quadratic
programming calculations when two or more xi are zero.
4.4. Test Cases. We begin our test of (30)-(33) for a specific choice of parameters s1,. ,sn by calculating
all the invariant control vectors J . Then on each face iK a rectangular grid of points x iK with
x N N N is constructed. For each grid point x we then compute the representation (45) and then the
associated according to (51). We then check that all pi 0 in accord with Lemma 4
and carry out the constrained optimization calculations described above for all possible F L I(x).
Obviously, the amount of computation involved will be prohibitive if the number of dimensions n is significant.
However, for modest n the calculations can be completed in a reasonable amount of time. We have carried
out these computations for numerous examples, including the following:
(s1,. ,sn)=(1, 1, 1)
No exceptions to (52)-(54) were found.
5. Proof of Optimality: Theorem 1
We turn now to the proof of the optimality assertions of Theorem 1. By hypothesis V (x) is as constructed
in Section 3.2, the saddle point conditions (25) and (26) have been confirmed, as well as the equivalent
conditions of Lemma 4. We know that () satisfies the minimum performance criterion of Section 2.4. As
explained above, V (x) > 0 for all x with load q(). The argument of [2, Theorem
2.1]shows that with respect to (), on any interval [0,T]on which x(t) remains in , we have
dt.
For a given x(0) , let x(t),p(t) be the particular path constructed according to (30), with x(T)=0.
We know that u(t) (x(t)) so x(t) is the controlled path produced by in response to the load q(t).
Along it we have from (36) that
and therefore
dt.
This establishes (21).
Next we consider an arbitrary strategy satisfying the minimum performance criterion. We would like to
produce a load q(t) which is related to the resulting state trajectory x(t)byq(t)=DV (x(t)). In [2, Theorem
2.3]this was accomplished by limiting to state-feedback strategies and appealing to an existence result for
Filippov solutions of the dierential inclusion [2, (2.23)]. Here we only approximate such a load. By taking
advantage of the properties of (x, ), our argument will not be limited to state-feedback strategies, and will
not need the Filippov existence result.
Given x we will show that for any =>0 there exists a load q(t) satisfying qi(t) 0andq(t) 1
for all t>0, and such that (for some u(t)=[x, q()](t))
holds for all T. The dicult question of existence for the closed loop system
for an arbitrary strategy is easily resolved by introducing a small time lag:
The system can now be solved incrementally on a sequence of time intervals [tn1,tn]where
For t [tn1,tn]the values of q(t) are determined by x(t) on the previous interval [tn2,tn1], so the basic
existence properties of the system under subject to a prescribed q(t) insure the existence of x(t)andq(t)
as above. Let u(t)=[x, q()](t) be the associated control function. Since q(t) is always a value of DV (x)
at some x , we know qi(t) 0andq(t) 1, and the minimum performance hypothesis insures that
x(t) remains in a compact subset of : x(t) , <1. We must explain how the time lag leads to the
+= term in (62).
Observe that because (x, v)=RF v for one of only a finite number of possible matrices RF , and because
there is a uniform upper bound on x:
Consequently,
x(t) x(t =et)B=et.
Next, on the subset of x with x , DV (x) is Lipschitz; see Theorem 4. It follows that for some
constant C1 (independent of =) such that
We observed previously that (x, v) in Lipschitz in v. It follows that for some constant C2 and all =, t > 0
We know that
it follows that
integrating both sides over [0,T]and replacing = by =/C2 yields (62).
With this q(t) in hand the remainder of the argument proceeds as in [2]: if there exists a sequence
with x(Tn) 0, then V (x(Tn)) 0 in (62) which implies
dt.
Suppose no such sequence exists. Then in addition to x(t) <1 we know x(t) does not approach
0; it must remain in a compact subset M of \{0}. From (63),
dt.
WealsoknowfromTheorem4that1 x2 1 DV (x(t))2 has a positive lower bound. Therefore the right
side in (64) is infinite. Thus (64) holds in either case. Since =>0 was arbitrary, (22) follows.
6. An Example with Restricted Entry
In this section we reconsider our model in modified so that the exogenous load
only applied to queue x1. This is illustrated in Figure 4. The system equations are now
where q(t) is a scalar and
,while the control matrix
the control values u Uand the constraint directions di all remain as before. We carry out the same general
approach to constructing V (x) as outlined at the beginning of Section 3. The details of the analysis are
different in several regards. This is significant because it shows that our general approach does not depend on
all the structural features of Sections 3.1 and 3.2. We will find that the optimal policy is the same (x) as
given in (15) above. In higher dimensions (n > 2) it is interesting to speculate whether the optimal policy
would likewise remain unchanged if we removed the exogenous loads qi(t), i > 1. However, at present this
has only been explored in 2 dimensions.
Figure 4. Re-entrant Loop with Single Input Queue
The presence of M in (65) changes the individual Hamiltonian:
p1 being the first coordinate of p =(p1,p2). The supremum is achieved for p1. The corresponding
Hamiltonian system, for a given u U,is
We calculate the invariant control vector
as described in Section 3.1. Other than there are no additional J to consider.
To simplify notation we will drop the subscript N:
The first place we find a significant difference from our previous analysis is in the calculation of a
solution to the Hamiltonian system associated with , analogous to (38). Previously, we did this using
being as determined by the construction of
because of the missing p2 term in the x2 equation of (66), we must use a Gu(t) which is both different from
and time dependent. We seek a solution x(t) = a(t), p(t) = (t) (both a(t) and (t) nonnegative) to
for some function 0 1. The overbar on x, p distinguishes this special solution from the others
encountered below. In light of the p equation and the terminal conditions (32), the solution we seek must
be of the form
for some function (t) 0. is a scalar multiple of , the right side of the x equation in
must also be a scalar multiple of . Since implies a relationship between (t)and(t),
which works out to be
Using this we can reduce (67) to a single second order differential equation for (t):
(t)+A(t)=1,
where A is the constant
The solution (for initial conditions It will be convenient
for the rest of this discussion to fix so that
(One consequence of fixing is that for a given x the t<0 for which x(t)=x depends on x.) For
t 0 we confirm that 0 (t) 1, (t) 0anda(t)= (t) 0, as we wished.
We now have the desired solution:
This special solution provides the final stage of our family of paths x(t),p(t) as in (30)-(33) of Section 3,
but with some adjustment. In contrast to Section 3, our u(t) = [(t), 1 − (t)]^T varies continuously, instead
of being piecewise constant. This means we have to pay closer attention to (34). Once again it is satisfied
at t = 0 by virtue of the terminal conditions. When we calculate (d/dt) Hu(t)(x(t), p(t)), one term does not
automatically drop out:
Since p is a scalar multiple of and we know g2, we do indeed find that Hu(t)(x(t), p(t)) 0.
The analogue of (33) for this example is
This is because q2 x2 =(p1(t))2 < x(t)2 when q(t)=p(t). Also note that
taking advantage of the fact that Hu (x, to verify (33) along x, p in particular, simply observe
that
since both and are positive (excepting In the following xi(t), pi(t) will refer to the individual
coordinates of this particular solution.
Our special solution x(t), p(t) provides the final stage (t1 <t 0) for each of the solutions in the family
described at the beginning of Section 3. The initial stage (t<t1 < 0) will be a solution of (66), with
either e1 or e2, which joins x, p at some t1 < 0: x(t1)=x(t1), p(t1)=p(t1). In other words we solve (66)
backwards from x(t1), p(t1), for the appropriate choice of u. It turns out that using e1 produces that
part of the family which covers a region below the line x(·) in the first quadrant, and using e2 produces
the x(t) which cover a region above x(·). This is illustrated in Figure 5, for parameter values s1 = 4, s2 = 1.
Note that the region covered by this family, and hence the domain of V (x), is no longer the simple polygon
of (20).
Figure 5. Characteristics for Restricted Entry Loop
We will need to verify that the resulting family indeed satisfies all the conditions outlined in Section 3.
These verifications are discussed below. Once confirmed, this implies that the optimal control (x) produces
e1 if x is below the line x(t), e2 if x is above the line, and any u ∈ U if x is on the line. So although we
will not produce as explicit a construction for x, p as we did in Section 3, we still find the same optimal
control
uU
6.1. Interior Verifications. We have already discussed properties (30)-(33) of Section 3 for the final stage
of our family of solutions: x(t) = x(t), p(t) = p(t) for t1 ≤ t ≤ 0. However, we still need to verify (31) and
(68) for the initial segment t < t1. In Section 3.2 this followed from properties of the J and the rather
explicit formulae for x(t) and p(t) in terms of them. Here we have not developed such an elaborate general
structure. Instead we resort to direct evaluation of the needed inequalities. By solving (66) for
x(t1) = x(t1), p(t1) = p(t1) we obtain the formulas for the lower half of our family: for t < t1 < 0,
For any <t1 < 0, the above will be valid for t<t1 down to the first time at which x(1)(t) either reaches
the horizontal axis, or reaches the outer boundary of , curves appearing in
the figure, b(t1) <1(t1).) A formula for 1() is easily obtained from the expressions in (69). The value
can be identified as the point at which the determinant of the Jacobian of x(1) with respect to
vanishes. An explicit formula is possible for b() as well. (For brevity we omit both formulas.) Thus
is valid for
The points on the horizontal boundary 2K are x(1)(1(t1)) for those t1 with b(t1) 1(t1).
The analogous formulas for the upper half of our family are obtained by solving (66) for
x(t1)=x(t1), p(t1)=p(t1) to obtain the following expression for t<t1 < 0:
This time, for a given <t1 < 0, the valid range of t<t1 is slightly different. It turns out that x(2)(t)
always reaches the outer boundary of , at a time prior to contacting the vertical boundary 1K.
(Once again, an explicit formula for b() is obtained by setting the Jacobian of (70) equal to 0.) Thus given
t1 < 0, (70) is valid for
The vertical boundary itself is traced out by the solution for t1 =0:
x (t)=
valid for b(0) <t 0.
The availability of these formulas makes it possible to check the inequalities we need for (31) and (33).
For (31) we want to verify
For (73), it turns out that
), which certainly is positive for t < t1. We resort to numerical calculation to confirm (72). We have already
noted that (33) should be replaced by (68):
x(i)(t)2 − (p(i)(t))2 > 0, for both i = 1, 2. It is a straightforward task to prepare a short computer program that, given values
for s1, s2, evaluates (72) and (68) for a large number of (t, t1) pairs with t < t1 extending through the full range of
possibilities. In this way we have confirmed the above inequalities numerically.
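A sketch of such a verification program is given below. The functions `ineq72` and `ineq68` are placeholders for the explicit expressions obtained from (69) and (70), which are not reproduced here; only the grid sweep over (t, t1) pairs is illustrated.

```python
import numpy as np

def verify_inequalities(s1, s2, ineq72, ineq68, n_t1=200, n_t=200, t_min=-10.0):
    """Evaluate the two inequalities on a grid of (t, t1) pairs with t < t1 < 0
    and report the smallest values found (they should all be positive)."""
    worst72, worst68 = np.inf, np.inf
    for t1 in np.linspace(t_min, -1e-3, n_t1):
        for t in np.linspace(t_min, t1 - 1e-3, n_t):
            worst72 = min(worst72, ineq72(s1, s2, t, t1))
            worst68 = min(worst68, min(ineq68(s1, s2, t, t1, i) for i in (1, 2)))
    return worst72, worst68

# Example call with dummy expressions standing in for (72) and (68):
w72, w68 = verify_inequalities(4.0, 1.0,
                               ineq72=lambda s1, s2, t, t1: 1.0,
                               ineq68=lambda s1, s2, t, t1, i: 1.0)
print("min of (72):", w72, " min of (68):", w68)
```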
6.2. The Horizontal Boundary. Finally we must consider the influence of the projection dynamics at
points x K, confirming as we did in Section 4 that our remains a saddle point
when (x, v) is taken into account. This entails checking the same three facts, (52), (54), and (53) as before.
We consider the two faces of K separately.
The reflection matrix for 2K is
and
(x, v)= .
Now observe that
which is independent of q. Since it follows that (x, Mq Gu)=
Mq Gu, so that (52) reduces to (72). Moreover this independence of q also implies (54) since we know
is the saddle point in the absence of projection dynamics.
Next consider (53). The u ∈ U are just
Note that corresponds to = 1. So for (53) we want to show that the minimum of
occurs at little algebra shows that for 0 s1s+2s2 we have
so that
For s1s+2s2 1 we have n2 (Mp1 Gu) 0, so that
Thus the function of in (75) is piecewise linear, in two segments. The slope of the right segment ( s2
1) is
which we already know to be negative, by virtue of our work in checking (72). Thus to establish (53) we
only need to check that the value for greater than that for =0:
which is equivalent to p g1 0. This we have confirmed numerically, by evaluating
for various choices of s1,s2 and t1 throughout its range.
6.3. The Vertical Boundary. Recall that along 1K we have and that from (71) we know
Thus (x, Mq Gu)=(Mq Gu), confirming (52).
The reflection matrix on 1K is
We already know that
over those q for which (x, Mq g2)=Mq g2. We need to consider the possibility of a global maximum
among those q with (x, Mq g2)=R{1}(Mq g2), namely q with n1 (Mq g2)=q 0. However,
is maximized at its maximum over q 0 must occur at confirms (54).
Finally, we turn to (53). Since any u as in (74) we have
Therefore, after a little algebra,
which is minimized at corresponds to verifies (53).
--R
An Introduction to Variational Inequalities and their Applications
Queueing Systems: Theory and Applications
Reflected Brownian motion on an orthant
Neumann type boundary conditions for Hamilton-Jacobi equations
Nonlinear state space H∞ control theory
On optimal draining of re-entrant fluid lines
--TR | skorokhod problem;queueing;robust control |
606888 | On an Augmented Lagrangian SQP Method for a Class of Optimal Control Problems in Banach Spaces. | An augmented Lagrangian SQP method is discussed for a class of nonlinear optimal control problems in Banach spaces with constraints on the control. The convergence of the method is investigated by its equivalence with the generalized Newton method for the optimality system of the augmented optimal control problem. The method is shown to be quadratically convergent, if the optimality system of the standard non-augmented SQP method is strongly regular in the sense of Robinson. This result is applied to a test problem for the heat equation with Stefan-Boltzmann boundary condition. The numerical tests confirm the theoretical results. | Introduction
We consider an Augmented Lagrangian SQP method (ALSQP method) for the following
class of optimal control problems, which includes some meaningful applications to control
problems for semilinear partial dierential equations:
Minimize
subject to y
In this setting Y and U are real Banach spaces, are
differentiable mappings, and U ad is a nonempty, closed, convex and bounded subset of U .
The operator is a continuous linear operator from Y to U . In general, (P) is a non-convex
problem. We will refer to u as the control, and to y as the state.
In the past years, the application of ALSQP methods to optimal control or identification
problems for partial differential equations has made considerable progress. The list of
contributions to this field has already become rather extensive, so that we shall mention
only the papers by Bergounioux and Kunisch [6], Ito and Kunisch [13], [14], Kaumann
[15], Kunisch and Volkwein [16], and Volkwein [25], [26].
Supported by SFB 393 "Numerical Simulation on Massive Parallel Computers".
2 Parts of this work were done when the third author was visiting professor at the Universite Paul
Sabatier in Toulouse.
In this paper, we extend the analysis of the ALSQP method to a Banach space setting.
This generalization is needed, if, for instance, the nonlinearities of the problem cannot be
well defined in Hilbert spaces. In our application, this will concern the nonlinear mapping
. A natural consequence of this extension is that, in contrast to the literature about the
ALSQP method, we have to deal with the well known two-norm discrepancy. Another
novelty in our approach is the presence of the control constraints u 2 U ad in (P) , which
complicates the discussion of the method. To resolve the associated difficulties, we rely
on known results on the convergence of the generalized Newton method for generalized
equations.
One of the main goals of this paper is to reduce the convergence analysis to one main
assumption, which has to be checked for the particular applications: the strong regularity
of the optimality system. In this way, we hope to have shown a general way to perform
the convergence analysis of the ALSQP method.
For (P) we concentrate on a particular type of augmentation, applied only to the
nonlinearity of the state equation. Splitting up the state equation into a linear equation for y
and the equation z = (y), we augment only the second equation. This type of augmentation
is useful for our application to parabolic boundary control problems. The convergence
analysis is confirmed by numerical tests, which are compared with those performed for the
(non-augmented) SQP method.
We obtain the following main results: If the optimality system of first order necessary
optimality conditions for (P) is strongly regular in the sense of Robinson, then the ALSQP
method will be locally quadratically convergent under natural assumptions. This result is
applied to a boundary control problem for a semilinear parabolic equation. In [23], the
convergence of the (non-augmented) SQP method was shown for this particular problem
by verifying this strong regularity assumption. In this way, our result is immediately
applicable to obtain the convergence of the augmented method in our application.
The paper is organized as follows: In Section 2 we fix the general assumptions and
formulate first order necessary and second order sufficient optimality conditions. Section
3 contains our example, a semilinear parabolic control problem. The ALSQP method is
presented in Section 4, where we show that its iterates are well dened in the associated
Banach spaces. The convergence analysis is developed in Section 5 on the basis of the
Newton method for generalized equations. The last part of our paper reports on our
numerical tests with the ALSQP method.
General assumptions and optimality conditions
We first fix the assumptions on the spaces and mappings. The Banach spaces Y and U
mentioned in the introduction stand for the ones where the following holds:
f is a mapping of class C 2 from Y U into R,
is a mapping of class C 2 from Y into U .
For several reasons, among them the formulation of the SQP method and the sufficient
second order optimality conditions, we have to introduce real Hilbert spaces Y 2 and U 2
such that Y (respectively U) is continuously and densely imbedded in Y 2 (respectively U 2 ).
Moreover, we identify U 2 with its dual U
. Therefore, denoting by U the dual space of U ,
we have the continuous imbeddings
Let us introduce the product space endowed with the norm jjvjj
jjujj U , and the space endowed with the norm jjvjj
Notations: We shall denote the first and second order derivatives of f and by
Partial derivatives are indicated by associated
subscripts such as f y (v), f yu (v), etc. Notice that, by their very definition, f 0 (v) 2 V ,
U)). The open ball in V centered
at v, with radius r is denoted by B V (v; r). The same notation is used in other
Banach spaces. We will denote the duality pairing between U and U (resp. Y and Y )
by reserved in this paper for the scalar product
of U 2 .
Below we list our main assumptions:
(A1) is a linear, continuous, and bijective operator from Y 2 to U 2 . Moreover, its
restriction to Y , still denoted by , is continuous and bijective from Y to U . In
addition, we assume that U ad is closed in U 2 .
(Extension properties) For all r > 0 there is a constant c(r) > 0 such that, for
all
for all v 2 V; (2.1)
for all From (2.1) it follows that f 0 (v) can be considered as a continuous
linear operator from V 2 to R, and 0 (y) can be considered as a continuous linear
operator from Y 2 to U 2 .
Since 00 (y belongs to U , and U U , the term k 00 (y
ful. Moreover, f 00 (v) (respectively 00 (y)) can be considered as a continuous bilinear
operator from V 2 (respectively U ). In the second
order derivatives we shall write [v;
there is a c(r) > 0 such
that
for all z 2:
(Remainder terms) Let r F
denote the i-th order remainder term for the
Taylor expansion of a mapping F at the point x in the direction h. Following Ioffe
[11] and Maurer [18] we assume
kr
For all y 2 Y , the operator is bijective from Y 2 to U 2 . Its restriction to
Y , still denoted by is bijective from Y to U .
For all belongs to b
Y , where b
Y is a Banach space continuously imbedded
in Y . For all belongs to U .
The restriction of (
Y is continuous from b
Y to U .
The first assumption concerns the linearized state equation. The second and third
assumptions are needed to get optimal regularity for the adjoint equation. Indeed, the
adjoint state corresponding to
dened by
To study the convergence of the SQP method we need that
p belongs to U . Since by
denition f u (v) belongs to U , the condition f u (v) 2 U is a regularity condition on
f u (v).
In the analysis of the Generalized Newton Method, we need the following additional regularity
conditions.
(A6) For every y 2 Y , 0 (y) belongs to L(U; ^
Y ). The mapping y 7! 0 (y) is locally of
class C 1;1 from Y into L(U; b
Y ). For every y belongs to L(U; b
The mapping (y is locally of class C 1;1 from Y Y into L(U; b
(A7) The mapping v 7! f 0 (v) is locally of class C 1;1 from V into ^
Y U .
3 Example - Control of a semilinear parabolic equa-
tion
Let us consider the following particular case of (P) :
Z
a u u
Z
a y y
subject to
in
Here,
n is a bounded domain with boundary of class C 2 , T > 0; > 0, y T 2
2 and u a < u b are
given fixed. The function ' : R → R is nondecreasing, and locally of class C 2,1 . (The choice
fits into this setting.)
Let us verify that problem (E) satisfies all our assumptions. This problem is related to (P)
as follows:
fy
fy
where W (0; T ) is the Hilbert space defined by
dt
The space Y (respectively Y 2 ) is endowed with the norm kyk
Let us check the assumptions.
The operator is obviously continuous from Y 2 to U 2 , and is bijective from Y 2 to U 2
(see [17]). It is also a bijection from Y to U (see [8], [20]). Thus (A1) is satisfied.
continuous imbedding ([8], [20]), we can verify that is a mapping
of class C 2 from Y into U , and that f is a mapping of class C 2 from Y U into R.
Moreover, for all v
f y (y
Z
(y
Z
a y (x; t)y(x; t) dSdt
f u (y
Z
Thus, the derivative f y (v) can be identified with the triplet (0; y
()). The assumptions (2.1) and (2.3) can be easily satisfied.
To verify assumption (A5), let us introduce the space b
This space can be identified with the subspace of Y of all elements having the form
y 7!
Z
Z
y
Z
where (^y
y
Y . From the above calculations, it is clear that f y (v
belongs to b
Y . Let y (d;a;u) be the solution to the equation
The operator (d; a; u) 7! y (d;a;u) is continuous and bijective from U 2 into Y 2 ([17]), and from
U into Y ([8], [20]). The rst part of (A5) is satised. To prove the second part, let us
consider the adjoint equation
y
For all (d; a; u) 2 U , and all ^
y
Y , by using a Green formula, we obtain
Z
Z
Z
Z
Z
y
Z
Therefore nothing else than (; (0); j ). With this identity, we
can easily verify the second part of assumption (A5).
Let us nally discuss properties of some second order derivatives. The second derivative
For
We can interpret 00 (y) as an element of L 1 () L 1 (), and (2.2) can be checked.
The other assumptions on the second order derivatives, precisely (2.4) and (A4), are also satisfied.
4 Optimality conditions
This section is devoted to the discussion of the rst and second order optimality conditions.
Let
u) be a local solution of (P) . This means that
holds for all v which belong to a sufficiently small ball B V (v; ε) and satisfy all constraints
of (P) .
Theorem 1 Let
u) be a local solution of (P ) and suppose that the assumptions
(A1), (A2), and (A5) are satisfied. Then there exists a unique Lagrange multiplier
such that
hf u (y;
Proof. Since f is Fréchet-differentiable at
u), is of class C 1 from Y to U , and
is surjective from Y to U , there exists a unique
such that (4.4) and (4.5)
are satisfied (see [12], and also Theorem 2.1 in [1]). The variational equation (4.4) admits a
unique solution
defined by
(v). Due to assumption (A5), it follows
that
p belongs to U . □
We next introduce the Lagrange function
The system (4.4)-(4.5) is equivalent to
For shortening, we shall write the adjoint equation (4.4) in the form f y (v)+p(+ 0
Thus the rst order optimality system for (P) is
hf
In what follows, the derivatives in L 0 and L 00 refer only to the variable v, but not to the
Lagrange multiplier p. Let us assume that
also satisfies the following:
(SSC) Second order sufficient optimality condition
There is > 0 such that
holds for all that satisfy the linearized equation
Remark 1 The condition (SSC) is a quite strong assumption, and does not consider
active control constraints, which might occur in U ad . For instance, this can be useful
for constraints of the type U In
concrete applications, the use of an associated second order assumption is possible (see for
example [23]). However, we intend to shed light on the main steps, which are needed for
a convergence analysis of the augmented Lagrangian SQP method, rather than to present
the difficult technical details connected with weakening (SSC). We shall address this issue
again in Section 6.
Let us complete this section by some simple results, which follow from the second order
sufficient condition.
Lemma 1 Suppose that the assumptions (A1)-(A5) are satisfied. Suppose in addition
that v satisfies the second order sufficient condition (SSC). Then there exists > 0 such
that, for every
p) given in B V U ((y; u;
for all that satisfy the perturbed linearized equation
Proof. We briefly explain the main ideas of this quite standard result, to show where
the different assumptions are needed. If
p) is sufficiently close to (y;
p), then the
quadratic form L 00
p) is arbitrarily close to L 00 (y; u;
p). By (SSC), (A2), and (A3)
we derive that
provided that y+ 0 (^y) y analogous estimate has to be shown for the solutions
of the perturbed equation (4.11), where 0 is taken at ^
y. Write for short B := L 00
and dene z as the unique solution of z use the rst part of (A5)).
Then
The assumptions (A1), (A3), and (A5) ensure the estimate
ck^y
(4.
(here and below c stands for a generic constant). Therefore,
7=8 k(z; u)k 2
follows by (4.12), (4.14) and the Young inequality, where ε > 0 can be taken arbitrarily small.
Now we re-substitute z by y and arrive by similar estimates at
provided that is sufficiently small. Thus (4.10) is proven. □
Although we shall not directly apply the next result, we state it to show why the different
assumptions are needed. Some of them have been assumed to deal with the well-known
two-norm discrepancy.
the optimality system (4:7) of (P ) and the second
order sufficient condition (SSC). Suppose that the assumptions (A1)-(A5) are fulfilled.
Then there are constants ε > 0 and > 0 such that the quadratic growth condition
holds for all admissible
Proof. The first order optimality system implies
Subtracting the state equations for y and
y, analogously to (4.13) we find that
1 . Then v h := (y
solves the linearized equation
(4:9), and the coercivity estimate of (SSC) can be applied to v h . Moreover, (A5) yields
khk Y2 c kr
We insert v h in (4.16), write for short B := L 00 (v;
p) and proceed similarly to the estimation
of Bv 2 in the last proof:
ckv
jr L
g:
In these estimates, the assumptions (A2) and (A3) were used. We have kv v v h k V 2
, and the estimate of h by the first order remainder term r
1 can be inserted. Let
the
quadratic growth estimate follows from classical arguments. □
This Lemma shows that the second order condition (SSC) is sufficient for local optimality
of (y; u) in the sense of V , whenever (y; u) solves the first order optimality system. Notice
that we cannot show local optimality in the sense of V 2 .
5 Augmented Lagrangian method
5.1 Augmented Lagrangian SQP method
In this section we introduce the Augmented Lagrangian SQP method (ALSQP) with some
special type of augmentation. For this, we first represent (P) in the equivalent form
Minimize
subject to z
The augmentation takes into account only the nonlinear equation z = (y). The
ALSQP method is obtained by applying the classical SQP method to the problem
Minimize f (y;
subject to z
where > 0 is given. We define the Lagrange functional L for ( ~
P ), and the corresponding
augmented functional L on Y U 4 as follows:
Once again, the derivatives L 0 and L 00 will stand for derivatives with respect to (y; u; z)
and do not refer to the Lagrange multipliers (p; ). The same remark concerns L . Let
the current iterate of the ALSQP method, and consider the linear-quadratic
problem
(QP
Minimize f 0
subject to z (y n
The new iterate (y obtained by taking the solution (y
of (QP
exists), and the multipliers (p n+1 ; n+1 ) associated with the constraints
respectively. For we recover the
classical SQP method.
Let us also introduce the following problem:
QP
subject to y
The problems (QP
QP
are equivalent in the sense made precise below.
Theorem 2 Let (y a solution of (QP
associated Lagrange multipliers
must solve the problem ( d
QP
n+1 ), and the
multiplier p n+1 is the solution to the equation
Moreover, z n+1 and n+1 must satisfy
z
Conversely, if (y n+1 ; u n+1 ) is a solution of ( d
QP
are dened by
{ (5:3), then (y n+1 ; u n+1 ; z n+1 ) is a solution to (QP
associated Lagrange
multipliers (p n+1 ; n+1 ).
Proof. Let us first assume that (y n+1 ; u
To show that (y n+1 ; u n+1 )
solves ( d
QP
n+1 ) and that the relations (5.1)-(5.3) are satisfied, we investigate the following:
Explicit form of (QP
We expand all derivatives occurring in the problem (QP
. Write for short k and introduce for convenience the functional g(y;
Having this, the objective to minimize in (QP
n+1 ) is given by
The minimization is subject to the constraints
z (y n
Reduction to ( d
QP
To reduce the dimension of the problem, we exploit the second
one of the equations (5.4): We insert the expression z z n 0 (y n )(y y
in the functional J . Then the second and fourth items in the definition of J are constant
with respect to (y; z; u). They depend only on the current iterate and can be neglected
during the minimization of J . The associated functional to be minimized is
~
Moreover, we can delete the second equation of (5.4) by inserting the expression for z in
the first one. This explains why (y n+1 ; u n+1 ) is a solution of ( d
QP
Necessary optimality conditions. To derive the necessary conditions for the triplet
with the Lagrange functional
~
The conditions are ~
L u (u u n+1 ) 0, for all u 2 U ad . An evaluation yields
for We mention for later use, that the equations (5.4) belong to the optimality
system of (QP
too. The update formulas for p n+1 and n+1 follow from (5.5), (5.6).
We have shown one direction of the statement. The converse direction can be proved in
a completely analogous manner. If (y
QP
n+1 ), then we substitute z for
in the corresponding positions. Then it is easy to
verify that (y n+1 ; u subject to (5.4), and that n+1 is the multiplier
associated to the equation z (y n
Remark 2 The update rules (5:2) { (5:3) imply that the Lagrange multiplier coincides
with p during the iteration, while this is not necessarily true for the initial values of n
and p n . Therefore, with the possible exception of the first step, up to a constant, the objective
functional of ( d
QP
n+1 ) is
~
This easily follows by calculating L 00 (y from the formula (4:6). Moreover, we are
justified in replacing n by p n in the variational equation (5:1).
Theorem 2 shows that the iterates of the ALSQP method can be obtained by solving
the reduced problem ( d
QP
solutions of (QP
exist. This question of
existence can be answered by considering ( d
QP
Theorem 3 Let (y;
p) satisfy the assumptions of Lemma 1 and let
sufficiently small, then ( d
QP
n+1 ) has a unique solution (y
Moreover, (y being defined by (5:3)) is the unique solution of
(QP
Proof. Assume that k(y;
Let us prove the existence
for ( d
QP
In view of the remark above, the functional ~
J can be taken instead of J for
the minimization in ( d
QP
Its quadratic part is
where ~
tends to
in U , since z n (y
z yields that the objective functional of ( d
QP
n+1 ) is coercive on the
set ~
hence it is strictly convex there. The set
U ad is non-empty, bounded, convex, and closed in U , and in U 2 as well. We have assumed
in (A5) that ( is continuous from U 2 to Y 2 at all y 2 Y , in particular at
Therefore, ~
C is non-empty, convex, closed, and bounded in Y 2 U 2 . Now existence
and uniqueness of a solution (y
QP
n+1 ) are standard conclusions.
Moreover, U ad U , hence u n+1 2 U , and the regularity properties of (
guarantee that y n+1 2 Y . Further, z n+1 2 U follows from (5.3). Existence and uniqueness
for (QP
are obtained from Theorem 2. □
The update rules of Theorem 2 show that (p n+1 ; n+1 ) is uniquely determined in U 2 U 2 .
We get even better regularity:
Corollary 1 If the initial element (y is taken from Y U 4 , then the iterates
generated by the ALSQP method are uniquely determined and belong
to Y U 4 .
Proof. Existence and uniqueness follow from the last theorem and the update rules (5.2)-
(5.3). We also know that (y n+1 ; u . The only new result we have to
derive is that (p n+1 ; n+1 ) remains in U U as well. Since we have to
verify p n+1 2 U . This, however, follows instantly from the equation (5.1): We know that
belong to b
Y (assumptions (A5), (A6),
(A7)). Moreover, the same holds for ( 00 (y n )(y n+1 y n
Therefore, (A5) ensures that the solution p n+1 of (5.1) belongs to U . □
5.2 Newton method for the optimality system of (P )
The augmented SQP method can be considered as a computational algorithm to solve the
first order optimality system of (P ) by the generalized Newton method. This equivalence
will be our tool in the convergence analysis. The optimality system for (P ) consists of the
equations (L (w))
z
for the unknown variable The optimality system (5.8) of (P ) is equivalent
to a generalized equation. To see this, let us first introduce the following set-valued
mappings:
Y
f0 U g N(u) f0 U g f0 U
and consider F
Y U 4 dened by
f u (y; u) p
z (y)C C C C C C C C C C A
Notice that N(u) has a closed graph in U U . It is the restriction to U of the normal cone
at U ad in the point u. (For the definition of the normal cone, we refer to [5].) In the first
component of F , due to (A6), we identify with the element
which belongs to ^
Y . With (A5) and (A6), we can
easily verify that F takes values in ^
Y U 4 .
Lemma 3 The optimality system (5:8) of (P ) is equivalent to the generalized equation
Proof. By calculating the derivatives of L in (5.8), we easily verify that:
(L (w)) y
(L (w)) z
(L (w)) u
z (y)C C C C C C A
Therefore, by the definition of F , (5.10) is equivalent to
The third relation can be rewritten as:
This is just the variational inequality of (5.8), and the equivalence of (5.8) and (5.10) is
verified. □
Next we recall some facts about generalized equations and related convergence results
for the Generalized Newton Method (GNM). Let W and E be Banach spaces, and let O
be an open subset of W. Let F be a differentiable mapping from O into E , and T be a
set-valued mapping from O into P(E) with closed graph. Consider the generalized equation
The generalized Newton method for (5.11) consists in the following algorithm:
choose a starting point !0 2 O; for k = 0, 1, 2, . . . , compute !k+1 as the solution to the generalized equation
0 2 F (!k ) + F 0 (!k )(! − !k ) + T (!).
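For a finite-dimensional caricature of this iteration, one may take T to be the normal-cone map of a box, so that each Newton step amounts to solving a linearized variational inequality. The sketch below does this with a crude projected fixed-point loop as inner solver; it only illustrates the scheme (5.12), not the infinite-dimensional setting above, and the step-size choice and example data are assumptions.

```python
import numpy as np

def generalized_newton_box(F, dF, w0, lo, hi, tol=1e-10, max_iter=30):
    """Josephy-Newton iteration for 0 in F(w) + N_[lo,hi](w).

    Each outer step solves the linearized inclusion
        0 in F(w_k) + dF(w_k)(w - w_k) + N_[lo,hi](w)
    by a damped projected fixed-point iteration (a crude inner solver)."""
    w = np.asarray(w0, dtype=float)
    for _ in range(max_iter):
        Fk, Ak = F(w), dF(w)
        tau = 1.0 / (1.0 + np.linalg.norm(Ak, 2))   # conservative step size
        v = w.copy()
        for _ in range(5000):                        # inner projected iteration
            v_new = np.clip(v - tau * (Fk + Ak @ (v - w)), lo, hi)
            if np.linalg.norm(v_new - v) < 1e-13:
                v = v_new
                break
            v = v_new
        if np.linalg.norm(v - w) < tol:
            return v
        w = v
    return w

# Tiny example: F is the gradient of a strongly convex quadratic on a box.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -2.0])
w_star = generalized_newton_box(lambda w: A @ w - b, lambda w: A,
                                w0=np.zeros(2), lo=np.zeros(2), hi=np.ones(2))
print(w_star)
```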
The generalized Newton method is locally convergent under the assumptions stated below.
(C1) Equation (5.11) admits at least one solution
!.
(C2) There exist constants ~r(!) and ~c(!) such that F 0 is Lipschitz continuous with modulus ~c(!) on
BW (!; ~r(!)).
Denition 1 The generalized equation is said to be strongly regular at ! 2 O, if there
exist constants r(! ) and c(! ), such that, for all )), the perturbed generalized
equation
has a unique solution S(!
for all
The theorem below is a variant of Robinson's implicit function theorem ([21], Theorem
2.1).
Theorem 4 ([4], Theorem 2:5) Assume that (5.11) is strongly regular at some ! 2 O, and
that (C1) and (C2) are fulfilled. Then there exist (!) > 0, k(!) > 0, and a mapping S 0
from BW (!; (!)) O into BW (!; (!)) such that, for every ! 2 BW (!; (!)), S 0 (! ) is
the unique solution to (5:13), and
The following theorem is an extension to the generalized equation (5.11) of the well-known
Newton-Kantorovich theorem. It is a direct consequence of Theorem 4.
Theorem 5 ([4], Theorem 2:6) Assume that the hypotheses of Theorem 4 are fulfilled.
Then there exists ~
(!) > 0 such that, for any starting point
(!)), the generalized
Newton method generates a unique sequence (! k ) k convergent to
!, and satisfying
W for all k 1:
We apply these results to set up the generalized Newton method for the generalized equation
(5.10), which is the abstract formulation of the optimality system of (P ).
Lemma 4 The generalized Newton method for solving the optimality system of (P ), defined
by (5:12), proceeds as follows: Let w be the current
iterate. Then the next iterate w is the solution
of the following generalized equation for
Proof. This iteration scheme is a conclusion of the iteration rule (5.12) applied to the
concrete choice of (5.9) for F . The computations are straightforward. We should only
mention the following equivalent transformation, which finally leads to (5.14), (5.15): Due
to the concrete expression for F given in (5.12), the first two relations in
are
Inserting (5.18) in (5.19), (5.20) we obtain (5.14), (5.15). □
To apply Theorem 5 to the concrete generalized equation (5.10), we need that (5.10) be
strongly regular at
and that conditions (C1) and (C2) be satisfied. The assumption
of strong regularity at
w must be assumed here. It has to be checked for each particular
application. In general, the verification of strong regularity requires a detailed analysis. In
the case of the optimal control of parabolic partial differential equations, we refer to the
discussion of the SQP method in Tröltzsch [23]. The strong regularity of an associated
generalized equation was proved there by means of a result on L 1 -Lipschitz stability from
[22]. The associated semilinear elliptic case was studied by Unger [24].
The conditions (C1) and (C2) can be verified with assumptions (A6) and (A7).
Lemma 5 The mapping w 7! F (w) is of class C 1;1 from Y U 4 into b
Y U 4 .
Proof. This statement is an immediate consequence of (A6) and (A7). □
Theorem 6 Let (y;
u) be a local solution of (P ), and let
p be the associated adjoint state.
Assume that the generalized equation: Find (y; u; p) 2 Y U 2 such that
be strongly regular at (y;
p). Then the generalized equation
Find
is strongly regular at
p.
Proof. Let z ) be a perturbation in b
Y U 4 . The linearized generalized
equation for (5.22) at the point
associated with the perturbation e, is
f yy (y
(z
e
f uy (y
where
f yy stands for f yy (y;
u), and the same notations is used for the other mappings. To
obtain the two rst equations of (5.23), we refer to the system (5.19), (5.20), where we
insert
w and replace the left hand side by the perturbation.
Since
z
by straightforward calculations, we can easily prove that the
system (5.23) is equivalent to
f yy (y
e
f uy (y
Now we observe that the first, third, and fourth relations of (5.24) form a subsystem for (y; u; p)
which does not depend on (z; ). Once (y; u; p) is given from this subsystem,
(z; ) is uniquely determined by the remaining two equations. Let us set ~
e
with
~
The subsystem of (5.24) can be rewritten in the form of the generalized equation
f uy (y
The generalized equation (5.26) is the linearization of the generalized equation (5.21) at
p), associated with the perturbation ~ e. Since (5.21) was assumed to be strongly
regular at (y;
p), there exist ~ r r(y;
p) > 0, and a mapping S from
U , such that S(~e) is the unique solution to (5.26) for all ~ e
r),
and
U . Now, we show that (5.22) is strongly regular at
w. For any e, let ~
e be given by (5.25). Then
and there exists
r > 0 such that ~ e belongs to
(0;
r). Dene a
mapping
S from B b
(0;
r) into b
where
ce z S 3 (~e):
Then
S(e) is clearly the unique solution to (5.23). We can easily find c > 0 such that
. The proof is complete. 2
Theorem 6 shows that once the convergence analysis for the standard non-augmented
Lagrange-Newton-SQP method has been done by proving strong regularity of the associated
generalized equation, this analysis does not have to be repeated for analyzing convergence
of the augmented method.
Up to now, we have discussed the Augmented SQP method and the Generalized Newton
method separately. Now we shall show that both methods are equivalent. This equivalence
is used to obtain a convergence theorem for the augmented SQP method.
Theorem 7 Let (y; u) be a local solution of (P ), which satisfies together with the associated
Lagrange multiplier
p the second order sufficient optimality condition (SSC). Define
suppose that the generalized equation (5:21) is
strongly regular at
w. Then there exists
w) > 0 such that, for any starting point
in the neighbourhood BW (
w; r), the ALSQP method defined according to
Theorem 2 and the generalized Newton method defined in Lemma 4 generate the same sequence
of iterates (w n ). Moreover, there is a constant c q (
w) such
that the estimate
is satisfied for all
Proof. First we should mention the simple but decisive fact that
w satisfies the optimality
system of (P ), since (y;
p) has to satisfy the optimality system for (P). Therefore,
it makes sense to determine
w by the generalized Newton method. Let w
be an arbitrary current iterate, which is identical for the ALSQP method
and the generalized Newton method.
In the GNM, w n+1 2 W is found as the unique solution of (5.14)-(5.18). As concerns the
ALSQP method, (y n+1 ; u n+1 ) is obtained as the unique solution of ( d
QP
n+1 ), and the remaining components are determined by (5.2). Therefore, the new iterate
satisfies the associated optimality system (5.4), (5.5)-(5.7), which is obviously identical with
(5.14)-(5.18). It is clear that both methods deliver the same new iterate w n+1 2 W .
All remaining statements of the theorem follow from the convergence Theorem 5. □
6 Numerical results
6.1 Test example
We apply the augmented SQP method to the following one-dimensional nonlinear parabolic
control problem with Stefan-Boltzmann boundary condition:
Z( a y (t)
subject to
u a u(t)
This example is a particular case of problem (E) considered in Section 3, where we
take
(0; ') and make an associated modification of the boundary condition. In an early phase of
this work, we studied the numerical behaviour of the SQP method without augmentation.
Here, we compare both methods. We performed our numerical tests for the following
particular data:
a y
Lemma 6 The pair (y; u) defined by
e 2=3 e 1=3
is a locally optimal solution for (6:27) in C([0; '][0; T ])L 1 (0; T ). The associated adjoint
state (Lagrange multiplier) is given by
cos(x). The triplet (y; u; p) satisfies
the second order sufficient optimality condition (SSC).
Proof. The proof is split into four steps.
Step 1. State equation. It is easy to see that
Now regard the boundary condition at ': The left hand side is
The same holds for the right hand side, since
Step 2. Adjoint equation. Again, the equations
are easy to check. It remains to verify the boundary condition at
It is obvious that
'. The right hand side of the boundary condition has
the same value, since
a y (t)
Step 3. Variational inequality. We must verify that
(which is trivial) and that
a
It is well known that this holds if and only if
(a
e 2=3 e 1=3
where P [0;1] denotes the projection onto [0; 1]. This is obviously verified.
Step 4. Second order sucient condition. The Lagrange function is given by
R
R l
R Ty x (0; t)p(0;
R T(y x (';
R T
Therefore,
Since
p is negative, L 00 (y;
p) is coercive on the whole space Y U , hence (SSC) is
Theorem 8 The pair (y;
u) is a global solution of (E).
Proof. Let (y; u) be any other admissible pair for (E). Due to the first order necessary
condition, we have
p)(y
u)2
Z
Z
From the positivity of
p and of ' 00 (y) (independently of s and y), it follows
that f(y; u) ≥ f(y;
u). □
Next we discuss the strong regularity of the optimality system at (y;
p).
Theorem 9 The optimality system of (E) is strongly regular at (y;
p).
Proof. The triplet (y;
p) satisfies (SSC). Moreover, (E) fits into a more general class
of optimal control problems for semilinear parabolic equations, which was considered in
[23]. It follows from Theorem 5.2 in [22] and Theorem 5.3 in [23] that (SSC) ensures
the strong regularity of the generalized equation being the abstract formulation of the
associated optimality system. We only have to apply this result to problem (6.27). □
Remark 3 A study of [23] reveals that convergence of the standard SQP method can be
proved for arbitrary dimension
of the domain,
assuming a weaker form of (SSC). It requires coercivity
of L 00 only on a smaller subspace that considers strongly active control constraints.
This weaker assumption should be helpful for proving the convergence of the augmented
SQP method as well. We shall not discuss this, since the technical effort will increase
considerably.
Now we obtain from Theorem 7 the following result:
Corollary 2 The Augmented Lagrangian SQP method for (E) is locally quadratically convergent
towards (y;
p).
6.2 Algorithm
For the convenience of the reader, let us consider the problem ( d
QP
corresponding to
our test example. After simplifying we get
Minimize2
subject to
(6.
with
One specific difficulty for solving problem (6:27)-(6:29) is partially related to the control
constraints. But the main difficulty appears also in the unconstrained case, where a (large)
linear system has to be solved. Let us consider for a moment the unconstrained case. If (y; u) is a
solution of problem (6:27)-(6:28), then the optimal triplet
satisfies (6:28), the adjoint equation
and
(a
In practice, we solve ( d
QP
discretization of its optimality system. The result is taken to
solve ( d
QP
). The discretized version of equation (6.31) corresponds to a large-scale linear
system. To solve this system, we need the solutions corresponding to the discretization
of two coupled parabolic equations (the state and the adjoint equations). It is clear that
the accuracy of the Augmented Lagrangian SQP method depends on the accuracy with which
the linear system is solved, and consequently on the numerical methods for the partial differential
equations. In our example, the state and adjoint equations are solved by using a second-order
finite difference scheme (Crank-Nicolson scheme) appropriately modified at the
boundary to maintain second order approximation. The linear system is solved by using
the CGM (conjugate gradient method), with a step length given by the Polak-Ribière
formula.
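To illustrate the kind of time stepping involved, the following sketch advances a 1D heat equation y_t = y_xx with Crank-Nicolson in the interior and homogeneous Neumann conditions handled by a mirrored ghost point. It is a generic illustration only: the actual state and adjoint equations above carry the nonlinear Stefan-Boltzmann boundary terms and the control, which are not reproduced here.

```python
import numpy as np

def crank_nicolson_heat(y0, L, T, nx, nt):
    """Crank-Nicolson time stepping for y_t = y_xx on [0, L] x [0, T]
    with homogeneous Neumann boundary conditions."""
    dx, dt = L / nx, T / nt
    r = dt / (2.0 * dx * dx)
    n = nx + 1
    # Second-difference matrix with mirrored (Neumann) boundary rows.
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i], D[i, i + 1] = 1.0, -2.0, 1.0
    D[0, 0], D[0, 1] = -2.0, 2.0
    D[-1, -1], D[-1, -2] = -2.0, 2.0
    A = np.eye(n) - r * D          # implicit part
    B = np.eye(n) + r * D          # explicit part
    y = np.asarray(y0, dtype=float).copy()
    for _ in range(nt):
        y = np.linalg.solve(A, B @ y)
    return y

x = np.linspace(0.0, np.pi, 101)
y_final = crank_nicolson_heat(np.cos(x), L=np.pi, T=0.1, nx=100, nt=200)
```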
Let us now take into account the constraints (6.29). The optimality condition (6.31) is
replaced by
(a
(a
The management of these restrictions is based on (6.32) and on an projection method
by Bertsekas [7]. (See also [9] and [10] where this method is successfully applied.) More
precisely, we have the following algorithm:
be the vector representing the iterate corresponding to
xed grid. Let " and be xed positive numbers, and let I = f1; ; mg
be the index set associated to w n . (m is the dimension of the vector w n and depends
on the discretization of u n )
and denote by d
the vector representing
the iterate corresponding to the solution of (6.30).
the sets of strongly active inequalities
I
I
where A
is the vector representing a u .
n for all j 2 I
a [ I
b .
Solve the unconstrained problem (6.27)-(6.28) for w j
a [ I
remaining components are fixed due to 4.) Denote by v n the vector representation of
the solution.
denotes the projection onto [u a ;
and go to 2. Otherwise stop
the iteration.
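A compact sketch of this active-set strategy is given below for a quadratic model 0.5 w'Hw - c'w with simple bounds. The thresholds and the inner solver are simplified stand-ins (an unconstrained solve on the free variables via a direct linear solve instead of the CGM), so the code only mirrors the structure of steps 1-6 above, not the actual implementation.

```python
import numpy as np

def projected_active_set_qp(H, c, lo, hi, w0, eps=1e-3, tol=1e-8, max_iter=100):
    """Bertsekas-style projection method for min 0.5 w'Hw - c'w s.t. lo <= w <= hi.

    Strongly active bounds (within eps and with the right gradient sign) are
    fixed; the remaining free variables come from an unconstrained solve; the
    result is projected back onto the box."""
    w = np.clip(np.asarray(w0, dtype=float), lo, hi)
    for _ in range(max_iter):
        g = H @ w - c
        at_lo = (w <= lo + eps) & (g > 0.0)
        at_hi = (w >= hi - eps) & (g < 0.0)
        fixed = at_lo | at_hi
        w_trial = w.copy()
        w_trial[at_lo], w_trial[at_hi] = lo[at_lo], hi[at_hi]
        free = ~fixed
        if free.any():
            # Unconstrained solve on the free variables (stand-in for the CGM).
            rhs = c[free] - H[np.ix_(free, fixed)] @ w_trial[fixed]
            w_trial[free] = np.linalg.solve(H[np.ix_(free, free)], rhs)
        w_new = np.clip(w_trial, lo, hi)
        if np.linalg.norm(w_new - w) < tol:
            return w_new
        w = w_new
    return w

H = np.array([[4.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, 2.0])
w = projected_active_set_qp(H, c, lo=np.zeros(2), hi=np.ones(2), w0=np.zeros(2))
```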
6.3 Numerical tests
In the numerical tests, we focused our interest on the aspects concerning the convergence
for different values of initial data and penalty parameters , and on the rate of convergence.
The programs were written in MATLAB.
Let us first summarize some general observations.
In our example, the augmented Lagrangian algorithm performed well. In particular, the
graphs of the exact solution and that of the numerical solution are (almost) identical.
When compared with the SQP method (corresponding to = 0), the augmented Lagrangian
SQP has the advantage of a more global behavior. Moreover, it is less sensitive
to the start-up values, and is significantly faster than the SQP method for some points.
Graphical accuracy of the computed controls and precision of the optimal value (up to five
digits) are obtained by taking the discretization parameters with respect to the time and
the space equal to 200.
For fixed data, the number of iterations for the CGM and the Augmented SQP turned
out to be independent of the mesh size.
In all that follows, we set
ku
ku
where (u;
z) and are the vectors respectiveely corresponding
to the exact solution of (E), the numerical solution of (E), and the solution of ( d
QP
Moreover, we denote by n t and n x the discretization parameters with respect to the time
and the space. Optimal controls were determined for the following pairs (n x ; n t ): (100,100),
(200,200), (400,400).
Run 1. (SQP method.) The first test corresponds to
The rates for e n , n u , n p , and n z are given in Table 1.
Table 1:
100 1.7782e-06 2.5347e-06 1.6610e-06 0.2372 0.3653 1.0575 1.3886
200 1.3725e-06 2.9337e-06 1.0724e-06 0.2472 0.3663 1.0585 0.9980
The SQP method shows good convergence for this initial point. Four iterations were needed
to get the result.
Run 2. (ALSQP method.) The second test corresponds to the point
(0:5; 0:5; 0:5), with z
Table 2:
100 1.5391e-06 2.6824e-06 1.5013e-06 1.4459e-06 0.0150 1.0004 2.2378
200 1.2318e-06 3.8783e-07 5.2251e-07 5.5521e-07 0.0149 0.9256 1.0015
The ALSQP method shows very good convergence for this choice. Convergence could always
be achieved by fixing and using other values of z 0 and . However, the number
of iterations and the speed of the method depend on these choices. As shown in Table 3,
three iterations for the ALSQP method were needed, instead of four for the SQP method.
The number of iterations for the CGM, the SQP and the ALSQP methods is independent
of the mesh-size. The exact value for the cost functional is
In Table 3, we give
the values of the cost functional corresponding to the different steps for
Table 3:
SQP method
Iter f n CGM iter
ALSQP method
Iter f n CGM iter
Figure 1: Controls for Run 1 and Run 2
Figure 2: States y('; t) for Run 1 and Run 2
Figure 3: Adjoint states p('; t) for Run 1 and Run 2
In Figures 1, 2, and 3, we compare the behavior of the control, the state, and the adjoint
state obtained by taking 1. It is clear that in the case of the
ALSQP method, the second iteration gives a good approximation to the optimal control,
the optimal state, and the optimal adjoint state.
Run 3. The last test corresponds to the initial point given by
Table 4:
100 9.9421e-06 2.7989e-06 4.0979e-06 3.0853e-06
200 1.1864e-05 4.7523e-06 5.0999e-06 2.3842e-06
400 1.2167e-05 5.2817e-06 5.3307e-06 2.2146e-06
200 0.0404 0.3975 1.3537 1.1471
For this initial point, the SQP method (corresponding to = 0) does not converge, while
the ALSQP method converges for many choices of z 0 . In our tests, the point which gives
the best result is given by z 1. For this choice, four iterations are needed
with 2, 5, 6 and 9 CG steps. The different rates are given in Table 4, and the behavior of
the solution is shown in Figure 4.
Figure 4: Controls, states, and adjoint states for Run 3
Remark 4 The numerical results stated in Tables 1, 2, and 3 were obtained for a fixed
mesh-size (fixed grid). However, we also implemented the ALSQP method with adaptive
mesh size, i.e. we started with a coarse grid and used the obtained results as startup values
for the next finer grid. This method is significantly faster, and delivers essentially the same
results.
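Schematically, the adaptive-mesh variant mentioned in the remark can be organized as follows; `solve_on_grid` and `interpolate` are hypothetical routines standing for the discretized ALSQP solver and for prolongation of a coarse solution to the finer grid.

```python
def nested_grids_alsqp(solve_on_grid, interpolate, grids=(50, 100, 200),
                       initial_guess=None):
    """Solve on a coarse grid first and warm-start each finer grid with the
    interpolated result of the previous one (schematic)."""
    guess = initial_guess
    solution, prev_n = None, None
    for n in grids:
        if solution is not None:
            guess = interpolate(solution, prev_n, n)  # prolong coarse result
        solution = solve_on_grid(n, guess)
        prev_n = n
    return solution
```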
--R
A Lagrange multiplier theorem for control problems with state constraints
The Lagrange-Newton method for infinite-dimensional optimization problems
The Lagrange Newton method for in
Discretization and mesh independence of Newton's method for generalized equations.
Analysis and Control of Nonlinear In
Augmented Lagrangian techniques for elliptic state constrained optimal control problems
Projected Newton methods for optimization problems with simple constraints
Pontryagin's principle for state-constrained boundary control problems of semilinear parabolic equations
Numerical solution of a constrained control problem for a phase
Augmented Lagrangian-SQP methods for nonlinear optimal control problems of tracking type
Augmented Lagrangian-SQP methods in Hilbert spaces and application to control in the coefficients problems
Augmented Lagrangian-SQP techniques and their approximations
"Linear and quasilinear equations of parabolic type"
First and second order su
Hamiltonian Pontryagin's principles for control problems governed by semilinear parabolic equations
Strongly regular generalized equations.
Hinreichende Optimalit
Distributed control problems for the Burgers equation.
--TR
Multiplier methods for nonlinear optimal control
Second-order sufficient optimality conditions for a class of nonlinear parabolic boundary control problems
Augmented Lagrangian--SQP Methods for Nonlinear Optimal Control Problems of Tracking Type
Augmented Lagrangian Techniques for Elliptic State Constrained Optimal Control Problems
Pontryagin's Principle for State-Constrained Boundary Control Problems of Semilinear Parabolic Equations
On the Lagrange--Newton--SQP Method for the Optimal Control of Semilinear Parabolic Equations
Mesh-Independence for an Augmented Lagrangian-SQP Method in Hilbert Spaces
Distributed Control Problems for the Burgers Equation
--CTR
Hans D. Mittelmann, Verification of Second-Order Sufficient Optimality Conditions for Semilinear Elliptic and Parabolic Control Problems, Computational Optimization and Applications, v.20 n.1, p.93-110, October 2001 | optimal control;two-norm discrepancy;control constraints;generalized equation;semilinear parabolic equation;augmented Lagrangian SQP method in Banach spaces;generalized Newton method |
606895 | Large-Scale Active-Set Box-Constrained Optimization Method with Spectral Projected Gradients. | A new active-set method for smooth box-constrained minimization is introduced. The algorithm combines an unconstrained method, including a new line-search which aims to add many constraints to the working set at a single iteration, with a recently introduced technique (spectral projected gradient) for dropping constraints from the working set. Global convergence is proved. A computer implementation is fully described and a numerical comparison assesses the reliability of the new algorithm. | Introduction
The problem considered in this paper consists in the minimization of a
smooth function with bounds on the variables. The feasi-
Department of Computer Science IME-USP, University of São Paulo, Rua do Matão
1010, Cidade Universitária, 05508-090, São Paulo SP, Brazil. This author was supported
by PRONEX-Optimization 76.79.1008-00, FAPESP (Grants 99/08029-9 and 01/04597-4)
and CNPq (Grant 300151/00-4). e-mail: egbirgin@ime.usp.br
y Department of Applied Mathematics IMECC-UNICAMP, University of Campinas,
This author was supported by PRONEX-
Optimization 76.79.1008-00, FAPESP (Grant 01/04597-4), CNPq and FAEP-UNICAMP.
e-mail: martinez@ime.unicamp.br
ble
set
is dened
by
Box-constrained minimization algorithms are used as subalgorithms for solving
the subproblems that appear in many augmented Lagrangian and penalty
methods for general constrained optimization. See [11, 12, 16, 17, 18, 19,
20, 21, 26, 28, 31]. A very promising novel application is the reformulation
of equilibrium problems. See [1] and references therein. The methods introduced
in [11] and [26] are of trust-region type. For each iterate x k 2
a quadratic approximation of f is minimized in a trust-region box. If the
objective function value at the trial point is su-ciently smaller than f(x k ),
the trial point is accepted. Otherwise, the trust region is reduced. The
dierence between [11] and [26] is that, in [11], the trial point is in the face
dened by a \Cauchy point", whereas in [26] the trial point is obtained
by means of a specic box-constrained quadratic solver, called QUACAN.
See [2, 15, 23, 25, 31] and [13] (p. 459). Other trust-region methods for
box-constrained optimization have been introduced in [3, 29].
QUACAN is an active-set method that uses conjugate gradients within
the faces, approximate internal-face minimizations, projections to add constraints
to the active set and an \orthogonal-to-the-face" direction to leave
the current face when an approximate minimizer in the face is met. In [17]
a clever physical interpretation for this direction was given.
Numerical experiments in [16] suggested that the e-ciency of the algorithm
[26] relies, not in the trust-region strategy, but in the strategy of
QUACAN for dealing with constraints. This motivated us to adapt the
strategy of QUACAN to general box-constrained problems. Such adaptation
involves two main decisions. On one hand, one needs to choose an
unconstrained minimization algorithm to deal with the objective function
within the faces. On the other hand, it is necessary to dene robust and
e-cient strategies to leave faces and to add active constraints. Attempts for
the rst decision have been made in [6] and [10]. In [10] a secant multipoint
minimization algorithm is used and in [6] the authors use the second-order
minimization algorithm of Zhang and Xu [36].
In this paper we adopt the leaving-face criterion of [6], that employs the
spectral projected gradients defined in [7, 8]. See also [4, 5, 32, 33, 34].
For the internal minimization in the faces we introduce a new general algorithm
with a line search that combines backtracking and extrapolation.
The compromise in every line-search algorithm is between accuracy in the
localization of the one-dimensional minimizer and economy in terms of functional
evaluations. Backtracking-like line-search algorithms are cheap but,
sometimes, tend to generate excessively small steps. For this reason, back-tracking
is complemented with a simple extrapolation procedure here. The
direction chosen at each step is arbitrary, provided that an angle condition
is satised.
In the implementation described in this paper, we suggest to choose
the direction using the truncated-Newton approach. This means that the
search vector is an approximate minimizer of the quadratic approximation
of the function in the current face. We use conjugate gradients to nd
this direction, so the rst iterate is obviously a descent direction, and this
property is easily monitored through successive conjugate gradient steps.
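For reference, one iteration of the spectral projected gradient scheme of [7, 8] can be sketched as follows; P is the projection onto the box, and the safeguards and the nonmonotone line-search details are simplified assumptions relative to the actual SPG method.

```python
import numpy as np

def spg_step(x, g, x_prev, g_prev, project, f, lmin=1e-10, lmax=1e10,
             gamma=1e-4, sigma=0.5, f_ref=None):
    """One (simplified) spectral projected gradient step:
    spectral steplength + projected gradient direction + Armijo backtracking."""
    s, y = x - x_prev, g - g_prev
    sy = float(s @ y)
    lam = min(lmax, max(lmin, float(s @ s) / sy)) if sy > 0 else lmax
    d = project(x - lam * g) - x                 # projected gradient direction
    f_ref = f(x) if f_ref is None else f_ref     # (nonmonotone reference value)
    alpha = 1.0
    while f(x + alpha * d) > f_ref + gamma * alpha * float(g @ d):
        alpha *= sigma                           # backtracking
    return x + alpha * d

# Example on a quadratic over the box [0,1]^2:
Q, b = np.array([[2.0, 0.0], [0.0, 4.0]]), np.array([1.0, 1.0])
f = lambda z: 0.5 * z @ Q @ z - b @ z
grad = lambda z: Q @ z - b
proj = lambda z: np.clip(z, 0.0, 1.0)
x0, x1 = np.array([0.9, 0.9]), np.array([0.8, 0.7])
x2 = spg_step(x1, grad(x1), x0, grad(x0), proj, f)
```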
The present research is organized as follows. In Section 2 we describe an
\unconstrained" minimization algorithm that deals with the minimization
of a function on a box. The algorithm uses the new line-search technique.
Due to this technique it is possible to prove that either the method nishes
at a point on the boundary where, perhaps, many constraints are added,
or it converges to a point in the box where the gradient vanishes. The
box-constrained algorithm is described in Section 3. Essentially, we use
the algorithm of Section 2 to work within the \current face" and spectral
projected gradients [7] to leave constraints. The spectral projected gradient
technique also allows one to leave many bounds and to add many others to
the working set at a single iteration. This feature can be very important
for large-scale calculations. In this section we prove the global convergence
of the box-constrained algorithm. The computational description of the
code (GENCAN) is given in Section 4. In Section 5 we show numerical
experiments using the CUTE collection. In Section 6 we report experiments
using some very large problems (up to 10 7 variables). Finally, in Section 6
we make nal comments and suggest some lines for future research.
In this section we assume that f : IR
ug:
The set B will represent each of the closed faces
of
in Section 3. The
dimension n in this section is the dimension of the reduced subspace of the
Section 3 and the gradient of this section is composed by the derivatives
with respect to free variables in Section 3. We hope that using the notation
rf in this section will not lead to confusion.
From now on, we denote
Our objective is to dene a general iterative algorithm that starts in the
interior of B and, either converges to an unconstrained stationary point, or
nishes in the boundary of B having decreased the functional value. This
will be the algorithm used \within the faces" in the box-constrained method.
Algorithm 2.1 is based on line searches with Armijo-like conditions and
extrapolation. Given the current point x k and a descent direction d k , we
nish the line search if x k +d k satises a su-cient descent criterion and if the
directional derivative is su-ciently larger than hg(x k ); d k i. If the su-cient
descent criterion does not hold, we do backtracking. If we obtained su-cient
descent but the increase of the directional derivative is not enough, we try
extrapolation.
Let us explain why we think that this philosophy is adequate for large-scale
box-constrained optimization.
1. Pure backtracking is enough for proving global convergence of many optimization algorithms. However, accepting the first trial point when it satisfies an Armijo condition can lead to very small steps in critical situations. Therefore, steps larger than unity must be tried when some indicator says that this is worthwhile.
2. If the directional derivative is sufficiently larger than <g(x^k), d^k>, we consider that there is not much to gain by increasing the steplength in the direction of d^k and, so, we accept the unit steplength provided it satisfies the Armijo condition. This is reasonable since, usually, the search direction contains some amount of second-order information that makes the unit steplength desirable from the point of view of preserving a satisfactory order of convergence.
3. If the unit steplength does not satisfy the Armijo condition, we do backtracking. In this case we judge that it is not worthwhile to compute gradients of the new trial points, which would be discarded if the point is not accepted.
4. Extrapolation is especially useful in large-scale problems, where it is important to try to add as many constraints as possible to the working set. So, we extrapolate in a rather greedy way, multiplying the steplength by a fixed factor while the function value decreases.
5. We think that the algorithm presented here is the simplest way in which extrapolation devices can be introduced with a reasonable balance between cost and efficiency. It is important to stress that this line search can be coupled with virtually any minimization procedure that computes descent directions.
For all z in IR^n, the Euclidean projection of z onto a convex set S will be denoted P_S(z). In this section, we denote P(y) := P_B(y). The symbol ||.|| represents the Euclidean norm throughout the paper.
Algorithm 2.1: Line-search based algorithm
The algorithm starts with x 0 2 Int(B). The non-dimensional parameters
are given.
We also use the small tolerances abs ; rel > 0. Initially, we set k 0.
Step 1. Computing the search direction
Step 1.1 If kg k
Step 1.2 Compute such that
Step 2. Line-search decisions
Step 2.1 Compute
set minf
then go to Step 2.2
else go to Step 2.3.
Step 2.2 (At this point we have x k
If
take and go to Step 5
else go to Step 3 (Extrapolation)
else go to Step 4 (Backtracking).
Step 2.3 (At this point we have x k
If
take k max and x
such that f(x k+1 ) and go to Step 5
(In practice, such a point is obtained performing Step 3
of this algorithm (Extrapolation).)
else go to Step 4 (Backtracking).
Step 3. Extrapolation
Step 3.1 If ( < max and N > max ) then set trial max
else set trial N.
Step 3.2 If ( max and kP
take
the execution of Algorithm 2.1.
Step 3.3 If (f(P
take to Step 5
else set trial and go to Step 3.1.
Step 4. Backtracking
Step 4.1 Compute new .
Step 4.2 If (f(x k
take and go to Step 5
else go to Step 4.1.
Step 5. If k max terminate the execution of Algorithm 2.1
else set to Step 1.
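To make the line-search strategy concrete, the following Python sketch mirrors the logic described in this section: accept the unit step when an Armijo test and a directional-derivative test hold, extrapolate greedily (with projections onto the box) when further decrease seems likely, and backtrack otherwise. The parameter names and default values (gamma, beta, theta, N, eps_dist) are illustrative assumptions, not the exact constants of Algorithm 2.1.

import numpy as np

def project_box(z, l, u):
    """Euclidean projection of z onto the box {x : l <= x <= u}."""
    return np.minimum(u, np.maximum(l, z))

def line_search(f, g, x, d, l, u, gamma=1e-4, beta=0.5, theta=1e-6, N=2.0, eps_dist=1e-10):
    """One line search along a descent direction d in the spirit of Algorithm 2.1."""
    fx, gd = f(x), g(x) @ d                  # current value and directional derivative
    assert gd < 0, "d must be a descent direction"
    def inside(y):
        return np.all(y > l) and np.all(y < u)
    trial = x + d
    if inside(trial) and f(trial) <= fx + gamma * gd:
        # Armijo holds for the unit step; check the directional-derivative test.
        if g(trial) @ d >= theta * gd:       # slope has increased enough: accept
            return trial
        # Slope is still quite negative: try larger steps (extrapolation).
        return extrapolate(f, x, d, 1.0, l, u, N, eps_dist)
    if not inside(trial) and f(project_box(trial, l, u)) < fx:
        # Entry point leaves the box but already decreases f: extrapolate.
        return extrapolate(f, x, d, 1.0, l, u, N, eps_dist)
    # Backtracking: shrink the step until the Armijo condition holds.
    alpha = beta
    while f(project_box(x + alpha * d, l, u)) > fx + gamma * alpha * gd:
        alpha *= beta
    return project_box(x + alpha * d, l, u)

def extrapolate(f, x, d, alpha, l, u, N, eps_dist):
    """Greedy extrapolation: multiply the steplength by N while the projected
    trial point keeps decreasing f; stop when no decrease is obtained or when
    two consecutive projected points are (numerically) identical."""
    current = project_box(x + alpha * d, l, u)
    while True:
        trial = project_box(x + N * alpha * d, l, u)
        if np.linalg.norm(trial - current) <= eps_dist or f(trial) >= f(current):
            return current
        current, alpha = trial, N * alpha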
Remarks. Let us explain here the main steps of Algorithm 2.1 and their motivations. The algorithm performs line searches along directions that satisfy the angle-cosine condition (2). In general, this line search will be used with directions that possess some second-order information, so that the "natural" step must be initially tested and accepted if the sufficient-descent and directional-derivative conditions ((3) and (4)) are satisfied.
The first test, at Step 2.1, asks whether x^k + d^k is interior to the box. If this is not the case, but the trial point still produces a functional decrease, we try to obtain smaller functional values by multiplying the step by a fixed factor and projecting onto the box. This procedure is called "Extrapolation". If x^k + d^k is not interior and no decrease is obtained, we do backtracking.
If x^k + d^k is interior but the Armijo condition (3) does not hold, we also do backtracking. Backtracking stops when the Armijo condition (6) is fulfilled. If (3) holds, we test the directional-derivative condition (4). As we mentioned above, if (4) is satisfied too, we accept x^k + d^k as the new point. However, if (3) holds and (4) does not, we judge that, very likely, taking larger steps along the direction d^k will produce further decrease of the objective function. So, in this case we also do Extrapolation.
In the Extrapolation procedure we try successive projections of x^k + alpha d^k onto the box, with increasing values of alpha. If the entry point x^k + d^k is interior but x^k + N d^k is not, we make sure that the point corresponding to the maximum feasible step is tested first. The extrapolation finishes when decrease of the function is not obtained anymore or when the distance between two consecutive projected trial points is negligible.
The iteration of Algorithm 2.1 finishes at Step 5. If the corresponding iterate x^{k+1} is on the boundary of B, the algorithm stops, having encountered a boundary point where the functional value decreased with respect to all previous ones. If x^{k+1} is in the interior of B the execution continues, increasing the iteration number.
The flow diagrams in Figures 1 and 2 help to understand the structure of the line-search procedure.
Figure 1: Line Search procedure (flow diagram).
Figure 2: Extrapolation strategy (flow diagram).
In the following theorem we prove that any sequence generated by Algorithm 2.1 either stops at an unconstrained stationary point, or stops on the boundary of B, or generates, in the limit, unconstrained stationary points.
Theorem 2.1. Algorithm 2.1 is well defined and generates points with strictly decreasing functional values. If {x^k} is a sequence generated by Algorithm 2.1, one of the following possibilities holds.
(i) The sequence stops at x^k, with g(x^k) = 0.
(ii) The sequence stops at a point x^{k+1} on the boundary of B.
(iii) The sequence is infinite, it has at least one limit point, and every limit point x* satisfies g(x*) = 0.
Proof. Let us prove first that the algorithm is well defined and that it generates a sequence with strictly decreasing function values. To see that it is well defined we prove that the loops of Steps 3 and 4 necessarily finish in finite time. In fact, at Step 3 we multiply the nonzero direction d^k by a number greater than one, or we take the maximum allowable feasible step. Therefore, eventually, the boundary is reached or the increase condition (5) is met. The loop of Step 4 is a classical backtracking loop and finishes because of well-known directional derivative arguments. See [14]. On exit, the algorithm always requires that f(x^{k+1}) < f(x^k), so the sequence of functional values is strictly decreasing.
It remains to prove that, if neither (i) nor (ii) hold, then any cluster point x* of the generated sequence satisfies g(x*) = 0. Let K_1 be an infinite subset of IN such that
lim_{k in K_1} x^k = x*.
Suppose first that ||s^k|| is bounded away from zero for k in K_1. Therefore, there exists delta > 0 such that ||s^k|| >= delta for all k in K_1. By (3) or (6) we have that
for all k in K_1. Therefore, by (2),
By the continuity of f this implies that lim_{k in K_1}
Suppose now that ||s^k|| is not bounded away from zero for k in K_1. So, there exists K_2, an infinite subset of K_1, such that lim_{k in K_2} ||s^k|| = 0.
Let K_3 be the subset of K_2 of indices such that alpha_k is computed at Step 2.2 for all k in K_3. Analogously, let K_4 be the subset of K_2 of indices such that alpha_k is computed at Step 3 for all k in K_4, and let K_5 be the subset of K_2 of indices such that alpha_k is computed at Step 4 for all k in K_5. We consider three possibilities:
(i) K_3 is infinite.
(ii) K_4 is infinite.
(iii) K_5 is infinite.
Consider, first, the case (i). By (4) we have that
and
ks
ks
for all k in K_3. Since K_3 is infinite, taking a convergent subsequence,
taking limits in (7) and using continuity, we obtain that
Since the corresponding parameter belongs to (0, 1), this implies that <g(x*), d> >= 0. But, by (2) and continuity, this yields g(x*) = 0.
Consider, now, Case (iii). In this case, K_5 is infinite. For all k in K_5
there exists s 0
k such that
and
ks 0
ks k
By (10), lim_{k in K_5} ||s'_k|| = 0 and,
by (9), we have, for all k in K_5,
So, by the Mean-Value theorem, there exists xi_k in [0, 1] such that
for all k in K_5. Dividing by ||s'_k|| and taking limits for a convergent subsequence (s'_k / ||s'_k|| -> d) we obtain that
this inequality is similar to (8). So, g(x*) = 0 follows from the same arguments.
Consider, now, Case (ii). Since we are considering cases where an infinite
sequence is generated it turns out that, in (5), P
Moreover, by Step 3.1, trial N and P
Therefore, for all k 2 K 4 , writing 0
we have that 0
and
Therefore, by the Mean-Value theorem, for all k in K_4 there exists xi_k in [0, alpha'_k] such that
Thus, for all k 2 K 4 , since 0
we have that
dividing by ||d^k|| and taking a convergent subsequence of d^k / ||d^k||, we obtain:
<g(x*), d> >= 0.
But, by (2), taking limits we get that <g(x*), d> is bounded above by a negative multiple of ||g(x*)||. This implies that g(x*) = 0.
This completes the proof.
3 The box-constrained algorithm
The problem considered in this section is
Minimize f(x) subject to x in Omega,        (12)
where Omega is given by (1).
As in [23], let us divide the feasible set Omega into disjoint open faces, as follows. For all I, let F_I denote the corresponding open face of Omega. We denote by V_I the smallest affine subspace that contains F_I and by S_I the parallel linear subspace to V_I. The (continuous) projected gradient at x is defined as
g_P(x) := P_Omega(x - g(x)) - x.
For all x in F_I, we define
g_I(x) := P_{S_I}[g_P(x)].
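For a box, both quantities are cheap to evaluate. The following sketch uses the convention g_P(x) = P(x - g(x)) - x written above; treating the components at active bounds as fixed is an implementation assumption.

import numpy as np

def projected_gradient(x, g, l, u):
    """Continuous projected gradient g_P(x) = P(x - g(x)) - x for the box [l, u]."""
    return np.clip(x - g, l, u) - x

def internal_gradient(x, g, l, u):
    """Projection of g_P(x) onto the subspace parallel to the face containing x:
    components associated with active bounds are set to zero."""
    gp = projected_gradient(x, g, l, u)
    free = (x > l) & (x < u)          # free variables of the current face
    gi = np.zeros_like(gp)
    gi[free] = gp[free]
    return gi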
The main algorithm considered in this paper is described below.
Algorithm 3.1: GENCAN
Assume that x^0 in Omega is an arbitrary initial point, eta in (0, 1) and 0 < lambda_min <= lambda_max are algorithmic parameters. Let F_I be the face that contains the current iterate x^k. Assume that g_P(x^k) is nonzero (otherwise the algorithm terminates). At the main iteration of the algorithm we perform the test
||g_I(x^k)|| >= eta ||g_P(x^k)||.        (13)
If (13) takes place, we judge that it is convenient that the new iterate belongs to the closure of F_I and, so, we compute x^{k+1} doing one iteration of Algorithm 2.1, with the set of variables restricted to the free variables in F_I. So, the set B of the previous section corresponds to the closure of F_I here.
If (13) does not hold, we decide that some constraints should be abandoned and, so, the new iterate x^{k+1} is computed doing one iteration of the SPG method described by Algorithm 3.2. In this case, before the computation of x^{k+1} we compute the spectral gradient coefficient lambda_k in the following way.
Otherwise, dene
and
Algorithm 3.2 is the algorithm used when it is necessary to leave the
current face, according to the test (13).
Algorithm 3.2: SPG
Compute x^{k+1} as the next iterate of a monotone SPG iteration [7, 8] with the spectral step lambda_k. Namely, we define the search direction d^k as
d^k = P(x^k - lambda_k g(x^k)) - x^k,
and we compute x^{k+1} = x^k + alpha_k d^k in such a way that sufficient decrease is obtained, trying alpha_k = 1 first and, perhaps, reducing this coefficient by means of a safeguarded quadratic interpolation procedure.
Remark. Observe that x^{k+1} does not belong to the closure of F_I if x^k in F_I and x^{k+1} is computed by Algorithm 3.2. In this case, (13) does not hold, so ||g_I(x^k)|| < eta ||g_P(x^k)||. Since the components corresponding to the free variables of g_I(x^k) and g_P(x^k) are the same, this means that g_P(x^k) has nonnull components corresponding to fixed variables. Therefore, moving from x^k in the direction d^k changes variables that are fixed in F_I, so x^k + t d^k does not belong to the closure of F_I for all t > 0. But, according to the SPG iteration, x^{k+1} = x^k + alpha_k d^k for some alpha_k > 0. This implies that x^{k+1} does not belong to the closure of F_I.
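To fix ideas, the following Python sketch shows how one outer iteration of a GENCAN-like method could be organized: the internal-gradient test decides between an inner step within the current face and an SPG step that may leave it. The parameter values, the safeguards in the spectral coefficient, and the inner_step callback (standing in for one iteration of Algorithm 2.1) are assumptions made for illustration only.

import numpy as np

def spectral_coefficient(x, x_prev, g, g_prev, lam_min=1e-3, lam_max=1e3):
    """Safeguarded spectral (Barzilai-Borwein) coefficient; the safeguards and
    the fallback values are illustrative choices."""
    if x_prev is None:
        return 1.0
    s, y = x - x_prev, g - g_prev
    sty = s @ y
    if sty <= 0.0:
        return lam_max
    return float(np.clip(s @ s / sty, lam_min, lam_max))

def spg_step(f, grad, x, lam, l, u, gamma=1e-4):
    """One monotone SPG iteration: spectral projected gradient direction plus a
    backtracking (Armijo) line search starting from the unit step."""
    g = grad(x)
    d = np.clip(x - lam * g, l, u) - x          # SPG search direction
    fx, gd, alpha = f(x), g @ d, 1.0
    while f(x + alpha * d) > fx + gamma * alpha * gd:
        alpha *= 0.5                            # quadratic interpolation could be used instead
    return x + alpha * d

def gencan_iteration(f, grad, x, x_prev, g_prev, l, u, eta=0.1, inner_step=None):
    """Decide, via test (13), whether to stay in the current face (inner step,
    supplied by inner_step) or to leave it with one SPG iteration."""
    g = grad(x)
    gp = np.clip(x - g, l, u) - x               # continuous projected gradient
    free = (x > l) & (x < u)
    gi = np.where(free, gp, 0.0)                # internal gradient of the current face
    if np.linalg.norm(gi) >= eta * np.linalg.norm(gp):
        return inner_step(x)                    # stay in the closure of the current face
    lam = spectral_coefficient(x, x_prev, g, g_prev)
    return spg_step(f, grad, x, lam, l, u)      # leave the face with one SPG iteration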
We finish this section by giving some theoretical results. Roughly speaking, we prove that the algorithm is well defined and that a Karush-Kuhn-Tucker point is computed up to an arbitrary precision. Moreover, under dual nondegeneracy, the (infinite) algorithm identifies the face to which the limit belongs in a finite number of iterations.
Theorem 3.1. Algorithm 3.1 is well defined.
Proof. This is a trivial consequence of the fact that Algorithm 2.1 and Algorithm 3.2 (the SPG algorithm [7]) are well defined.
Theorem 3.2. Assume that {x^k} is generated by Algorithm 3.1. Suppose that there exists k_0 in {0, 1, 2, ...} such that x^k in F_I for all k >= k_0. Then every limit point of {x^k} is first-order stationary.
Proof. In this case, x^{k+1} is computed by Algorithm 2.1 for all k >= k_0. Thus, by Theorem 2.1, the gradient with respect to the free variables tends to zero. By a straightforward projection argument, it follows that ||g_I(x^k)|| -> 0. Since (13) holds, this implies that ||g_P(x^k)|| -> 0, so every limit point is first-order stationary.
Theorem 3.3. Suppose that, for all k in {0, 1, 2, ...} and for every face F_I, there exists k' >= k such that x^{k'} does not belong to F_I. Then there exists a limit point of {x^k} that is first-order stationary.
Proof. See Theorem 3.3 of [6].
Theorem 3.4. Suppose that all the stationary points of (12) are nondegenerate in the dual sense (the partial derivatives of f corresponding to active bounds do not vanish). Then the hypothesis of Theorem 3.2 (and, hence, its thesis) must hold.
Proof. See Theorem 3.4 of [6].
Theorem 3.5. Suppose that {x^k} is a sequence generated by Algorithm 3.1 and let epsilon be an arbitrary positive number. Then there exists k in {0, 1, 2, ...} such that ||g_P(x^k)|| <= epsilon.
Proof. This result is a direct consequence of Theorems 3.2 and 3.3.
4 Implementation
At iteration k of Algorithm 2.1 the current iterate is x^k and we are looking for a direction d^k satisfying condition (2). We use a truncated-Newton approach to compute this direction. To solve the Newtonian system we call Algorithm 4.1 (described below) with A = grad^2 f(x^k) and b = -g(x^k).
The following algorithm applies to the problem
Minimize q(s) subject to ||s|| <= Delta and l <= s <= u,        (14)
where q is the quadratic model of f around x^k. The initial approximation to the solution of (14) is s^0 = 0. The algorithm finds a point s* which is a solution or satisfies q(s*) < q(s^0). Perhaps, the final point is on the boundary of the region defined by ||s|| <= Delta and l <= s <= u.
Algorithm 4.1: Conjugate gradients
The parameters << 1 and k max 2 IN are given. The algorithm starts with
Step 1. Test stopping criteria
set s
Step 2. Compute conjugate gradient direction
Step 2.1 If
else compute
Step 2.2 If (p T
Step 3. Compute step
Step 3.1 Compute
ug.
Step 3.2 Compute
Step 3.3 If (
If (
If (
Step 4. Compute new iterate
Step 4.1 Compute
Step 4.2 If (b T s k+1 > kbkks k+1
set s
Step 4.3 If (
set s
Step 5. Compute
set to Step 1.
This algorithm is a modification of the one presented in [27] (p. 529) for symmetric positive definite matrices A and without constraints. The modifications are the following:
At Step 2.2 we test if p^k is a descent direction at s^k, i.e., if <p^k, grad q(s^k)> < 0. To force this condition we multiply p^k by -1 if necessary. If the matrix-vector products are computed exactly, this safeguard is not necessary. However, in many cases the matrix-vector product A p^k is replaced by a finite-difference approximation. For this reason, we perform the test in order to guarantee that the quadratic decreases along the direction p^k.
At Step 3.3 we test whether (p^k)^T A p^k <= 0 (nonpositive curvature). If this inequality holds, the step alpha_k in the direction p^k is computed as the minimum among the conjugate-gradient step and the maximum positive step preserving feasibility. If this happens and we are at the first iteration of CG, we take the maximum feasible step. In this way CG will stop with a point that satisfies the angle condition of Step 4.2. If we are not at iteration zero of CG, we keep the current approximation to the solution of (14) obtained so far.
At Step 4.2 we test whether the angle condition (2) is satisfied by the new iterate or not. If this condition is not fulfilled, we stop the algorithm with the previous iterate. We also stop the algorithm if the boundary of the feasible set is achieved (Step 4.3).
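The following sketch shows the kind of truncated conjugate-gradient loop described above: standard CG recurrences plus early termination on (numerically) nonpositive curvature, on loss of the angle property with respect to b, or when the bounds of the subproblem are hit. The tolerances, the handling of nonpositive curvature, and the omission of the trust-region constraint ||s|| <= Delta are simplifying assumptions.

import numpy as np

def max_feasible_step(s, p, l, u):
    """Largest t >= 0 with l <= s + t*p <= u (componentwise)."""
    t = np.inf
    for pi, si, li, ui in zip(p, s, l, u):
        if pi > 0:
            t = min(t, (ui - si) / pi)
        elif pi < 0:
            t = min(t, (li - si) / pi)
    return t

def truncated_cg(hessvec, b, l, u, tol=1e-5, maxit=100, theta=1e-6):
    """Approximately minimize q(s) = 0.5 s'As - b's with l <= s <= u,
    where A is available only through the product hessvec(p) ~ A p."""
    n = b.size
    s, r = np.zeros(n), b.copy()            # r = b - A s = -grad q(s)
    p = r.copy()
    for _ in range(maxit):
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        Ap = hessvec(p)
        curv = p @ Ap
        t_max = max_feasible_step(s, p, l, u)
        if curv <= 0.0:                      # nonpositive curvature: move to the boundary
            return s + t_max * p if np.isfinite(t_max) else s
        alpha = min((r @ r) / curv, t_max)
        s_new = s + alpha * p
        # keep the angle (descent) property with respect to b
        if b @ s_new < theta * np.linalg.norm(b) * np.linalg.norm(s_new):
            return s
        s = s_new
        if alpha >= t_max:                   # boundary of the box reached
            return s
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p, r = r_new + beta * p, r_new
    return s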
The convergence criterion for the conjugate-gradient algorithm is dynamic. It varies linearly with the logarithm of the norm of the continuous projected gradient, beginning with the value epsilon_i and finishing with epsilon_f. The resulting CG tolerance is a log-linear interpolation between epsilon_i and epsilon_f, where epsilon is the tolerance used in the stopping criterion ||g_P(x)||_inf < epsilon of Algorithm 3.1.
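One plausible way to realize such a log-linear interpolation is sketched below; the exact formula of the paper is not reproduced, and the interpolation endpoints and clipping are assumptions (the dynamic maximum number of CG iterations described next can be handled analogously).

import numpy as np

def dynamic_cg_tolerance(gp_norm, gp0_norm, eps, eps_i=0.1, eps_f=1e-5):
    """CG tolerance interpolated linearly in log(||g_P||): equals eps_i at the
    initial projected-gradient norm and eps_f when ||g_P|| reaches eps."""
    t = (np.log10(gp_norm) - np.log10(gp0_norm)) / (np.log10(eps) - np.log10(gp0_norm))
    t = min(max(t, 0.0), 1.0)
    return 10.0 ** ((1.0 - t) * np.log10(eps_i) + t * np.log10(eps_f))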
The parameter k_max is the maximum number of CG iterations for each call of the conjugate-gradient algorithm. It also varies dynamically, in such a way that more iterations are allowed at the end of the process than at the beginning. The reason is that we want to invest a larger effort in solving quadratic subproblems when we are close to the solution than when we are far from it; as for the tolerance, the actual value of k_max depends logarithmically on the norm of the projected gradient.
In the incremental-quotient version of GENCAN, grad^2 f(x^k) is not computed and the matrix-vector products grad^2 f(x^k) y are approximated by the incremental quotient
[ g(x^k + t y) - g(x^k) ] / t,        (15)
with a small t > 0. In fact, only the components corresponding to the free variables are computed, and the existence of fixed variables is conveniently exploited in (15).
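A minimal sketch of this incremental quotient, restricted to the free variables, is given below; the choice of the increment t is an assumption.

import numpy as np

def hessian_vector_product(grad, x, y, free, t=None):
    """Finite-difference approximation of the Hessian-vector product (15);
    components of fixed variables are discarded."""
    if t is None:
        t = np.sqrt(np.finfo(float).eps) * max(1.0, np.linalg.norm(x)) / max(np.linalg.norm(y), 1e-16)
    hv = (grad(x + t * y) - grad(x)) / t
    hv[~free] = 0.0          # only components of free variables are used
    return hv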
5 Numerical experiments with the CUTE collection
In order to assess the reliability of GENCAN, we tested this method against some well-known alternative algorithms using all the non-quadratic bound-constrained problems with more than 50 variables from the CUTE [12] collection. The algorithms that we used for comparing GENCAN are BOX-QUACAN [26] (see also [28]), LANCELOT [11, 12] and the Spectral Projected Gradient method (SPG) (described as SPG2 in [7]; see also [8]). All the methods used the convergence criterion ||g_P(x)||_inf <= epsilon; other stopping criteria were inhibited.
In GENCAN we used
Algorithm 3.1), and
Algorithm 4.1). In all algorithms we used . The
parameters of (the line search of) Algorithm 3.2 were the default parameters
mentioned in [7] and the same used in (the line search of) Algorithm 2.1,
i.e.,
In LANCELOT we used exact second derivatives and we did not use preconditioning in the conjugate-gradient method. The reason for this is that, in GENCAN, the conjugate-gradient method for computing directions is also used without preconditioning. The other options of LANCELOT were the default ones. A small number of modifications were made in BOX-QUACAN to provide a fair comparison. These modifications were: (i) the initial trust-region radius of GENCAN was adopted; (ii) the maximum number of conjugate-gradient iterations was fixed; (iii) the accuracy for solving the quadratic subproblems was dynamic in BOX-QUACAN, varying from 0.1 to 10^{-5}, as done in GENCAN; (iv) the minimum trust-region radius was fixed at 10^{-3} to be equal to the corresponding parameter in GENCAN.
The codes are written in Fortran 77. The tests were done using an UltraSPARC from SUN, with 4 processors at 167 MHz, 1280 megabytes of main memory, and the Solaris 2.5.1 operating system. The compiler was WorkShop Compilers 4.2 Oct 1996 FORTRAN 77 4.2. Finally, we used the flag -O4 to optimize the code.
In the first five tables we report the full performance of LANCELOT, SPG, BOX-QUACAN, GENCAN (true Hessian) and GENCAN (incremental quotients). The usual definition of iteration in LANCELOT involves only one function evaluation. However, in order to unify the comparison we call an "iteration" the whole process that computes a new iterate with lower functional value, starting from the current one. Therefore, a single LANCELOT iteration involves one gradient evaluation but, perhaps, several functional evaluations. At each iteration several trust-region problems are solved approximately and each of them uses a number of CG iterations. Problems HADAMALS and SCON1LS have bounds where the lower limit is equal to the upper limit. BOX-QUACAN does not run under these circumstances, so the performance of this method in that situation is not reported in the corresponding table. In these tables, we report, for each method:
IT: number of iterations;
FE: functional evaluations;
GE: gradient evaluations;
CG: conjugate-gradient iterations, except in the case of SPG, where CG iterations are not computed;
Time: CPU time in seconds;
f(x): final functional value obtained;
||g_P(x)||_inf: sup-norm of the projected gradient at the final point.
The next three tables repeat the information of the first ones in a more compact and readable form. In Table 6 we report the final functional value obtained by each method, in the cases where there was at least one difference between them when computed with four significant digits.
In Table 7 we report, for each method, the numbers FE and (GE+CG). The idea is that a CG iteration is sometimes as costly as a gradient evaluation. The cost is certainly the same when we use the incremental-quotient version of GENCAN. Roughly speaking, GE+CG represents the amount of work used for solving subproblems and FE represents the work done on the true problem trying to reduce the objective function.
Table 8 reports the computer times for the problems where at least one of the methods used more than 1 second. The computer time used by LANCELOT must be considered under the warning made in [9], page 136: "LANCELOT [...] does not require an interface using the CUTE tools. It is worth noting that LANCELOT exploits much more structure than that
Problem n IT FE GE CG Time f(x) kg P (x)k1
6.467D 06
QRTQUAD 120 144 178 145 570 1.39 3.625D+06 3.505D 06
CHEBYQAD 50 22 28 23 463 2.22 5.386D 7.229D 06
LINVERSE 1999 22 28 23 2049 47.22 6.810D+02 3.003D 06
Table 1: Performance of LANCELOT.
provided by the interface tools." As a consequence, although GENCAN used fewer iterations, fewer functional evaluations, fewer gradient evaluations and fewer conjugate-gradient iterations than LANCELOT in SCON1LS, its computer time is greater than the one spent by LANCELOT. In some problems, like QR3DLS and CHEBYQAD, the way in which LANCELOT takes advantage of the SIF structure is also impressive.
Now we include an additional table that was motivated by the observation of Table 7. It can be observed that the number of functional evaluations per iteration is larger in GENCAN than in LANCELOT and BOX-QUACAN. There are three possible reasons:
Many SPG-iterations with, perhaps, many functional evaluations per iteration.
Many TN-iterations with backtracking.
Many TN-iterations with extrapolations.
We classify the iterations with extrapolation into successful and unsuccessful ones. A successful extrapolation is an iteration where the extrapolation produced a functional value smaller than the one corresponding to the first
Problem n IT FE GE Time f(x) kg P (x)k1
7.896D 06
QRTQUAD 120 598 1025 599 0.20 3.624D+06 8.049D 06
HADAMALS 1024
CHEBYQAD 50 841 1340 842 33.75 5.386D 9.549D 06
NONSCOMP 10000 43 44 44 2.81 3.419D 10 7.191D 06
Table 2: Performance of SPG.
trial point. An unsuccessful extrapolation corresponds to a failure in the first attempt to "double" the steplength. Therefore, in an unsuccessful extrapolation, an additional "unnecessary" functional evaluation is done and the "next iterate" corresponds to the first trial point. According to this, we report, in Table 9, the following features of GENCAN (incremental-quotient version):
SPG-IT: SPG iterations, used for leaving the current face.
SPG-FE: functional evaluations in SPG-iterations.
TN-IT: TN iterations.
TN-FE: functional evaluations in TN-iterations.
TN-(Step 1)-IT: TN-iterations where the unitary step was accepted.
TN-(Step 1)-FE: functional evaluations in TN-iterations where the
unitary step was accepted. This is necessarily equal to TN-(Step 1)-
IT.
TN-(Backtracking)-IT: TN-iterations where backtracking was necessary
Problem n IT FE GE CG Time f(x) kg P (x)k1
5.742D 06
EXPQUAD 120
28 5.23 9.133D+03 6.388D 07
QRTQUAD 120 22 28 23 214 0.10 3.625D+06 5.706D 07
CHEBYQAD 50 52 66 53 960 45.70 5.387D 9.535D 06
6.559D 06
Table 3: Performance of BOX-QUACAN.
TN-(Backtracking)-FE: functional evaluations at iterations with backtracking.
TN-(Extrap(+))-IT: successful iterations with extrapolation.
TN-(Extrap(+))-FE: functional evaluations at successful iterations with
extrapolation.
TN-(Extrap(-))-IT: unsuccessful iterations with extrapolation.
TN-(Extrap(-))-FE: functional evaluations at unsuccessful iterations with extrapolation. This number is necessarily equal to twice the corresponding number of iterations.
Problem n IT FE GE CG Time f(x) kg P (x)k1
EXPQUAD 120
3.813D 06
NONSCOMP 10000 17 43 19
9.053D 06
Table 4: Performance of GENCAN (true-Hessian version).
Problem n IT FE GE CG Time f(x) kg P (x)k1
EXPLIN 120 17 43 19
EXPQUAD 120 21 51 23 53 0.03 3.626D+06 2.236D 06
CHEBYQAD 50 31 43 2.929D 06
NONSCOMP 10000
Table 5: Performance of GENCAN (incremental-quotient version).
BDEXP 1.969D 2.744D 1.967D
QRTQUAD 3.625D+06 3.624D+06 3.625D+06 3.625D+06 3.625D+06
CHEBYQAD 5.386D 5.386D 5.387D 5.386D 5.386D
DECONVB 6.393D 09 4.826D 08 5.664D 6.043D
QR3DLS 2.245D 08 1.973D 05 1.450D
SCON1LS 5.981D 04 1.224D+00 | 1.269D 4.549D 04
Table 6: Final functional values.
Problem LANCELOT SPG BOX-QUACAN GENCAN-QUOT
FE GE+CG FE GE FE GE+CG FE GE+CG
43 58
43
EXPQUAD
QRTQUAD 178 715 1025 599 28 236 75 101
CHEBYQAD 28 486 1340 842 66 1012 43 918
28 2072 1853 1023 19 415 34 87
SCON1LS 9340 5750468 7673022 5000002 | | 8565 4995260
Table 7: Functional and equivalent-gradient (GE+CG) evaluations.
MCCORMCK 4.24 2.27 5.23 4.57 3.56
HADAMALS 4.40 1.63 | 1.80 1.19
CHEBYQAD 2.22 33.75 45.70 13.86 22.26
QR3DLS 439.31 2203.97 2286.09 976.13 523.50
Table 8: Computer time.
Type of GENCAN iterations Details of Truncated Newton iterations
SPG Iteration TN iterations Step=1 Backtracking Extrap(+) Extrap( )
Problem IT FE IT FE IT FE IT FE IT FE IT FE
Table 9: GENCAN features.
Observing Table 9 we realise that:
1. The number of SPG-iterations is surprisingly small. Therefore, only in a few iterations is the mechanism to "leave the face" activated. So, in most iterations, the number of active constraints remains the same or is increased. Clearly, SPG-iterations are not responsible for the relatively high number of functional evaluations.
2. The number of iterations where backtracking was necessary is also surprisingly small. Therefore, extrapolations are responsible for the functional-evaluations phenomenon. Since an unsuccessful extrapolation uses only one additional (unnecessary) functional evaluation, its contribution to increasing FE is also moderate. In fact, unsuccessful extrapolations are responsible for 116 functional evaluations considering all the problems; this means less than 8 evaluations per problem. It turns out that many functional evaluations are used in successful extrapolations. Considering the overall performance of the method, this seems to be a really good feature. An extreme case is BDEXP, where only one TN-iteration was performed, giving a successful extrapolation that used 11 functional evaluations and gave the solution of the problem.
Further remarks
Convergence was obtained for all the problems with all the methods tested, with the exception of SPG, which did not solve SCON1LS after more than thirty hours of computer time. The method that, in most cases, obtained the lowest functional values was GENCAN-QUOT, but the differences do not seem to be large enough to reveal a clear tendency.
As was already mentioned in [7], the behavior of SPG is surprisingly good. Although it is the only method that fails to solve a problem in reasonable time, its behavior on the problems that it does solve is quite efficient. This indicates the existence of families of problems where SPG is, probably, the best possible alternative. This observation has already been made in [8].
BOX-QUACAN has been the least successful method in this set of experiments. This is not surprising, since the authors of [16] had observed that this method outperformed LANCELOT on quadratic problems but is not so good when the function is far from being quadratic. In fact, it was this observation that motivated the present work. Nevertheless, there is still large scope for improvements of BOX-QUACAN, if we take into account that improvements in the solution of the quadratic subproblems are possible and that sophisticated strategies for updating the trust-region radius can be incorporated.
6 Experiments with very large problems
We wish to place q circles of radius r in the rectangle [0, d_1] x [0, d_2] in such a way that, for all i in {1, ..., q} and all j in I_i, the intersection between circle i and circle j is at most one point. Therefore, given the sets I_i, the goal is to determine the centers c^1, ..., c^q by solving the problem:
Minimize a sum of terms that penalize the pairwise overlaps (one term for each pair i, j with j in I_i, each term vanishing when the corresponding circles intersect in at most one point)
subject to
r <= c^i_1 <= d_1 - r,  r <= c^i_2 <= d_2 - r,  i = 1, ..., q.
The points c^1, ..., c^q are the centers of the desired circles. If the objective function value at the solution of this minimization problem is zero, then the original problem is solved.
When each set I_i contains all the remaining indices, so that every pair of circles is considered, the problem above is known as the Cylinder Packing problem [22]. The present generalization is directed to Sociometry applications.
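A possible smooth realization of this overlap objective, under the assumption that the penalty for a pair (i, j) is the squared positive part of (2r)^2 - ||c^i - c^j||^2 (an illustrative choice, not necessarily the paper's exact objective), is:

import numpy as np

def overlap_objective(c, r, pairs):
    """c: (q, 2) array of circle centers; pairs: list of index pairs (i, j) with j in I_i.
    Each term vanishes exactly when circles i and j intersect in at most one point."""
    total = 0.0
    for i, j in pairs:
        d2 = np.sum((c[i] - c[j]) ** 2)
        total += max(0.0, (2.0 * r) ** 2 - d2) ** 2
    return total

def box_bounds(q, r, d1, d2):
    """Bound constraints keeping every circle inside the rectangle [0, d1] x [0, d2];
    centers are stored row-wise, so flatten c with c.ravel() to match these vectors."""
    l = np.tile([r, r], q)
    u = np.tile([d1 - r, d2 - r], q)
    return l, u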
Table 10 describes the main features of some medium- and large-scale problems of this type. In problems 9-15 the sets I_i were randomly generated with Schrage's random number generator [35] and seed = 1. In all cases we used r = 0.5. Observe that n, the number of variables, is equal to 2q.
Tables 11 and 12 show the performances of GENCAN and LANCELOT. Internal limitations of the big-problems installation of CUTE forbid the solution of larger instances of these problems using SIF. We show the CPU
Problem n #I i box
Table 10: Medium- and large-scale classical and modified cylinder packing problems.
times of GENCAN both using SIF (SIF-Time) and Fortran subroutines (FS-Time) for computing function and gradient. We used a random initial point (generated inside the box with Schrage's algorithm and seed equal to 1). Both methods found a global solution in all the cases. In Table 12 we also report the number of free variables at the solution found by GENCAN.
GENCAN
using Fortran subroutines using SIF LANCELOT
IT FE GE CG Time IT FE GE CG Time IT FE GE CG Time
Table 11: GENCAN and LANCELOT with cylinder packing problems.
9
28 10649.79 9914682
Table 12: GENCAN with very large problems.
7 Final remarks
Numerical algorithms must be analyzed not only from the point of view of their present state but also from considerations related to their possibility of improvement. The chances of improvement of active-set methods like the one presented in this paper come from the development of new unconstrained algorithms and from the adaptation of known unconstrained algorithms to the specific characteristics of our problem. In our algorithm, the computation of the search direction is open to many possibilities. As we mentioned in the introduction, a secant multipoint scheme (with a different procedure for leaving the faces) was considered in [10] and a negative-curvature Newtonian direction for small problems was used in [6], where leaving faces is also associated with SPG. A particularly interesting alternative is the preconditioned spectral projected gradient method introduced in [30].
The extension of the technique of this paper to general linearly constrained optimization is another interesting subject of possible research. From the theoretical point of view, the extension is straightforward, and the convergence proofs do not offer technical difficulties. The only real difficulty is that we need to project onto the feasible set, both in the extrapolation steps and in the SPG iterations. In theory, extrapolation can be avoided without affecting global convergence, but projections are essential in SPG iterations. Sometimes, the feasible polytope is such that projections are easy to compute. See [8]. In those cases, the extension of GENCAN would probably be quite efficient.
Acknowledgements
The authors are very grateful to Nick Gould, who helped them in the use of the SIF language. We are also indebted to two anonymous referees whose comments helped us a lot to improve the final version of the paper.
--R
On the resolution of the generalized nonlinear complementarity problem.
A limited memory algorithm for bound constrained minimization.
Restricted optimization: a clue to a fast and accurate implementation of the Common Reflection Surface stack method
CUTE: constrained and unconstrained testing environment.
Global convergence of a class of trust region algorithms for optimization with simple bounds.
A globally convergent augmented Lagrangean algorithm for optimization with general constraints and simple bounds
Numerical methods for unconstrained optimization and nonlinear equations.
Comparing the numerical performance of two trust-region algorithms for large-scale bound-constrained minimization
Optimising the palletisation of cylinders in cases
Matrix Computations.
Preconditioned spectral gradient method for unconstrained optimization problems
On the Barzilai and Borwein choice of steplength for the gradient method
The Barzilai and Borwein gradient method for the large scale unconstrained minimization problem.
A more portable Fortran random number generator.
A class of indefinite dogleg path methods for unconstrained minimization
--TR
Global convergence of a class of trust region algorithms for optimization with simple bounds
A globally convergent augmented Lagrangian algorithm for optimization with general constraints and simple bounds
A limited memory algorithm for bound constrained optimization
Matrix computations (3rd ed.)
Gradient Method with Retards and Generalizations
Estimation of the optical constants and the thickness of thin films using unconstrained optimization
A More Portable Fortran Random Number Generator
Trust-region methods
Validation of an Augmented Lagrangian Algorithm with a Gauss-Newton Hessian Approximation Using a Set of Hard-Spheres Problems
Duality-based domain decomposition with natural coarse-space for variational inequalities
Algorithm 813
On the Resolution of the Generalized Nonlinear Complementarity Problem
A Class of Indefinite Dogleg Path Methods for Unconstrained Minimization
The Barzilai and Borwein Gradient Method for the Large Scale Unconstrained Minimization Problem
Nonmonotone Spectral Projected Gradient Methods on Convex Sets
Newton''s Method for Large Bound-Constrained Optimization Problems
Constrained Quadratic Programming with Proportioning and Projections
Augmented Lagrangians with Adaptive Precision Control for Quadratic Programming with Equality Constraints
--CTR
E. G. Birgin, J. M. Martínez, Structured minimal-memory inexact quasi-Newton method and secant preconditioners for augmented Lagrangian optimization, Computational Optimization and Applications, v.39 n.1, p.1-16, January 2008
E. G. Birgin, R. A. Castillo, J. M. Martínez, Numerical Comparison of Augmented Lagrangian Algorithms for Nonconvex Problems, Computational Optimization and Applications, v.31 n.1, p.31-55, May 2005
E. G. Birgin, J. M. Martínez, F. H. Nishihara, D. P. Ronconi, Orthogonal packing of rectangular items within arbitrary convex regions by nonlinear optimization, Computers and Operations Research, v.33 n.12, p.3535-3548, December 2006 | active-set strategies;numerical methods;box-constrained minimization;Spectral Projected Gradient
606904 | Smoothing Methods for Linear Programs with a More Flexible Update of the Smoothing Parameter. | We consider a smoothing-type method for the solution of linear programs. Its main idea is to reformulate the corresponding central path conditions as a nonlinear system of equations, to which a variant of Newton's method is applied. The method is shown to be globally and locally quadratically convergent under suitable assumptions. In contrast to a number of recently proposed smoothing-type methods, the current work allows a more flexible updating of the smoothing parameter. Furthermore, compared with previous smoothing-type methods, the current implementation of the new method gives significantly better numerical results on the netlib test suite. | Introduction
Consider the linear program
min c^T x   s.t.   Ax = b,  x >= 0,        (1)
where c in R^n, b in R^m and A in R^{m x n} are the given data and A is assumed to be of full rank, rank(A) = m. The classical method for the solution of this minimization problem is
Dantzig's simplex algorithm, see, e.g., [11, 1]. During the last two decades, however, interior-point
methods have become quite popular and are now viewed as being serious alternatives
to the simplex method, especially for large-scale problems.
More recently, so-called smoothing-type methods have also been investigated for the
solution of linear programs. These smoothing-type methods join some of the properties of
interior-point methods. To explain this in more detail, consider the optimality conditions
A^T y + s = c,   Ax = b,   x >= 0,  s >= 0,  x_i s_i = 0  (i = 1, ..., n)        (2)
of the linear program (1), and recall that (1) has a solution if and only if (2) has a solution.
The most successful interior-point methods try to solve the optimality conditions (2) by
solving (inexactly) a sequence of perturbed problems (also called the central path conditions)
A^T y + s = c,   Ax = b,   x > 0,  s > 0,  x_i s_i = mu  (i = 1, ..., n),        (3)
where mu > 0 denotes a suitable parameter. Typically, interior-point methods apply some
kind of Newton method to the equations within these perturbed optimality conditions and
guarantee the positivity of the primal and dual variables by an appropriate line search.
Many smoothing-type methods follow a similar pattern: They also try to solve (inexactly)
a sequence of perturbed problems (3). To this end, however, they rst reformulate the
system (3) as a nonlinear system of equations and then apply Newton's method to this
reformulated system. In this way, smoothing-type methods avoid the explicit inequality
constraints, and therefore the iterates generated by these methods do not necessarily belong
to the positive orthant. More details on smoothing methods are given in Section 2.
The algorithm to be presented in this manuscript belongs to the class of smoothing-type
methods. It is closely related to some methods recently proposed by Burke and Xu [2, 3]
and further investigated by the authors in [13, 14]. In contrast to these methods, however,
we allow a more flexible choice for this parameter. Since the precise way this parameter is updated within the algorithm has an enormous influence on the entire behaviour of the algorithm, we feel that this is a highly important topic. The second motivation for writing this paper is the fact that our current code gives significantly better numerical results than
previous implementations of smoothing-type methods. For some further background on
smoothing-type methods, the interested reader is referred to [4, 6, 7, 8, 16, 17, 20, 22, 23]
and references therein.
The paper is organized as follows: We develop our algorithm in Section 2, give a detailed statement and show that it is well-defined. Section 3 then discusses the global and local convergence properties of our algorithm. In particular, it will be shown that the method has the same nice global convergence properties as the method suggested by Burke and Xu [3]. Section 4 indicates that the method works quite well on the whole netlib test suite. We then close this paper with some final remarks in Section 5.
A few words about our notation: R^n denotes the n-dimensional real vector space. For x in R^n, we use the subscript x_i in order to indicate the ith component of x, whereas a superscript like in x^k is used to indicate that this is the kth iterate of a sequence {x^k} in R^n. Quite often, we will consider a triple of the form w = (x, y, s) with x in R^n, y in R^m and s in R^n; of course, w is then a vector in R^{n+m+n}. In order to simplify our notation, however, we will usually write w = (x, y, s) instead of using the mathematically more correct w = (x^T, y^T, s^T)^T. If x is a vector whose components are all nonnegative, we simply write x >= 0; an expression like x > 0 has a similar meaning. Finally, the symbol ||.|| is used for the Euclidean vector norm.
2 Description of Algorithm
In this section, we want to derive our predictor-corrector smoothing method for the solution
of the optimality conditions (2). Furthermore, we will see that the method is well-dened.
Since the main idea of our method is based on a suitable reformulation of the optimality
conditions (2), we begin with a very simple way to reformulate this system. To this end, let
denote the so-called minimum function
and let dened by
Since ' has the property that
a 0; b 0; ab
it follows that can be used in order to get a characterization of the complementarity
conditions:
Consequently, a vector w a solution of the optimality
conditions (2) if and only if it satises the nonlinear system of equations
The main disadvantage of this mapping is that it is not differentiable everywhere. In order to overcome this nonsmoothness, several researchers (see, e.g., [7, 5, 18, 21]) have proposed to approximate the minimum function by a continuously differentiable mapping with the help of a so-called smoothing parameter tau > 0. In particular, the resulting smoothed minimum function
has become quite popular and is typically called the Chen-Harker-Kanzow-Smale smoothing function in the literature [5, 18, 21]. Based on this function, we may define the mappings
phi_tau(x, s) := ( phi_tau(x_1, s_1), ..., phi_tau(x_n, s_n) )
and
Phi_tau(w) := ( A^T y + s - c,  Ax - b,  phi_tau(x, s) ).
Obviously, phi_tau is a smooth approximation of the minimum function for every tau > 0, and coincides with it in the limiting case tau = 0. Furthermore, it was observed in [18] that a vector w = (x, y, s) solves the nonlinear system of equations
Phi_tau(w) = 0        (4)
if and only if this vector is a solution of the central path conditions (3). Solving the system (4) by, say, Newton's method, is therefore closely related to several primal-dual path-following methods which have become quite popular during the last 15 years, cf. [24].
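For concreteness, the following small numerical sketch uses one common variant of the CHKS function, phi_tau(a, b) = a + b - sqrt((a - b)^2 + 4 tau^2); this particular scaling, and the way the residual blocks are stacked, are assumptions and may differ from the paper's exact definitions.

import numpy as np

def phi_min(a, b):
    """Minimum-type NCP function: zero iff a >= 0, b >= 0 and a*b = 0."""
    return a + b - np.abs(a - b)          # equals 2*min(a, b)

def phi_chks(a, b, tau):
    """Chen-Harker-Kanzow-Smale smoothing of phi_min; recovers phi_min for tau = 0."""
    return a + b - np.sqrt((a - b) ** 2 + 4.0 * tau ** 2)

def residual(A, b, c, x, y, s, tau):
    """Smoothed reformulation of the optimality conditions: all three blocks
    vanish iff (x, y, s) lies on the central path for this value of tau."""
    return np.concatenate([A.T @ y + s - c, A @ x - b, phi_chks(x, s, tau)])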
However, due to our numerical experience [13, 14] and motivated by some stronger theoretical results obtained by Burke and Xu [3], we prefer to view tau as an independent variable (rather than a parameter). To make this clear in our notation, we write phi(a, b, tau) instead of phi_tau(a, b) and, similarly, Phi(w, tau) instead of Phi_tau(w) from now on. Since the nonlinear system (4) contains only n + m + n equations in the n + m + n + 1 variables (w, tau), we add one more equation and define a corresponding mapping,
cf. [3]. We also need the following generalization of this mapping:
here, 2 (0; 1] denotes a suitable centering parameter, and : [0; 1) ! R is a function
having the following properties:
(P.1) is continuously dierentiable with
For each 0 > 0, there is a constant
(possibly depending on 0 ) such that
The following functions satisfy all these properties:
In fact, it is quite easy to see that all three examples satisfy properties (P.1), (P.2), and (P.3). Furthermore, the first mapping satisfies (P.4) with a constant that is independent of tau_0, and so does the second mapping. On the other hand, a simple calculation shows that the third example satisfies (P.4) only with a constant that depends on tau_0.
Note that one of these choices corresponds to the one used in [2, 3], whereas here we aim to generalize the approach from [2, 3] in order to allow a more flexible procedure to decrease tau. Since the precise reduction of tau has a significant influence on the overall performance of our smoothing-type method, we feel that such a generalization is very important from a computational point of view.
Before we give a precise statement of our algorithm, let us add some further comments on the properties of this function: (P.1) is obviously needed since we want to apply a Newton-type method to the system of equations, which hence has to be sufficiently smooth. The second property (P.2) implies that the function is strictly monotonically increasing. Together with its value at 0 from property (P.1), this means that the corresponding nonlinear system of equations is equivalent to the optimality conditions (2) themselves (and not to the central path conditions (3)), since the last row immediately gives tau = 0. The third property (P.3) will be used in order to show that the algorithm to be presented below is well-defined, cf. the proof of Lemma 2.2 (c). Furthermore, properties (P.3) and (P.4) together will guarantee that the sequence {tau_k} is monotonically decreasing and converges to zero, see the proof of Theorem 3.3.
We now return to the description of the algorithm. The method to be presented below is a predictor-corrector algorithm with the predictor step being responsible for the local fast rate of convergence, and with the corrector step guaranteeing global convergence. More precisely, the predictor step consists of one Newton iteration applied to the smoothed system in the variables (x, y, s, tau), followed by a suitable update of tau which tries to reduce tau as much as possible. The corrector step then applies one Newton iteration to the same system, but with the usual right-hand side being replaced by one involving a centering parameter in (0, 1]. This Newton step is followed by an Armijo-type line search.
The computation of all iterates is carried out in such a way that they belong to a neighbourhood
of the central path, where beta > 0 denotes a suitable constant. In addition, we will see later that all iterates automatically satisfy the inequality phi(x, s, tau) <= 0, which will be important in order to establish a result regarding the boundedness of the iterates, cf. Lemma 3.1 and Proposition 3.2 below.
The precise statement of our algorithm is as follows (recall that and ; denote the
mappings from (5) and (6), respectively).
Algorithm 2.1 (Predictor-Corrector Smoothing Method)
Choose w 0 :=
and select k(x
0, and set k := 0.
(Termination Criterion)
If
Compute a solution (w of the
linear system
then set
else compute is the nonnegative integer such that
(Corrector Step)
Choose
R n R m R n R of the linear system
such that
and go to Step (S.1).
Algorithm 2.1 is closely related to some other methods recently investigated by different authors. For example, for a particular choice of the update function, the above algorithm is almost identical to a method proposed by Burke and Xu [3]. It is not completely identical since we use a different update for the predictor iterate in one special case. This is necessary in order to prove our global convergence results, Theorem 3.3 and Corollary 3.4 below. On the other hand, Algorithm 2.1 is similar to a method used by the authors in [14]; in fact, with the same particular choice we almost have the method from [14]. The only difference that remains is that we use a different right-hand side in the predictor step, namely one evaluated at (w^k, tau_k), whereas [14] uses (w^k, 0). The latter choice seems to give slightly better local properties; however, the current version allows us to prove better global convergence properties.
From now on, we always assume that the termination parameter epsilon in Algorithm 2.1 is equal to zero and that Algorithm 2.1 generates an infinite sequence, i.e., we assume that the stopping criteria in Steps (S.1) and (S.2) are never satisfied. This is not at all restrictive since otherwise w^k or the predictor iterate would be a solution of the optimality conditions (2).
Lemma 2.2 The following statements hold for any k 2 N:
(a) The linear systems (7) and (8) have a unique solution.
(b) There is a unique k satisfying the conditions in Step (S.2).
(c) The stepsize ^
t k in (S.3) is uniquely dened.
Consequently, Algorithm 2.1 is well-dened.
Proof. Taking into account the structure of the Jacobians 0 (w; ) and 0
using
the fact that 0 () > 0 by property (P.2), part (a) is an immediate consequence of, e.g., [12,
Proposition 3.1]. The second statement follows from [13, Proposition 3.2] and is essentially
due to Burke and Xu [3]. In order to verify the third statement, assume there is an iteration
index k such that
for all ' 2 N . Since k(^x
we obtain from property (P.3) that
Taking this inequality into account, the proof can now be completed by using a standard
argument for the Armijo line search rule. 2
We next state some simple properties of Algorithm 2.1 to which we will refer a couple of
times in our subsequent analysis.
Lemma 2.3 The sequences fw k generated by Algorithm 2.1 have
the following properties:
(a) A T
(b) k 0 (1
denotes the
constant from property (P.4).
(c)
Proof. Part (a) holds for our choice of the starting point Hence it
holds for all k 2 N since Newton's method solves linear systems exactly. In order to verify
statement (b), we rst note that we get
from the fourth block row of the linear equation (8). Since
it therefore follows from property (P.4) and the updating rules in steps (S.2) and (S.3) of
Algorithm 2.1 that
Using a simple induction argument, we see that (b) holds. Finally, statement (c) is a direct
consequence of the updating rules in Algorithm 2.1. 2
3 Convergence Properties
In this section, we analyze the global and local convergence properties of Algorithm 2.1. Since the analysis for the local rate of convergence is essentially the same as in [3] (recall that our predictor step is identical to the one from [3]), we focus on the global properties. In particular, we will show that all iterates remain bounded under a strict feasibility assumption. This was noted by Burke and Xu [3] for a particular member of our class of methods (namely for one special choice of the update function), but is not true for many other smoothing-type methods like those from [5, 6, 7, 8, 13, 14, 22, 23].
The central observation which allows us to prove the boundedness of the iterates is that they automatically satisfy the inequality phi(x^k, s^k, tau_k) <= 0 for all k in N, provided this inequality holds for k = 0. This is precisely the statement of our first result.
Lemma 3.1 The sequences fw k
generated
by Algorithm 2.1 have the following properties:
(a) (^x
(b)
Proof. We rst derive some useful inequalities, and then verify the two statements simultaneously
by induction on k.
We begin with some preliminary discussions regarding statement (a). To this end, let
be xed for the moment, and assume that we take ^
in Step (S.2) of
Algorithm 2.1. Since each component of the function is concave, we then obtain
From the third block row of (7), we have
Hence we get from (11):
We claim that the right-hand side of (12) is nonpositive. To prove this statement, we rst
note that
with
@
@
0:
Hence it remains to show that
However, this is obvious since the last row of the linear system (7) implies
We next derive some useful inequalities regarding statement (b). To this end, we still
assume that k 2 N is xed. Using once again the fact that is a concave function in each
component, we obtain from (8)
and this completes our preliminary discussions.
We now verify statements (a) and (b) by induction on k. For
0 by our choice of the starting point w and the initial smoothing parameter
in Step (S.0) of Algorithm 2.1. Therefore, if we set ^
in Step (S.2) of
Algorithm 2.1, we also have ^
On the other hand, if we
in Step (S.2), the argument used in the beginning of this proof shows
that the inequality (^x
holds in this case.
Suppose that we have
immediately implies that we have Consequently, if we have
in Step (S.2) of Algorithm 2.1, we obviously have (^x
erwise, i.e., if we set ^
in Step (S.2), the argument used in the beginning
part of this proof shows that the same inequality holds. This completes the formal proof by
induction. 2
We next show that the sequence fw k g generated by Algorithm 2.1 remains bounded provided
that there is a strictly feasible point for the optimality conditions (2), i.e., a vector ^
x
Proposition 3.2 Assume that there is a strictly feasible point for the optimality conditions
(2). Then the sequence fw k generated by Algorithm 2.1 is bounded.
Proof. The statement is essentially due to Burke and Xu [3], and we include a proof here
only for the sake of completeness.
Assume that the sequence fw k generated by Algorithm 2.1 is un-
bounded. Since f k g is monotonically decreasing by Lemma 2.3 (b), it follows from Lemma
2.3 (c) that
for all k 2 N . The denition of the (smoothed) minimum function therefore implies that there
is no index ng such that x k
i !1 on a subsequence, since otherwise
we would have '(x k
in turn, would imply k(x on a
subsequence in contrast to (14). Therefore, all components of the two sequences fx k g and
are bounded from below, i.e.,
and s k
where
R denotes a suitable (possibly negative) constant.
On the other hand, the sequence fw k unbounded by assumption.
This implies that there is at least one component ng such that x k
on a subsequence since otherwise the two sequences fx k g and fs k g would be
bounded which, in turn, would imply the boundedness of the sequence f k g as well because
we have A T 2.3 (a)) and because A is assumed to have
full rank.
be a strictly feasible point for (2) whose existence
is guaranteed by our assumption. Then, in particular, we have
Since we also have
for all k 2 N by Lemma 2.3 (a), we get
A
by subtracting these equations. Premultiplying the rst equation in (16) with (^x x k ) T and
taking into account the second equation in (16) gives
Reordering this equation, we obtain
for all k 2 N . Using (15) as well as ^
in view of the strict feasibility of the
it follows from (17) and the fact that x k
on a
subsequence for at least one index ng that
Hence there exists a component ng (independent of k) such that
on a suitable subsequence.
using Lemma 3.1 (b), we have
for all k 2 N . Taking into account the denition of and looking at the j-th component,
this implies
for all k 2 N . Using (18) and (15), we see that we necessarily have x k
those k belonging to the subsequence for which (18) holds. Therefore, taking the square in
(19), we obtain
after some simplications. However, since the right-hand side of this expression is bounded
by 4 2
0 , this gives a contradiction to (18). 2
We next prove a global convergence result for Algorithm 2.1. Note that this result is dierent
from the one provided by Burke and Xu [3] and is more in the spirit of those from [22, 13,
14]. (Burke and Xu [3] use a stronger assumption in order to prove a global linear rate of
convergence for the sequence f k g.)
Theorem 3.3 Assume that the sequence fw k generated by Algorithm 2.1
has at least one accumulation point. Then f k g converges to zero.
Proof. Since the sequence f k g is monotonically decreasing (by Lemma 2.3 (b)) and bounded
from below by zero, it converges to a number 0. If 0, we are done.
So assume that > 0. Then the updating rules in Step (S.2) of Algorithm 2.1 immediately
give
for all k 2 N su-ciently large. Subsequencing if necessary, we assume without loss of
generality that (20) holds for all k 2 N . Then Lemma 2.3 (b) and ^
Y
Y
by assumption, it follows from (21) that lim Therefore, the
stepsize
does not satisfy the line search criterion (9) for all k 2 N large enough.
Hence we have
for all these k 2 N .
Now let w be an accumulation point of the sequence fw k g, and let fw k gK
be a subsequence converging to w . Since , we can assume without
loss of generality that the subsequence f^ k g K converges to some number ^
Furthermore, since > 0, it follows from (20) and Lemma 2.2 (a) that the corresponding
subsequence
converges to a vector
is the unique solution of the linear equation
cf. (8). Using f^ k g K ! 0 and taking the limit k !1 on the subset K, we then obtain from
(20) and (22) that
On the other hand, we get from (22), (10), property (P.3), (20), Lemma 2.3 (c), and
that
for all k 2 N su-ciently large. Using (20), this implies
is a continuously dierentiable function at due to (24), taking the
limit k !1 for k 2 K then gives
^x
^s
denotes the solution of the linear system (23).
Using (23) then gives
a contradiction to (24). Hence we cannot
have > 0. 2
Due to Proposition 3.2, the assumed existence of an accumulation point in Theorem 3.3 is
automatically satised if there is a strictly feasible point for the optimality conditions (2).
An immediate consequence of Theorem 3.3 is the following result.
Corollary 3.4 Every accumulation point of a sequence fw k generated by
Algorithm 2.1 is a solution of the optimality conditions (2).
Proof. The short proof is essentially the same as in [14], for example, and we include it
here for the sake of completeness. Let w* be an accumulation point of the
sequence fw k K denote a subsequence converging to w . Then
we have k ! 0 in view of Theorem 3.3. Hence Lemma 2.3 (c) implies
i.e., we have x 0; s 0 and x
due to the denition of .
Lemma 2.3 (a) also shows that we have A T we see that
indeed a solution of the optimality conditions (2). 2
We nally state our local rate of convergence result. Since our predictor step coincides with
the one by Burke and Xu [3], the proof of this result is essentially the same as in [3], and we
therefore omit the details here.
Theorem 3.5 Let the parameter beta satisfy the inequality beta > 2 sqrt(n), assume that the optimality conditions (2) have a unique solution w*, and suppose that the sequence {w^k} generated by Algorithm 2.1 converges to w*. Then {tau_k} converges globally linearly and locally quadratically to zero.
The central observation in order to prove Theorem 3.5 is that the sequence of Jacobian matrices Phi'(w^k, tau_k) converges to a nonsingular matrix under the assumptions of Theorem 3.5. In fact, as noted in [3, 12], the convergence of this sequence to a nonsingular Jacobian matrix is equivalent to the unique solvability of the optimality conditions (2).
4 Numerical Results
We implemented Algorithm 2.1 in C. In order to simplify the work, we took the PCx code from [10, 9] and modified it in an appropriate way. PCx is a predictor-corrector interior-point solver for linear programs, written in C and calling a FORTRAN subroutine in order to solve certain linear systems using the sparse Cholesky method by Ng and Peyton [19]. Since the linear systems occurring in Algorithm 2.1 have essentially the same structure as those arising in interior-point methods, it was possible to use the numerical linear algebra part from PCx for our implementation of Algorithm 2.1. We also apply the preprocessor from PCx before starting our method.
The initial point w is the same as the one used for our numerical experiments
in [14] and was constructed in the following way:
(a) Solve AA T using a sparse Cholesky code in order to compute y 0
(b)
(c) Solve AA T using a sparse Cholesky code to compute 0
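A dense linear-algebra sketch of this construction is given below; it assumes that step (b) sets s^0 = c - A^T y^0 and that step (c) produces the minimum-norm primal point, and it replaces the sparse Cholesky factorizations of AA^T used in the actual code by generic solves.

import numpy as np

def starting_point(A, b, c):
    """Starting point satisfying A^T y + s = c and A x = b exactly."""
    AAt = A @ A.T
    y0 = np.linalg.solve(AAt, A @ c)      # dual estimate from the normal equations
    s0 = c - A.T @ y0                     # dual slacks
    x0 = A.T @ np.linalg.solve(AAt, b)    # minimum-norm solution of A x = b
    return x0, y0, s0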
Note that this starting point is feasible in the sense that it satises the linear equations
b. Furthermore, the initial smoothing parameter was set to
i.e., 0 is equal to the initial residual of the optimality conditions (2) (recall that the starting
vector satises the linear equations in (2) exactly, at least up to numerical inaccuracies). In
order to guarantee that however, we sometimes have to enlarge the value
of 0 so that it satises the inequalities
ng with x 0
Note that the same was done in [14]. We also took the stopping criterion from [14], i.e., we
terminate the iteration if one of the following conditions hold:
(a)
Finally, the centering parameter ^
k was chosen as follows: We let ^
0:1, start with ^
if the predictor step was successful (i.e., if we were allowed to take ^
otherwise. This strategy guarantees that all centering parameters belong to the interval stated above. According to our experience, a larger value of the centering parameter usually gives faster convergence, but the entire behaviour of our method becomes more unstable, whereas a smaller value of the centering parameter gives a more stable behaviour, while the overall number of iterations increases. The dynamic choice described above tries to combine these observations in a suitable way.
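A schematic version of such an adaptive rule is sketched below; the concrete bounds and scaling factors are illustrative assumptions, not the paper's exact constants.

def update_centering(sigma, predictor_accepted, sigma_min=0.01, sigma_max=0.1):
    """Decrease the centering parameter after a successful predictor step,
    increase it otherwise, and keep it inside [sigma_min, sigma_max]."""
    if predictor_accepted:
        return max(sigma_min, 0.5 * sigma)
    return min(sigma_max, 2.0 * sigma)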
The remaining parameters from Step (S.0) of Algorithm 2.1 were chosen as follows:
We rst consider the function () := (this, more or less, corresponds to the method from
All test runs were done on a SUN Enterprise 450 with 480 MHz. Table 1 contains the
corresponding results, with the columns of Table 1 having the following meanings:
problem: name of the test problem in the netlib collection,
m: number of equality constraints (after preprocessing),
n: number of variables (after preprocessing),
k: number of iterations until termination,
P: number of accepted predictor steps,
smoothing parameter: its value at the final iterate,
residual: norm of the residual of the optimality conditions at the final iterate,
primal objective: value of the primal objective function at the final iterate.
Moreover, we give the number of iterations needed by the related method from [14] in
parantheses after the number of iterations used by our new method.
Table 1: Numerical results for Algorithm 2.1
problem  m  n  k  P  smoothing par.  residual  primal objective
1.758e 04 5.50184589e+03
adlittle
aro
agg 390 477 22 (23) 17 3.8e 02 6.257e 04 3.59917673e+07
agg2 514 750 22 (25)
agg3 514 750 21 (30)
beaconfd 86 171 21 (18) 5.156e 04 3.35924858e+04
blend
3.652e 04 3.35213568e+02
4.166e 06 3.15018729e+02
bore3d 81 138 14 (28) 11 5.9e 3.980e
brandy 133 238 3.469e 04 1.51850990e+03
9.161e 04 2.69000997e+03
cycle 1420 2773
5.207e
d2q06c 2132 5728 48 (57)
d6cube 403 5443
Table 1 (continued): Numerical results for Algorithm 2.1
problem  m  n  k  P  smoothing par.  residual  primal objective
degen2 2.901e
degen3
d
001 | | | (|) | | | |
f800 322 826 28 (36) 17 1.2e 5.876e 04 5.55679564e+05
nnis 438 935 20 (31) 17 2.0e 7.843e 04 1.72791066e+05
8.491e
t2d 7.494e 04 6.84642932e+04
9.397e
forplan 121 447 26 (28) 17 2.2e 4.722e 04 6.64218959e+02
ganges 1113 1510 20 (25) 19 2.4e 1.218e 04 1.09585736e+05
greenbea | | | (25) | | | |
greenbeb 1932 4154 43 (35) 13 1.7e 9.559e 04 4.30226026e+06
israel 174 316 17 (27) 15 1.0e 02 4.732e 04 8.96644822e+05
kb2 43 68 1.653e 06 1.74990013e+03
lot 133 346 23 (35) 12 3.2e 7.087e 04 2.52647043e+01
maros 655 1437 22 (37) 14 2.4e 1.738e 04 5.80637437e+04
8.053e 04 1.49718517e+06
3.330e 04 3.20619729e+02
nesm 654 2922 46 (52) 9 4.7e 04 4.718e 04 1.40760365e+07
perold 593 1374 26 (33) 12 2.1e 6.564e 04 9.38075527e+03
pilot 1368 4543 71 (81) 9 9.0e
pilot.ja 810 1804
pilot.we 701 2814 36 9.981e 04 2.72010753e+06
6.888e 04 2.58113924e+03
4.059e 04 4.49727619e+03
recipe 4.205e
2.793e
sc50a 8.546e
sc50b 48 76 7.714e 06 7.00000047e+01
1.049e 04 1.47534331e+07
4.563e 04 2.33138982e+06
6.230e 04 1.84167590e+04
1.834e 04 3.66602616e+04
9.098e 04 5.49012545e+04
scorpion 340 412 19 (21) 14 2.4e 04 1.815e 05 1.87812482e+03
2.169e 04 9.04293215e+02
Table 1 (continued): Numerical results for Algorithm 2.1
problem  m  n  k  P  smoothing par.  residual  primal objective
7.203e 06 8.66666364e+00
3.131e
8.910e 06 1.41224999e+03
8.233e
1.051e
seba 448 901 19 (23) 12 2.5e 1.550e 06 1.57116000e+04
share1b 112 248 29 (43) 14 2.2e 3.762e 04 7.65893186e+04
share2b 96 162 8.099e
shell 487 1451 19 (22)
ship04l 292 1905 22 (20) 7.616e 04 1.79332454e+06
ship04s 216 1281 1.561e 04 1.79871470e+06
ship08l 470 3121 25 (21) 15 2.1e 7.592e 04 1.90905521e+06
ship08s 276 1604 15 (20) 13 3.0e 02 7.416e 04 1.92009821e+06
ship12l 610 4171 21 (21) 13 7.0e 2.670e 04 1.47018792e+06
ship12s 340 1943 7.548e
2.548e
stair 356 532
standata 314 796
standgub 314 796
standmps 422 1192 14 (18) 12 9.6e 4.418e
stocfor2 1980 2868 14
stocfor3 15362 22228 23 (63) 19 2.8e 04 5.514e 05 3.99767839e+04
stocfor3old 15362 22228 23 (70) 19 2.8e 04 5.514e 05 3.99767839e+04
truss 1000 8806 3.621e 04 4.58815785e+05
vtp.base
Table 1 clearly indicates that our current implementation works much better than our previous code from [14]. In fact, for almost all examples we were able to reduce the number of iterations considerably.
We finally state some results for a quadratic choice of the update function for the smoothing parameter. Instead of giving another complete list, however, we illustrate the typical behaviour of this method by presenting the corresponding results for those test examples which lie between kb2 and scagr7 (this list includes the difficult pilot* problems) in Table 2.
Table 2: Numerical results with quadratic function
problem  m  n  k  P  smoothing par.  residual  primal objective
kb2 43 68 15 9 2.0e 2.458e
lot 133 346 22 9 3.0e 6.715e 04 2.52647449e+01
maros 655 1437 20 11 3.0e 3.805e 04 5.80637438e+04
9.450e 04 1.49718510e+06
modszk1 665 1599 26 11 2.5e 3.087e
nesm 654 2922
perold 593 1374 55 11 5.7e 05 2.585e 04 9.38075528e+03
pilot 1368 4543 53 7 1.4e 04 2.953e 04 5.57310815e+02
pilot.we 701 2814 43 4 9.8e 04 9.283e 04 2.72010754e+06
5.672e 04 2.58113925e+03
3.573e 04 4.49727619e+03
recipe 1.928e
1.193e 04 5.22020686e+01
sc50a 3.224e
sc50b 48 76 11 9 4.1e 4.955e
9.326e
Concluding Remarks
We have presented a class of smoothing-type methods for the solution of linear programs. This class of methods has similar convergence properties as the one by Burke and Xu [3], for example, but allows a more flexible choice for the updating of the smoothing parameter. The numerical results presented for our implementation of this smoothing-type method are very encouraging and, in particular, significantly better than for all previous implementations. The results also indicate that the precise updating of the smoothing parameter plays a very important role for the overall behaviour of the methods. However, this subject certainly needs to be investigated further.
--R
Introduction to Linear Programming.
A global and local superlinear continuation-smoothing method for P 0 and R 0 NCP or monotone NCP
A global linear and local quadratic noninterior continuation method for nonlinear complementarity problems based on Chen-Mangasarian smoothing functions
A class of smoothing functions for nonlinear and mixed complementarity problems.
Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities.
PCx: An interior-point code for linear programming
PCx User Guide.
Linear Programming and Extensions.
On the solution of linear programs by Jacobian smoothing methods.
Improved smoothing-type methods for the solution of linear programs
A special Newton-type optimization method
A complexity analysis of a smoothing method using CHKS-functions for monotone linear complementarity problems
Global convergence of a class of non-interior point algorithms using Chen-Harker-Kanzow-Smale functions for nonlinear complementarity problems
Some noninterior continuation methods for linear complementarity problems
Block sparse Cholesky algorithm on advanced uniprocessor computers.
A new look at smoothing Newton methods for nonlinear complementarity problems and box constrained variational inequalities.
Algorithms for solving equations.
Analysis of a non-interior continuation method based on Chen-Mangasarian smoothing functions for complementarity problems
bounds and superlinear convergence analysis of some Newton-type methods in optimization
--TR
Block sparse Cholesky algorithms on advanced uniprocessor computers
A non-interior-point continuation method for linear complementarity problems
A class of smoothing functions for nonlinear and mixed complementarity problems
Some Noninterior Continuation Methods for LinearComplementarity Problems
Primal-dual interior-point methods
Global and superlinear convergence of the smoothing Newton method and its application to general box constrained variational inequalities
A Global Linear and Local Quadratic Noninterior Continuation Method for Nonlinear Complementarity Problems Based on Chen--Mangasarian Smoothing Functions
A Global and Local Superlinear Continuation-Smoothing Method for P_0 and R_0 NCP or Monotone NCP
A Complexity Analysis of a Smoothing Method Using CHKS-functions for Monotone Linear Complementarity Problems
A Complexity Bound of a Predictor-Corrector Smoothing Method Using CHKS-Functions for Monotone LCP | global convergence;linear programs;central path;quadratic convergence;smoothing method |
606910 | Streams and strings in formal proofs. | Streams are acyclic directed subgraphs of the logical flow graph of a proof representing bundles of paths with the same origin and the same end. The notion of stream is used to describe the evolution of proofs during cut-elimination in purely algebraic terms. The algebraic and combinatorial properties of flow graphs emerging from our analysis serve to elucidate logical phenomena. However, the full logical significance of the combinatorics, e.g. the absence of certain patterns within flow graphs, remains unclear. | Introduction
The analytical method which divides proofs into blocks, analyses them separately
and puts them together again, proved its failure: by "cutting up" it destroys
what it seeks to understand, that is the dynamics within proofs [CS97]. This
important point has been understood and emphasized by J-Y. Girard who, in
1987, introduced proof nets to study proofs as global entities and to study the way
that formulas interact in a proof through logical connectives. In 1991, another
notion of graph associated to proofs has been introduced by S. Buss [Bus91] for
different purposes (namely, as a tool to show the undecidability of k-provability)
and has been employed in [Car97, Car96, Car97a] to study dynamics in proofs.
This graph, called logical flow graph, traces the flow of occurrences of formulas
in a proof.
The combinatorics and the complexity of the evolution of logical flow graphs of proofs under cut-elimination are particularly complicated and intriguing. An overview can be found in [CS97] and a combinatorial analysis is developed in [Car97b]. These difficulties constitute the main reason for looking at simpler but well-defined subgraphs of the logical flow graph and for trying to study their properties and behavior in proofs.
We shall concentrate on streams (defined in Section 3). A stream represents
a bundle of paths traversing occurrences of the same atomic formula in a proof
and having the same origin and the same target. A proof is usually constituted
by several streams. They interact with each other because of logical rules and
share common paths because of contractions. There are cases where a bundle
of paths needs to be exponentially large in size like in the propositional cut-free
proofs of the pigeon-hole principle for instance (this is a consequence of [Hak85]
and a formal argument is found in [Car97c]), and the study of streams becomes
relevant for the study of complexity of sequent calculus proofs.
Our interest lies in the topological properties of streams. We shall be concerned
only with a rough description of logical paths in a stream. This description
will be based on axioms, cuts and contraction rules occurring in the
proof. Rules introducing logical connectives will not play any role. This simplified
treatment of logical paths allows for a description of proofs as strings
(Section 6), and for a natural algebraic manipulation of proofs (Sections 7 and
8). When logical flow graphs contain cycles, this description leads to precise relations
between proofs and finitely presented groups [Car98]. Here, we will only
look at proofs whose logical flow graph is acyclic and we shall develop a theory
which relates algebraic strings to streams. We prove that any stream can be
described by an algebraic string, and that for any string there is always a proof
with a logical flow graph which is a stream described by the string (Section 6).
Usually, several strings can describe the same stream. We shall characterise the
most compact and the most explicit ones (Section 6).
In Section 8 we show that the transformation of streams during the procedure
of cut-elimination (Section 2.2) can be simulated by a finite set of rewriting
rules (Theorem 26). The notion of stream, though simple, is shown to
be very powerful at the computational level: Example 27 illustrates how, at
times, a purely algebraic manipulation of streams can completely describe the
proof transformation. Theorem 28 pinpoints how weak formulas in a proof influence
the complexity of streams during cut-elimination. Theorem 29 says that a
growth in complexity is either already "explicit" in a proof (i.e. a proof contains
a stream with large arithmetical value; this notion is defined in Section 7) or it
is due to purely global effects induced by local rules of transformation.
To conclude, let us mention that our algebraic analysis of proofs seems adequate
to approach the problem of the introduction of cuts in proofs, an important
topic in proof theory and automated deduction. It seems plausible that a
theory of the flow of information in a proof might lead to develop methods for
the introduction of cuts in proofs.
Basic notions and notation
In this section we briefly recall known concepts. For limitations of space, in
most cases, we shall refer the reader to the literature. Good sources are [Gir87a,
Tak87], and also [CS97].
2.1 Formal proofs
Formal proofs are described in the sequent calculus LK. This system is constituted
by axioms, which are sequents of the form A, \Gamma ! \Delta, A, where A is any formula and \Gamma, \Delta are any collections of formulas, by logical rules for the introduction of logical connectives, and by two structural rules: the cut rule, which combines two sequents \Gamma ! \Delta, A and A, \Gamma' ! \Delta' into the sequent \Gamma, \Gamma' ! \Delta, \Delta', and the contraction rule, which replaces two occurrences of a formula on the same side of a sequent by a single occurrence.
We shall extend LK with the F -rule, which from two sequents \Gamma ! \Delta, F (s) and \Gamma' ! \Delta', F (t) derives the sequent \Gamma, \Gamma' ! \Delta, \Delta', F (s t), where F is a unary predicate and s t denotes the result of applying a fixed binary function symbol to the terms s and t. The F -rule is added to LK because it allows to speak more directly about computations. It was considered already in [Car97b, CS96a, Car96, CS97a].
In our notation, a rule is always denoted by a bar. The sequent(s) above
the bar is called antecedent of the rule and the sequent below the bar is called
consequent.
In the following we will frequently use the notion of occurrence of a formula in
a proof as compared to the formula itself which may occur many times. Notions
as positive and negative occurrence of a formula in a sequent are defined in
In an axiom the two formulas A are called distinguished formulas
and the formulas in \Gamma; \Delta are called side formulas. A formula A in \Pi which
has been introduced as a side formula in some axiom is called
A formal proof is a binary tree of sequents, where each occurrence of a
sequent in a proof can be used at most once as premise of a rule. The root
of the tree is labelled by the theorem, its leaves are labelled by axioms and its
internal nodes are labelled by sequents derived from one or two sequents (which
label the antecedents of the node in the tree) through the rules of LK and the
F -rule.
The height of a rule R in a proof \Pi is the distance between the consequent
of R and the root of the proof-tree describing \Pi.
At times we shall consider proofs \Pi which are reduced in the sense of [Car97b],
i.e. there are no superfluous redundancies in the proof which have been built
with the help of weak occurrences. More formally, no binary rule or contraction
rule is applied to a weak formula, no unary logical rule is applied to two weak
formulas and no occurrence in cut-formulas is weak. In [Car97b] it is shown that
given any proof, we can always find a reduced proof of the same end-sequent
which has a number of lines and symbols bounded by the ones of the original
proof.
2.2 Cut-elimination
In 1934 Gentzen proved the following result
Any proof in LK can be effectively transformed into a proof which
never uses the cut-rule. This works for both propositional and predicate
logic.
The statement holds for the extension of LK with the F -rule as well.
This is a fundamental result in proof theory and in [CS97] the reader can
find a presentation of its motivations and consequences. The computational
aspects of the theorem have been largely investigated but we are still far from an
understanding of the dynamical process which can occur within proofs [Car97,
Car96, Car97a]. After the elimination of cuts, the resulting proof may have
to be much larger than the proof with cuts. For propositional proofs, this
expansion might be exponential and for proofs with quantifiers, it can be super-
exponential, i.e. an exponential tower of 2's [Tse68, Ore79, Sta73, Sta78, Sta79].
We will not enter into the details of the steps of transformation of the procedure
of cut-elimination. The reader who is unfamiliar with them can refer
to [Gir87a] or [Tak87] (and also to [CS97] or to [Car97b]).
2.3 Logical flow graphs
As described in [Car97], one can associate to a given proof a logical flow graph
by tracing the flow of atomic occurrences in it. (The notion of logical flow graph
was first introduced by Buss in [Bus91] and a similar notion is due to Girard
and appeared in [Gir87]. Here we restrict Buss' notion to atomic formulas.)
We will not give the formal definition but we will illustrate the idea with an
example. Consider the two formal proofs below formalized in the language of
propositional logic and the sequences of edges that one can trace through them
(Two example derivations, involving a cut and contractions, are displayed here in the original, with arrows indicating some of the traced links.)
Each step of deduction manipulates formulas following a logically justified
rule, and precise links between the formulas involved in the logical step are
traced (the arrows indicated in the figures above represent some of these links).
Formulas in a proof correspond to nodes in the graph and logical links induced
by rules and axioms correspond to edges. As a side effect different occurrences
of a formula in a proof might be logically linked even if their position in the
proof is apparently very far apart. Between any two logically linked occurrences
there is a path. The graph that we obtain is in general disconnected and each
connected component corresponds to a different atomic formula in the proof.
The structure of the proof on the left is interesting because it shows that paths
in a proof can get together through contraction of formulas, and the structure
on the right shows that cyclic paths might be formed.
The orientation on the edges of a logical flow graph is induced by natural
considerations on the validity of the rules of inference which we shall not discuss
here (see [Car97]). In the following we will not really exploit the direction of
the paths. We will use directions only to establish that a path starts and ends
somewhere. We might speak of a path going up or down, and of an edge being
horizontal, in case the edge appears in an axiom or between cut-formulas. These
latter will be called axiom-edges and cut-edges, respectively.
In the sequel, we call bridge any maximal oriented path that starts from a
negative occurrence, ends in a positive occurrence and does not traverse cut-
edges. The maximality condition implies that both the starting and ending
occurrences of a bridge should lie either in a cut-formula or in the end-sequent
of the proof.
A node of a logical flow graph is called a branching point if it has exactly
three edges attached to it. In a proof, branching points correspond to formulas
obtained by contraction or by a F :rule. We say that a node is a focussing
branching point if there are two edges oriented towards it. A node is called
defocussing branching point if the two edges are oriented away from it. A node
is called input vertex if there are no edges in the graph which are oriented
towards it. A vertex is called output vertex if there are no edges in the graph
which are oriented away from it. Input and output nodes are called extremal
points. In a proof, extremal points correspond to weak occurrences of formulas
and to occurrences of formulas in the end-sequent.
By a focal pair we mean an ordered pair (u; w) of vertices in the logical flow
graph for which there is a pair of distinct paths from u to w. We also require
that these paths arrive at w along different edges flowing into w.
Two logical flow graphs have the same topological structure if they can both
be reduced to the same graph by collapsing each edge between pairs of points
of degree at most 2 to a vertex.
The notions of bridge, focal pairs, topological structure, focussing and defo-
cussing point, input and output node have been introduced in [CS97a, Car97b]
where the reader can find properties and intuition.
A stream is an acyclic directed graph with one input vertex v and one output
vertex w. The pair (v; w) is called base of the stream. All other vertices in the
stream are not extremal.
If G is a directed graph, then a stream in G is a subgraph of G which is
a stream. A full stream in G based on (v; w) is a stream in G such that all
directed paths lying in G between v and w belong to the stream.
A substream of a stream is a subgraph of the stream that is based on a pair
(v; w), where v; w are nodes of the stream. A substream might have the same
base of the stream, but this is not required.
A stream of a proof \Pi is a stream in the logical flow graph of \Pi such that the
input and output vertices occur in the end-sequent of \Pi. A stream of a proof \Pi
is based on the pair of formulas (A; B), where A is a positive occurrence and B
is a negative occurrence.
In the simplest case, a stream of a proof is a bridge, but usually, the stream
of a cut-free proof will look (after stretching it) roughly as follows
(figure: a typical stream of a cut-free proof, with the input vertex on the left and the output vertex on the right)
where the bifurcation points correspond to the presence of contractions or F -
rules in a proof, the circles correspond to axiom-edges through which each path
has to pass, and the paths are oriented from left to right. In this way all
bifurcation points on the left hand side of the axiom-edges will correspond to
contractions on negative occurrences and the bifurcation points on the right will
correspond to contractions on positive occurrences or applications of F -rules.
If the proof contains cuts, a stream might be much more complicated. For
instance it might contain arbitrarily long chains of focal pairs as illustrated by
the following figure
(figure: a stream containing a chain of focal pairs between its input vertex and its output vertex)
where each horizontal edge (linking a focussing point to a defocussing point)
corresponds to a cut in the proof [Car97b]. It might also contain cyclic paths,
but we will not consider this situation here. The reader interested in cycles in
proofs can refer to [Car97c, Car97b, Car97a, Car96] for their combinatorics and
complexity.
Usually, a stream lying in an acyclic logical flow graph associated to a proof
and containing cuts, looks roughly as follows
where we see a chain of shapes similar to the one associated to cut-free proofs
but where single bridges are now allowed to take themselves this shape. As
usual, axioms lie along paths between the point where defocussing points end
and focussing points start to appear.
The following property of streams illustrates their regularity.
Proposition 1 The number of focussing points in a stream is the same as the number of its defocussing points.
Proof. The claim follows from a simple fact. Let P be an acyclic directed graph which is connected, contains one input vertex, n focussing points and m defocussing ones. Then, P has m − n + 1 output nodes. This is easily proved by induction on the values of n, m by noticing that a defocussing node induces the number of distinct output nodes to increase by 1, and a focussing node induces the number of distinct output nodes to decrease by 1. From this we conclude that if P is a stream, then n = m, since a stream has exactly one input vertex and one output vertex. □
4 Interaction of streams in proofs
A logical flow graph of a proof is a union of connected components. There might
be many of them, and typically, each component corresponds to a distinguished
atomic formula in the proof. Each component is a directed graph which has
input nodes as well as output nodes. When no cycles appear in a proof, this
is easy to check. To show the assertion in the general case, one needs to show
that cyclic logical flow graphs (e.g. graphs with several nested cycles) must have incoming edges and outgoing ones.
[Car97b].
Each component is usually constituted by several streams as illustrated by
the graph on the left
(figure: a connected component with one input vertex x and two output vertices y 1 and y 2 , shown on the left, and its two streams, shown on the right)
which contains two streams, the first has base (x; y 1 ) and the second (x; y 2 ) (as
illustrated on the right).
Connected components and streams define two different types of interaction
in a proof:
1. distinct streams can share subgraphs (as in the example above) and can
influence one another. This interaction is analysed in [Car97b] through a combinatorial
study of cut-elimination.
2. different connected components belong to the same logical flow graph because
of logical connectives. The interaction between distinguished connected
components has been studied through the notion of proof-net by Girard, Lafont,
Danos, Regnier and many others. Girard's seminal paper [Gir87] introduces the
reader to the area. We shall skip here the numerous references.
In this paper we shall not address any question concerning interaction with
the exception of Section 8.
5 Strings and stream structures
In this section we introduce a language to represent a stream as a string formalized in a language of three symbols b, ·, +, where b is a constant, · is the binary operation of concatenation, and + is the binary operation of bifurcation. We will also need two extra symbols (, ) to be used as separators. The language will be called S and the words in S are simply called strings.
Definition 2 A string is a word in the language S satisfying one of the following conditions:
i. b is a string;
ii. if w 1 and w 2 are strings then w 1 · w 2 is a string;
iii. if w 1 and w 2 are strings then (w 1 + w 2 ) is a string.
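The inductive definition above can be mirrored directly by a small datatype; the sketch below (in Python, with names of our own choosing) is reused in later illustrations of this section.

from dataclasses import dataclass

class String:                     # strings over the language S
    pass

@dataclass
class B(String):                  # the constant b (a bridge)
    pass

@dataclass
class Cat(String):                # concatenation w1 . w2
    left: String
    right: String

@dataclass
class Bif(String):                # bifurcation w1 + w2
    left: String
    right: String

# Example: the string (b + b) . b
w = Cat(Bif(B(), B()), B())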
Example 3 The string b corresponds to a stream which looks like a sequence
of edges. The diagrams below illustrate the behavior of the operations of concatenation
and of bifurcation + on streams. Obviously the pair of streams
that we consider in the diagrams below can be substituted by any other pair.
The topology of streams changes with the application of both operations since
new branching points are generated. Through concatenation we create sequences
of focal pairs and through bifurcation we create new focal pairs.
We define a first order theory whose axioms are universally quantified equations.
Definition 4 A stream structure is an algebra of terms satisfying the following axioms:
A1: b · w = w          A1': w · b = w
A2: (w 1 + w 2 ) · w = w 1 · w + w 2 · w          A2': w · (w 1 + w 2 ) = w · w 1 + w · w 2
A3: w 1 + w 2 = w 2 + w 1          A4: w 1 + (w 2 + w 3 ) = (w 1 + w 2 ) + w 3
Axioms A1 and A1' tell us that there is no topological change that can be achieved by concatenating a bridge to a stream. Axioms A2, A2' can be illustrated in terms of streams as follows: from the point of view of the input vertex, the structure of the two graphs related by such an axiom is identical, namely the number of paths going from the input vertex to the output vertex remains unchanged for both graphs. Axioms A3 and A4 guarantee that the topological structure of the stream is preserved by commutativity and associativity of +. In fact, we shall be interested in streams up to isomorphism.
Definition 5 A stream structure is called associative when it satisfies
A5: (w 1 · w 2 ) · w 3 = w 1 · (w 2 · w 3 ) (associativity of ·)
and it is called commutative when it satisfies
A6: w 1 · w 2 = w 2 · w 1 (commutativity of ·).
One can think of A1-A6 as universally quantified axioms over the variables w, w 1 , w 2 , w 3 . A stream structure is constituted by an abelian additive semi-group and by a multiplicative part. The distributivity law holds. Notice also that the operation of concatenation is not commutative in general and that in an associative stream structure we do not distinguish substrings of the form (w 1 · w 2 ) · w 3 and w 1 · (w 2 · w 3 ).
Since the operation of bifurcation is associative, in the following we will drop parentheses when not necessary. For instance, the word (w 1 + w 2 ) + w 3 will be written w 1 + w 2 + w 3 . We will also use the shorthand notation w^n instead of w + · · · + w (n times), and we will call n the multiplicity of w.
Proposition 6 The following properties are satisfied in any stream structure:
1. w · (w 1 + · · · + w n ) = w · w 1 + · · · + w · w n ;
2. (w 1 + · · · + w n ) · w = w 1 · w + · · · + w n · w;
3. any permutation of the components of a bifurcation w 1 + · · · + w n yields an equivalent string.
Proof. Checking the three properties is routine. Note that they correspond to axioms A2, A2' and A3; they are derived from their corresponding axiom with the help of A4. □
In a stream structure there are infinitely many non-equivalent strings. Namely, for any two distinct positive integers n and m, the strings b^n and b^m are not equivalent. Therefore a stream structure contains at least a countable number of non-equivalent terms.
Proposition 7 (Normalization) Let w be a string. For every stream structure, there is a unique integer k ≥ 1 such that w is equivalent to b^k in the stream structure.
Proof. Let the height h(w) of a string w be defined by induction on the construction of w in the obvious way. By induction on the height of the string w we show that there is a k such that w is equivalent to b^k.
If w is b then w is equivalent to b = b^1.
If w is of the form w 1 · w 2 then, by induction hypothesis, w 1 is equivalent to b^m and w 2 is equivalent to b^l. Therefore w 1 · w 2 is equivalent to b^m · b^l. On the other hand, by property 2 in Proposition 6 and axiom A1 one derives b^m · b^l = b · b^l + · · · + b · b^l (m times) = b^l + · · · + b^l (m times) = b^(m·l).
If w is of the form w 1 + w 2 then by induction hypothesis w 1 and w 2 are equivalent to b^m and b^l respectively, so w 1 + w 2 is equivalent to b^m + b^l and hence to b^(m+l).
To show uniqueness, let us interpret the concatenation · as the operation of multiplication on natural numbers, the bifurcation + as addition and the constant b as the number 1. In this way, the model of natural numbers becomes a natural model for the stream structure. In particular, given a string w there is exactly one value k such that w is equivalent to b^k. If this was not the case,
then the natural numbers could not be a model for the stream structure. □
Proposition 8 (Cancellation modulo torsion) The following properties are satisfied in any stream structure:
1. b · w 1 = b · w 2 implies w 1 = w 2 ;
2. w 1 · b = w 2 · b implies w 1 = w 2 ;
3. w · w 1 = w · w 2 implies w 1 ^n = w 2 ^n , where n is the integer such that w is equivalent to b^n ;
4. w 1 · w = w 2 · w implies w 1 ^n = w 2 ^n , with n as in 3.
Proof. Properties 1 and 2 are derived using axioms A1 and A1'. To derive property 3 we apply Proposition 7 to the string w and observe that there is a positive integer n such that w is equivalent to b^n. Hence w · w 1 is equivalent to b^n · w 1 and w · w 2 to b^n · w 2 . By right distributivity (i.e. axiom A2), b^n · w 1 = b · w 1 + · · · + b · w 1 (where the right hand side contains n strings of the form b · w 1 ) and analogously for b^n · w 2 ; by A1 these strings are w 1 ^n and w 2 ^n respectively. Similarly, one shows property 4. In this case axioms A1' and A2' should be used instead. □
6 Strings and streams of proofs
To associate a string to a stream in a proof, we think of the logical flow graph
of the proof as being embedded in the plane, we read a bridge as the constant
b, we read a cut-edge as performing the operation of concatenation ·, and we
express the nesting of bridges through the operation of bifurcation +.
Definition 9 A decomposition of an acyclic directed graph P is a set of streams P 1 , ..., P n lying in P such that
1. each directed path in P belongs to exactly one of the streams P j ,
2. the input node of P j is an input node of P ,
3. the output node of P j is an output node of P .
If P is a graph lying in the logical flow graph of a proof \Pi, then the symbol P - \Pi 0 denotes the restriction of P to the logical flow graph of a subproof \Pi 0 of \Pi.
Definition 10 Let \Pi be a proof whose last rule is applied to the subproofs \Pi 1 and \Pi 2 . (If the rule is unary, then consider \Pi 1 only.) A stream P is called an extension of P 1 , ..., P n if P is a stream of \Pi and fP 1 , ..., P n g is a decomposition of P - \Pi 1 and P - \Pi 2 .
In Definition 10, if the last rule of \Pi is a cut, then the number of streams
n can be arbitrarily large. In fact, a stream P might pass through the cut-
formulas, back and forth, several times.
Definition 11 Let P be a stream of a proof \Pi. A string associated to a stream
P is built by induction on the height of the subproofs \Pi 0 of \Pi in such a way that
the following conditions are satisfied:
1. if \Pi 0 does not contain cuts, let fP 1 , ..., P n g be a decomposition of P - \Pi 0 .
Each string associated to P i is b m , where m is the number of distinguished
directed paths lying in P i ,
2. if the last rule of \Pi 0 is not a cut and it is applied to \Pi 1 ; \Pi 2 (if the rule
is unary, then consider \Pi be the decomposition of
. Then the decomposition of P - \Pi 0 is fP 0
l g where
(a)
i is an extension of P j and the string associated to P 0
w j is the string associated to P j , or
(b)
i is a stream obtained by the union of streams P 0
which
are extensions of P
and are based on the
same pair (v string associated to P 0
where w jr is the string associated to P jr , for
3. if the last rule of \Pi 0 is a cut applied to subproofs \Pi
is a decomposition of P - \Pi 1 and P - \Pi 2 , then the decomposition of P - \Pi 0
is fP 0
l g, where the P 0
i 's are all possible extensions of the streams
in such that
(a)
i is an extension of P and the string associated to P 0
i is
(possibly is the string associated to
(b)
i is a stream obtained by the union of streams P 0
where
each
jr is an extension of P jr and all the
jr are based on the same pair (v The string associated to P 0
is w 0
jr is w jr ;1 ;s is the string
associated to P jr ;s for
Let us now give a couple of examples to illustrate how streams of proofs can
be read as strings.
Example 12 Consider the stream of a proof
(figure: the stream of a proof, traversing two axioms, two cut-edges and several branching points, as described below)
where a path starts on the left hand side of the picture, goes up until it reaches
a branching point. Two distinguished paths depart from this branching point,
they pass through two axioms in the proof and they rejoin into another branching
point to pass through a cut-edge, form a new bridge, pass through a second
cut-edge, go up into a new subproof, split once more, join once more and
end-up into the right hand side of the picture. The structure of this proof
can be described with different strings. For instance, the strings (b 2 b) b 2 ,
(b b) (b b) b are
descriptions of the above structure. One can easily check that the three strings
are equivalent in a stream structure. In fact, the first and the second string are
equivalent by A2 0 and the second is equivalent to the third by A2.
Example 13 Consider two streams of proofs having the following form
where the height of a cut in the proof is reflected by the position of horizontal
edges in the graph. We read the stream on the left as (b b) b and the stream
on the right as b (b b). The parenthesis denote the height of a cut in a proof.
In this way the string (b b) b represents a cut between a subproof containing
a stream b b and a second subproof containing a bridge. The representation of
b (b b) is symmetric.
Definition 14 A string associated to a stream in a proof \Pi is compact if it is determined as described in Definition 11, where we require that the streams P i lying in decompositions fP 1 , ..., P n g relative to subproofs \Pi 0 of \Pi have distinct bases (v i , w i ).
Given a stream of a proof, there is a unique compact string associated to it (up to commutativity of +). This is because, for all subproofs \Pi 0 of \Pi, there is only one stream P 0 (up to commutativity of +) which is defined as P - \Pi 0 on a given base.
In Example 12, the string (b^2 · b) · b^2 is compact. Compact strings are a succinct way to represent streams. All other representations have larger complexity, that is, a larger number of symbols. Consider, for instance, a string of the form w 1 + · · · + w n describing a stream P of a proof based on (v, w). If each w i describes a simple path in P based on (v, w), then w i is of the form b · b · · · b. In this case we say that the string w 1 + · · · + w n describes the stream P explicitly: all paths are described one by one. This description is the most expensive in terms of the number of symbols and we refer to it as the explicit representation.
Proposition 15 Let \Pi be a cut-free proof. All the streams of \Pi are described
by strings of the form b n , where n is bounded by the number of axiom-edges of
\Pi.
Proof. The claim follows from Definition 11 (assertion 1) and the following
observation. Each path belonging to a stream passes through an axiom-edge
by definition. Suppose that more than one path belonging to the same stream
passes through the same axiom. These paths will pass through the pair of
distinguished formulas of the axiom but through distinct atomic occurrences.
Therefore there should be a moment along the proof, where the occurrences
need to identify (since a stream has one input vertex and one output vertex).
But the identification is impossible because of the subformula property which
holds for cut-free proofs. □
Remark 16 The cut-free proof of F (2) ! F (2^(2^n)) which can be constructed from axioms of the form F (t) ! F (t), the F -rule, and contractions on the left, is an example of a proof where the number n in Proposition 15 corresponds exactly to the number of axioms in the proof. The logical flow graph of this proof is a stream based on (F (2), F (2^(2^n))). Since distinguished formulas in axioms are atomic and all of them are linked to the end-sequent, the number of axioms in the proof must coincide with the number of paths of the stream.
Remark 17 In proofs we can only compose and bifurcate bridges having the
same orientation. This justifies the fact that stream structures are not defined
to have a group on their additive part but simply an additive semi-group.
6.1 From strings to streams
We show that any string is associated to some stream of a proof.
Theorem 18 For each string there is a proof with a stream described by the string.
Proof. Let w be a string. We shall build a proof \Pi w whose end-sequent is of the form F (x) ! F (t), for a suitable term t, and whose logical flow graph is a stream associated to w. The construction is done by induction on the complexity of the substrings.
If w is b then \Pi w is an axiom of the form F (x) ! F (x).
If w is w 1 · w 2 then by induction we know \Pi w1 and \Pi w2 . The end-sequents of \Pi w1 and \Pi w2 are F (x) ! F (f(x)) and F (x) ! F (g(x)), for suitable terms f(x) and g(x). By substituting the occurrences of the variable x in \Pi w2 with the term f(x) we obtain a proof \Pi 0 w2 with end-sequent F (f(x)) ! F (g(f(x))), the same logical structure as \Pi w2 and the same logical graph. (This is straightforward to check.) Then, we combine with a cut on the formula F (f(x)) the proofs \Pi w1 and \Pi 0 w2 and obtain a proof of the sequent F (x) ! F (g(f(x))) whose associated string is w 1 · w 2 (by Definition 11).
If w is w 1 + w 2 then by induction we know \Pi w1 and \Pi w2 . Their end-sequents are of the form F (x) ! F (f(x)) and F (x) ! F (g(x)), for suitable terms f(x) and g(x). We apply the F -rule to \Pi w1 and \Pi w2 to obtain a proof of F (x); F (x) ! F (f(x) g(x)). By applying a contraction to the two occurrences of F (x) on the left, we obtain the sequent F (x) ! F (f(x) g(x)) and a proof with associated string w 1 + w 2 (by Definition 11). □
Remark 19 The proof \Pi constructed in Theorem 18 is formalized in the extension
of the propositional sequent calculus with F -rules. Notice that there are no
weak occurrences in \Pi and that cuts are only on atomic formulas. In particular, the proof \Pi is reduced. One might be unhappy with the presence of F -rules in \Pi and might like to look for proofs in pure propositional logic. To find proofs containing a required stream is not difficult once we allow an arbitrary use of weak occurrences in \Pi. To find a propositional proof \Pi which is reduced is, on
the other hand, a very difficult task and it is not at all clear whether there is a
uniform algorithm that given a stream, returns a proof which contains it.
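The proof of Theorem 18 in effect reads off, from a string w, the term t such that the constructed proof ends in F (x) ! F (t): concatenation becomes substitution of terms and bifurcation becomes an application of the binary function symbol. The sketch below (reusing the datatype introduced after Definition 2, with '*' standing for the binary symbol) makes this reading explicit; it is an illustration of the construction, not part of the original text.

def end_term(w, x="x"):
    # Term t with Pi_w ending in F(x) -> F(t), read off the string w.
    if isinstance(w, B):
        return x                                   # axiom F(x) -> F(x)
    if isinstance(w, Cat):                         # cut: plug w1's term into w2's proof
        return end_term(w.right, end_term(w.left, x))
    if isinstance(w, Bif):                         # F-rule followed by a contraction
        return f"({end_term(w.left, x)} * {end_term(w.right, x)})"
    raise TypeError(w)

# end_term(Cat(Bif(B(), B()), B())) == "(x * x)",
# i.e. a proof of F(x) -> F(x * x).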
Remark 20 Theorem 18 associates a stream to a given string. The proof containing such a stream is not unique. Take for instance the following transformation of streams due to the procedure of cut-elimination (the existence of such transformations is proved in [Car97b]), where the contraction (on the proof on the left) lying above w is applied much after the cut rule. The streams, before and after cut-elimination, are described by the same string. Notice also that this is the only
possible string describing the above structures. (The fact that the height of the
contraction above w is smaller than the height of the cut rule, plays a crucial
role here.)
6.2 Strings and topology of streams
Usually, proofs having streams with the same topology (i.e. the same display
of branching points), might have different strings associated to them. Take
for instance the following transformation of streams due to the procedure of
cut-elimination [Car97b]
where the proof on the left can be described by the strings b b 2 and (b b) 2 ,
but the proof on the right can only be described by (b b) 2 . Also, consider the
following pair of streams
where the proof on the left is described by the strings b b 2 and (b b) 2 , and
the one on the right by b (b
We say that a stream P in \Pi is minimal if for any subproof \Pi 0 of \Pi whose
end-sequent is combined with some cut-rule, the graph P - \Pi 0 does not contain
simple bridges as connected components.
Proposition 21 Let G 1 and G 2 be two minimal streams. If G 1 and G 2 have the same topological structure then they are described by the same explicit strings.
Proof. Let G 1 be a stream based on (v, w) and G 2 be a stream based on (x, y). By definition an explicit string for a stream is a bifurcation of strings which are concatenations of b's and describe simple paths in the stream. Since G 1 and G 2 have the same topological structure, the number of paths between v, w and between x, y must be the same, say n. In particular, the two streams are reduced by hypothesis and therefore they have the same number of cut-edges lying along each path. This is enough to conclude that if w 1 + · · · + w n is an explicit string of G 1 then it must also be a string of G 2 . Moreover, this string is unique up to permutation of the components of the bifurcation operator. □
7 Arithmetical value of strings and complexity
If a proof contains cuts, then the compact descriptions for its streams might be much shorter than the explicit ones. Let us illustrate this point with a concrete example where the presence of a chain of focal pairs in a stream is described by a compact string of size n, and by an explicit string of size 2^n.
Example 22 We look at a proof of F (2) ! F (2^(2^n)). (This example is taken from [Car97b].) There is no use of quantifiers and the formalization takes place on the propositional part of predicate logic. Our basic building block is given by the sequent F (2^(2^j)) ! F (2^(2^(j+1))), which can be proved for each j in only a few steps. (One starts with two copies of the axiom F (2^(2^j)) ! F (2^(2^j)) and combines them with the F -rule to get F (2^(2^j)); F (2^(2^j)) ! F (2^(2^(j+1))). Then one applies a contraction to the two occurrences of F (2^(2^j)) on the left and derives the sequent.) We can then combine a sequence of these proofs together using cuts to get a proof of F (2) ! F (2^(2^n)) in O(n) steps.
The logical flow graph for the proof of F (2) ! F (2^(2^n)) looks roughly like a chain of the subproofs \Pi j joined by cuts, where the notation \Pi j refers to the proofs of F (2^(2^j)) ! F (2^(2^(j+1))). The logical flow graph of each \Pi j contains two branches, one for the contraction of two occurrences of F (2^(2^j)) on the left, and another for the use of the F -rule on the right. Along the graph we notice a chain of n pairs of branches which gives rise to an exponential number of paths starting at F (2) and ending in F (2^(2^n)). There are no cycles in the proof and the logical flow graph of this proof is a stream. The compact string associated to it is b^2 · b^2 · · · b^2 (with n factors), where each of the b^2 corresponds to a focal pair in the graph. The explicit string is b^(2^n).
Can we detect a chain of focal pairs lying in a stream by reading its associated string? To answer, let us introduce some more notation. A string can always be seen as a concatenation of strings which are either of the form b or of the form w 1 + · · · + w n , where w 1 , ..., w n are strings and n > 1; we say that these strings are concatenated to each other and we call them factors of w. A factor of the form w 1 + · · · + w n is called non-trivial. The number of non-trivial factors of w is the index of w.
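On the datatype sketched after Definition 2, factors and the index can be computed by a direct recursion; the short sketch below is ours and only restates the definition just given.

def factors(w):
    # Top-level factors of w with respect to concatenation.
    if isinstance(w, Cat):
        return factors(w.left) + factors(w.right)
    return [w]

def index(w):
    # Number of non-trivial factors, i.e. factors which are bifurcations.
    return sum(1 for f in factors(w) if isinstance(f, Bif))

# index(Cat(Bif(B(), B()), B())) == 1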
Proposition 23 Let P be a stream based on (v; w) and lying in the logical flow
graph of a proof \Pi. Let w be the string representing P . If w contains a substring
of index n then there is a chain of n focal pairs in the logical flow graph of \Pi.
Proof. Let P be a stream and w be the string associated to it. By Definition
11, any substring w 0 of w describes a stream lying in some subproof \Pi 0 of
\Pi. If w 0 has index n then w 0 is of the form w 1
are non-trivial factors, for
the substrings w correspond to streams lying in subproofs \Pi i linked
through cuts, and based on pairs
(in \Pi 0 the occurrences A are linked by a cut-edge). In particular, the
substrings
are of the form
By Definition 11, w
are strings associated to streams based
on the same pair (A; B). Therefore, there is at least a focal pair lying in the
subproof
(because
? 1). This means that in \Pi 0 we have a chain of focal
pairs which is defined by the cut-edges connecting the subproofs \Pi
and the focal pairs in the \Pi i j
's. □
As illustrated in Example 22, a chain of n focal pairs lying in a stream gives rise to at least 2^n distinct paths. In Proposition 24, we show that the number of paths in a stream can be computed precisely by means of an arithmetical interpretation of strings. We say that the arithmetical value t(w) associated to a string w is defined as follows: t(b) is 1, t(w 1 · w 2 ) is t(w 1 ) · t(w 2 ), and t(w 1 + w 2 ) is t(w 1 ) + t(w 2 ).
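The arithmetical value is a one-line recursion on the datatype sketched earlier; by the next proposition it counts exactly the directed paths of the corresponding stream.

def value(w):
    # Arithmetical value t(w): concatenation multiplies, bifurcation adds.
    if isinstance(w, B):
        return 1
    if isinstance(w, Cat):
        return value(w.left) * value(w.right)
    if isinstance(w, Bif):
        return value(w.left) + value(w.right)
    raise TypeError(w)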
Proposition 24 Let w be a string associated to a stream P . Then the number
of directed paths from the input vertex to the output vertex of P is t(w).
Proof. This follows in a straightforward way from the interpretation between
streams and strings described in Example 3. □
Proposition 25 Let w be a string and w 0 be a substring of w. Then any substitution of w 0 with a string w 00 such that t(w 0 ) = t(w 00 ) gives a string whose arithmetical value is t(w).
Proof. The arithmetical term t(w) (once considered in its syntactical form) contains the arithmetical subterm t(w 0 ). If we substitute the occurrence of t(w 0 ) with t(w 00 ) we shall obtain an arithmetical term whose value is t(w) since t(w 0 ) = t(w 00 ). □
8 Streams under cut-elimination
How does a stream of a proof evolve through the procedure of cut elimination?
A more general version of this question was addressed in [Car97, Car97b] where
the combinatorial operation of "duplication" on directed graphs was introduced
and used to analyse the combinatorics of the transformations induced by cut-
elimination. Here we would like to show that the evolution of streams can be
analysed through simple algebraic manipulations. We give a number of rewriting
rules and show that these rules describe the transformation.
The set of rewriting rules that we want to consider contains the computational rules R1-R5 (their precise schemas, displayed in the original, include unit, distributivity, associativity and permutation rules for · and +), which follow from the axioms A1-A3 and A5. Axiom A4 does not have a counterpart here because from now on we shall consider only compact strings associated to proofs. It also contains the local structural rule R6, which represents the possibility to duplicate the same substrings, and the global structural rules R7 i , which cancel some of the substrings. It is clear that local and global structural rules allow a string to grow and shrink. Theorem 26 shows how the process of cut-elimination induces streams to shrink and grow. Notice that if w is a string transformed by R6 into w 0 then t(w) < t(w 0 ). If w is transformed into w 0 by R7 i , then t(w) > t(w 0 ). On the other hand, if any of the rules R1-R5 are used then t(w) = t(w 0 ).
Before stating Theorem 26, we need to introduce some more definitions. A
reduction is a sequence of applications of rewriting rules that transforms a string
w into a string w 0 . An application of a rewriting rule s ! t to w replaces an
occurrence of the substring s in w with the substring t. A reduction of a string
is called final if it leads to a string of the form b n , for some n.
We say that a path in the logical flow graph of a proof is disrupted by a step
of cut-elimination when given two nodes of the path, after the transformation
there is no more path between them. A stream is disrupted when one of its
paths is disrupted. This notion was introduced in [Car97] where the reader can
find examples.
Theorem 26 Let \Pi be a proof and let w be the compact string associated to some stream of \Pi. For any process of elimination of cuts which gives a cut-free proof with n axioms, either there is a reduction of w to a string b^m (where m ≤ n) through the rules R1-R7, or the stream is disrupted by some step of elimination of cuts either on weak occurrences or on contractions.
Proof. The proof consists of checking that at each stage of the procedure
of cut-elimination, the deformation of streams in the proof is regulated by the
set of rules R1-R7. Namely, if w is a compact string associated to a stream
P in \Pi, and if \Pi 0 is the proof obtained by transforming \Pi through a step of
elimination of cuts, then there is a compact string associated to a stream in \Pi 0
which is obtained from w after rewriting one or several of its substrings with the
rules R1-R7. Notice that several substrings of w can be simultaneously affected
because several paths belonging to the same stream might be involved in the
transformation of the same cut.
We shall consider first the behavior of a stream P which passes through the
cut-formulas that are simplified by the step of the procedure.
Let us start by considering the elimination of a cut when one of the cut-
formulas is a distinguished occurrence in an axiom. There might be several
paths (of the stream P ) that pass through the distinguished occurrences and
each of these paths will be denoted b (in the string w) because of compactness.
This is shown by an easy chasing of Definition 11. If the axiom appears on the
left, then we use R1 to replace substrings b w 0 with w 0 in w. If the axiom
appears on the right, then allows to replace substrings of the form w 0 b
with w 0 in w. Clearly the string that we obtain is compact.
If a cut is applied to formulas with non-trivial logical complexity, i.e. formulas
which are not atomic, then a directed path belonging to the stream might
pass through the same cut-formula several times and different portions of the
same path might behave differently. In particular, several directed paths belonging
to the stream might pass through the same cut-formulas. Their behavior
will be captured by a simultaneous applications of rules R1-R7 to substrings of
w which describe different portions of the stream involved in the transformation.
We shall start to consider the case where a cut is applied to two formulas
which are main formulas of two logical rules. Several situations might arise. Let
us suppose without loss of generality that the formulas are of the form A - B.
First, suppose that there is a substream lying in the stream that passes
through both A and B and that it is described by a substring (wA wAB ) wB ,
where wA describes a substream passing through A in the cut-formula on the
right, wAB describes a substream passing through both the A and the B in the
cut-formula on the left, and wB describes a substream passing through B in
the cut-formula on the right. After cut-elimination, the substream is described
either by the string (wA wAB ) wB or by the string wA (wAB wB ), depending
on the position of the cuts on A and B. In the first case no rewriting rule is
applied and the cut on A precedes the cut on B; in the second case R5 is applied and the cut on B precedes the cut on A.
Second, suppose that there are paths of the stream that pass through exactly
one of the disjuncts. In this case the paths will be simply stretched but no change
in their description will take place.
These are the only two possible situations that might occur. Of course,
a path might pass through A and B several times, or wAB might describe a
stream passing through A and B on the cut-formula on the right, or the initial
substream might be described by wA (wAB wB ). It is easy to imagine all
combinations. The main point is that the treatment specified above adapts
easily to all other variants. In particular, rule R5 0 might be used instead of R5.
If a cut is applied to a formula A obtained from a contraction on two occurrences
then the procedure of cut-elimination yields a duplication of
a subproof and this creates quite intriguing situations.
We start to handle the simplest case. (This case is illustrated in Remark 20.)
Suppose that there is a substream lying in the stream that passes through the
cut-formula which lies on the proof that will be duplicated by the procedure.
Suppose also that the extremes of the substream do not both lie in the cut-
formula. Then, the stream has to pass through the sequent resulting from the
application of the cut rule because its extremes lie in the end-sequent. Let w 3 be
the substring describing the substream. Let w be the substreams passing
through the contraction formulas A 1 ; A 2 such that w 3 (w 1 +w 2 ) is a substring
describing the topology of this portion of the stream. After duplication of the
subproof the substring will be transformed into the substring (w 3 w 1 )+(w 3 w 2 )
and this is done by applying rules R2;
Suppose now that the substream above (i.e. the substream lying in the
stream that passes through the cut-formula which lies on the proof that will be
duplicated by the procedure) is such that both its extreme points belong to the
cut-formula. This case is the most intriguing. Let w 3 be the substring describing
the substream. After passing through the cut-edges, the stream will go up to
the contraction formulas A 1 ; A 2 and it will depart into four paths: two coming
from its input vertex, say w and two going towards its output vertex, say
It might be that not all the four paths belong to the stream and because
of this we shall handle different cases. We illustrate the transformation of the
portion of the stream in the following picture
(figure: the relevant portion of the stream before and after the duplication of the subproof, each shown with its input vertex and output vertex)
where we think of the streams as being stretched.
If all four paths belong to the stream, the transformation is described by
rule R7 1 , where the paths w 1 w 3 w 2 and w 4 w 3 w 5 are lost. If w
belong to the stream, the transformation is described by R7 2 or R7 3 . The
cases where any three other paths are considered, is handled similarly. If w
or belong to the stream, then the substring is unaltered. If the paths
belong to the stream, then the stream will be disrupted and
the statement holds. This concludes the treatment of the contraction rule. (To
be precise, since the operation of bifurcation + is commutative, we might need
to use R3 to rearrange the order of the substrings of the form w
handling properly the contraction case.)
If two cuts are permuted, suppose that the stream passes through both pairs
of cut-formulas, say C be subproofs of \Pi to which a
cut on C applied and let \Pi 2 be the subproof whose last rule of inference is
a cut on D 1 ; D 2 . By Definition 11 a stream P 2 in \Pi 2 passing through
might be described either by a compact string of the form w 1 wn , or by
a compact string of the form (w 1;1
w 0 be a substring associated to a stream P 1 lying in \Pi 1 , passing through C 1
and connected by a cut-edge to P 2 . In the first case, w will contain a substring
of the form w wn ); then, we apply
(w 2 wn ). In the second case, w will contain a substring of the form
apply R5 and obtain
(w
If are contained in \Pi 1 , then rule has to be applied instead of
R2. If two cuts are permuted but the stream passes through only one of the
cut-formulas then no rule is applied.
If a cut is pushed upwards then it might happen that a contraction is pushed
below the height of the cut. This might imply that a branching point of the
stream (maybe several of them) will be pushed below a cut-edge and therefore
that the compact description of the stream might change. The new compact
description is obtained with the application of rules
In all cases treated above, the stream P passed through the cut-formulas
simplified by the step of the procedure. We shall consider now the case where
the stream does not pass through the pair of cut-formulas.
This is the case when one of the cut-formulas is weak. (Remember that the
extremes of a stream can only lie on the end-sequent.) Then the procedure of
cut-elimination will induce a disruption of the structure of the proof due to the
removal of a subproof. In case the stream passes through the subproof which is
removed, then the stream will be disrupted and the statement holds.
In case a stream passes through the side formulas of the antecedents of
a cut but not through their cut-formulas, its paths will be stretched and no
modification of the substrings is needed. The only exception is the contraction
case, where a substream passing through the side formulas of the subproof to
duplicate, might be duplicated while the subproof is duplicated. In this case,
rule R6 is used.
Let us notice that all along the proof one needs to verify that the string
associated to the proof \Pi 0 is compact. This is a straightforward verification and
we leave it to the reader.
To conclude, if the stream has not been disrupted then from Proposition 15
it follows that w has been reduced to a string b m where m is smaller than the
number of axioms in \Pi. □
Example 27 Consider the proof of the sequent F (2) ! F (2^(2^n)) given in Example 22 and described by the compact string b^2 · b^2 · · · b^2, where the terms b^2 are exactly n. We can calculate the exponential expansion of this proof after cut-elimination through a purely algebraic manipulation of strings, as we shall show for n = 4, namely for the string b^2 · b^2 · b^2 · b^2. For an arbitrary
n the approach is similar. By applying R4 to the substrings b^2 · b^2, then R3 to the substrings of the form b^2 · b that arise, and finally R1, we obtain the string b^4 · b^4 in short. By applying again R4, R3 and R1 we get b^16.
Note that b^16 is the minimum expected value for a cut-free proof computing F (2^(2^4)) from F (2). In fact the minimal tree of computation of F (2^(2^4)) contains branching points corresponding to 2^4 occurrences of F (2).
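The count b^16, and more generally b^(2^n) for the compact string of Example 22, can be checked mechanically with the arithmetical value computed earlier; the check below (ours) confirms the count but does not carry out the rewriting itself.

from functools import reduce

def b_pow(n):                         # b^n = b + ... + b  (n times)
    return reduce(lambda u, v: Bif(u, v), [B()] * n)

def chain(n):                         # b^2 . b^2 . ... . b^2  (n factors)
    return reduce(lambda u, v: Cat(u, v), [b_pow(2)] * n)

assert value(chain(4)) == 16          # the b^16 obtained in Example 27
assert all(value(chain(n)) == 2 ** n for n in range(1, 10))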
Theorem 28 Let \Pi be a proof of a sequent of the form F (s 1 ), ..., F (s k ) ! F (t) with atomic cuts and no weak formulas. Let w i be the compact string associated to the full stream of \Pi based on (F (s i ), F (t)). Then all procedures of cut-elimination transforming \Pi into \Pi 0 and w i into w 0 i are simulated by R1-R5, and w 0 i is of the form b^(t(w i )).
Proof. The proof \Pi has a very simple structure. Here are some properties:
1. the proof \Pi contains no logical rules and all formulas appearing in \Pi are
atomic. This is because cuts are defined on atomic formulas and formulas in
the end-sequent are atomic;
2. for any sequent in \Pi, exactly one formula lies on the right hand side of the
sequent. This follows from 1;
3. there are no contractions on the right in \Pi. This is because there are no
weak formulas and no logical rule can be applied on negative formulas in \Pi.
Properties 1-3 imply some properties of the flow graph of \Pi:
a. no path passes twice through the same cut-formula, since cut-formulas are atomic;
b. no path passes twice through the side formulas of a sequent used in a cut-rule. This is because at any
stage of the procedure, the sequents have exactly one formula on the right hand side, and contractions
are on the left only.
Therefore, if the cut-formula in the sequent is a positive occurrence of the form F(s_i), then the side
formulas are negative occurrences and no path can start and end in them; if the cut-formula is a negative
occurrence F(s_j), for some j, there might be several paths passing through the side formulas (in fact,
all of them should pass through the formula on the right of the sequent), but the subproof where F(s_j)
occurs cannot be duplicated because contractions can be applied only to negative formulas.
Properties a and b ensure that the rewriting rules R6 and R7 are not used by the simulation. In
particular, R1-R5 are rewriting rules of the form p → q. This implies that the final strings w'_i are of
the form b^{n_i}.
Let \Pi be a proof of the sequent S such that the number of symbols in S is n. If any cut-free proof \Pi'
of S has 2^{O(n)} lines then
1. either there is a stream w in \Pi such that t(w) is 2^{O(n)},
2. or any process of cut-elimination from \Pi to \Pi' is simulated by R6.
Proof. If \Pi' has 2^{O(n)} lines then there is a stream b^k in it where k is 2^{O(n)}. This means that b^k has
been obtained from a string w in \Pi with or without the help of R6. If R6 has not been used then the
arithmetical value t(w) is at least k, since rules R1-R5 and R7 cannot augment it.
Remark Rule R6 has a global effect. In fact, it does not concern the cut-
formulas involved in the step of elimination of cuts, but the structure of the
proof itself. It corresponds to the existence of a path in the proof which passes
twice through the side formulas of a subproof that is duplicated by the procedure
of cut-elimination.
Even if a proof might be such that no path passes twice through the side
formulas of a sequent applied to a cut-rule, during cut-elimination this property
might be lost. It is easy to check that permutation of cuts, contraction and
resolution of cut-formulas which are main formulas of logical rules, might produce
a proof which falsifies this property. Once the property is violated, rule
R6 might play a role in the transformation.
Remark 31 The expansion of \Pi 0 can be exponential with respect to the number
of sequents in \Pi, as in Example 22. This example shows that cuts on atomic
formulas can induce exponential complexity.
Problem To decide whether w_1 and w_2 can be reduced to the same string b^k, for some k, by using
rules R1-R5 can be done in polynomial time. In fact, w_1 and w_2 can be polynomially reduced to some
b^{k_1}, b^{k_2}, and it is sufficient to check whether the values k_1 and k_2 are the same or not.
If we allow the rules R1-R7, does the question become NP -complete?
--R
The undecidability of k-provability
In Annals of Pure and Applied Logic
Some combinatorics behind proofs.
Cycling in proofs and feasibility.
Turning cycles into spirals.
Duplication of directed graphs and exponential blow up of proofs.
The cost of a cycle is a square.
Asymptotic cyclic expansion and bridge groups of formal proofs.
Looking from the inside and the outside.
Propositional proofs via combinatorial geometry and the search for symmetry.
Making proofs without modus ponens: An introduction to the combinatorics and complexity of cut elimination.
A graphic apology for symmetry and implicitness.
The relative efficiency of propositional proof systems.
Linear Logic.
Proof Theory and Logical Complexity.
The intractability of resolution.
Lower bounds for increasing complexity of derivations after cut elimination.
The lengths of proofs.
Structural Complexity of Proofs.
Bounds for proof-search and speed-up in predicate calculus
Lower bounds on Herbrand's Theorem.
Proof Theory.
Complexity of a derivation in the propositional calculus.
--TR
Linear logic
Graphic Apology for Symmetry and Implicitness | cut elimination;structure of proofs;proof complexity;duplication;cycles in proofs;directed graphs;logical flow graphs |
606912 | On the complexity of data disjunctions. | We study the complexity of data disjunctions in disjunctive deductive databases (DDDBs). A data disjunction is a disjunctive ground clause R(c_1) ∨ ... ∨ R(c_k), k ≥ 2, which is derived from the database such that all atoms in the clause involve the same predicate R. We consider the complexity of deciding existence and uniqueness of a minimal data disjunction, as well as actually computing one, both for propositional (data) and nonground (program) complexity of the database. Our results extend and complement previous results on the complexity of disjunctive databases, and provide newly developed tools for the analysis of the complexity of function computation. | Introduction
During the past decades, a lot of research has been spent to overcome the limitations of conventional
relational database systems. The field of deductive databases, which has emerged from logic programming
[29], uses logic as a tool for representing and querying information from databases. Numerous
logical query languages, which extend Horn clause programming for dealing with various aspects such as
incomplete or indefinite information, have been proposed to date, cf. [1, 33].
In particular, the use of disjunction in rule heads for expressing indefinite information was proposed
in Minker's seminal paper [32], which started interest in disjunctive logic programming [30, 10]. For
example, the rule
lives in(x; US) - lives in(x; canada) - lives in(x; mexico) / lives in(x; n america) (1)
informally states that a person living in north America lives in one of the three countries there. The
semantical and computational aspects of disjunctive logic programming, and in particular, disjunctive
deductive databases, have been investigated in many papers (see [33] for an overview).
The results of this paper have been presented at the international workshop "Colloquium Logicum: Complexity," Vienna,
October 9-10, 1998. This work was partially supported by the Austrian Science Fund Project N Z29-INF, and by the British
Council-Austria ARC Programme for collaborative research (Oracle Computations within Descriptive Complexity Theory).
In this paper, we are interested in a restricted type of disjunction, which has been previously considered
e.g. in [6, 5, 12, 19, 15]. A data disjunction [19] is a ground clause R(c̄_1) ∨ · · · ∨ R(c̄_n), n ≥ 2, in
which all atoms are different and involve the same predicate R. For example, the head of the rule (1) for
x = joe is a data disjunction, as well as the disjunctive fact
loves(bill, monica) ∨ loves(bill, hillary).
A data disjunction expresses indefinite information about the truth of a predicate on a set of arguments; in
database terminology, it expresses a null value on this predicate, whose range is given by the arguments
of its atoms. In the context of deductive databases, null values of this form in the extensional
database and their complexity have been considered e.g. in [20], and in many other papers.
If, in the above example, the fact lives in(joe; n america) is known, then the data disjunction
lives in(joe; US) - lives in(joe; canada) - lives in(joe; mexico)
can be derived from rule (1). If a clause C is entailed from a database, then also any clause C 0 subsumed
by C is entailed. For example, the clause C - lives in(joe; usbekistan) is entailed by virtue of C as
well. We thus adopt the natural condition that a data disjunction C must be minimal, i.e., no proper
subclause of C is entailed.
The question we address here is the complexity of data disjunctions in a disjunctive deductive database
(DDDB).
Table
1 summarizes the problems studied in this paper (see Section 3 for precise definitions),
and the main complexity results obtained. They complement previous results on reasoning from DDDBs.
Deciding whether an arbitrary disjunction, rather than a data disjunction, follows from a DDDB has Π^P_2 data and propositional complexity, and exponentially higher expression and combined complexity [14];
various syntactic restrictions lower the complexity to coNP or even polynomial time [9]. On the other
hand, evaluating a conjunctive query over a disjunctive extensional database is coNP-complete [20], and
hence deciding entailment of a single ground atom a has coNP data and propositional complexity. Thus,
data disjunctions have intermediate complexity between arbitrary clauses and single atoms.
Observe that Table 1 contains also results on actually computing a data disjunction (assuming at most
one exists). While all the results in this table could be derived in the standard way, i.e., by proving
membership in class C and reducing a chosen C-hard problem to the problem in question, we pursue here
an "engineering" perspective of complexity analysis in databases, proposed e.g. in [18], which utilizes
tools from descriptive and succinct complexity theory and exploits properties of the deductive database
semantics. By means of these tools, hardness results can be derived at an abstracted level of consideration,
without the need for choosing a fixed C-hard problem. Such tools (in particular, complexity upgrading)
have been developed for decision problems, but are not available for function problems. We overcome
this by generalizing the tools for propositional problems in a suitable way.
Thus, the main contributions of this paper can be summarized as follows. Firstly, we determine the
complexity of data disjunctions. We obtain natural and simple logical inference problems complete for
the class \Theta P
2 of the refined polynomial hierarchy [45], and, in their computational variants, complete
problems for the function classes FP NP
k and FL NP
log [log] and their exponential analogs. Secondly, we
provide upgrading techniques for determining the complexity of function computations. They generalize
available tools for decisional problems and may be fruitfully applied in other contexts as well.
The rest of this paper is organized as follows. Section 2 states preliminaries, and Section 3 formalizes
the problems. In Section 4, the decision problems are considered, while Section 5 is devoted to computing
data disjunctions. Logical characterizations of function computations are given through a generalization
of the Stewart Normal Form (SNF) [38, 39, 17], which has been first used to characterize the class Θ^P_2.
For deriving the expression and combined complexity of function computations, upgrading results are
developed in Section 6. The final Section 7 applies the results to the area of closed-world reasoning and
gives some conclusions.

Data Disjunction
Input: A disjunctive deductive database DB = (π, E), where E is a collection of (possibly disjunctive)
ground facts and π are the inference rules, plus a distinguished relation symbol R.

                    propositional      data               expression          combined
∃DD:  does DB have a data disjunction on R?
                    Θ^P_2              Θ^P_2              PSpace^NP           PSpace^NP
∃!DD: does DB have a unique data disjunction on R?
                    Θ^P_2              Θ^P_2              PSpace^NP           PSpace^NP
-DD:  computation of the unique data disjunction on R.
                    FP^NP_∥            FP^NP_∥            FPSpace^NP          FPSpace^NP
k-DD: computation of the unique data disjunction on R, if it has at most k disjuncts (k constant).
                    FL^NP_log[log]     FL^NP_log[log]     FPSpace^NP[pol]     FPSpace^NP[pol]

Table 1: Complexity of Data Disjunctions.
Preliminaries
2.1 Deductive databases
For a background on disjunctive deductive databases, we refer to [30].
Syntax. A finite relational language is a tuple τ = (R_1, . . . , R_n, c_1, . . . , c_m), where the R_i are relation
symbols (also called predicate symbols) with associated arities a_i, and the c_i are constant symbols.
An atom is a formula of the form R_i(v̄), where v̄ is a tuple of first-order variables and constant
symbols.
A disjunctive datalog rule is a clause of the form
a_1 ∨ · · · ∨ a_k ← b_1, . . . , b_m
over a finite relational language, where the a_i's are atoms forming the head of the clause, and the b_j's are
atoms or inequalities of the form u ≠ v (where u and v are variables or constants) forming the body of
the clause.
A disjunctive deductive program (short program) is a finite collection of disjunctive datalog rules; it is
ground, if no variables occur in the rules.
If a predicate symbol occurs only in rule bodies, it is called an input predicate, otherwise it is called a
derived predicate.
A disjunctive deductive program with input negation is a program where input predicates are allowed
to appear negated.
A ground fact is a clause of the form
a_1
where a_1 is a variable-free atom; a disjunctive ground fact is a clause of the form
a_1 ∨ · · · ∨ a_n
where the a_i's are variable-free atoms.
A disjunctive deductive database (DDDB) is a tuple DB = (π, E), where π is a program and E is a
finite set of disjunctive ground facts. Here, E represents the input database, also called the extensional
part, and π are the inference rules, called the intensional part of the database DB.
Remark: Note that -; E, and - [ E are all disjunctive deductive programs, i.e., ground facts can be
included into the programs, and in fact we shall do this for defining the semantics. However, for methodological
and complexity issues, it is important to distinguish the input data from the inference rules. For
example, the complexity of evaluating DB is exponentially lower when - is fixed. In section 3, we shall
define data and query complexity to give a formal meaning to this intuition.
Semantics. The semantics of DDDBs has been defined in terms of their minimal models [32, 30]. For a
DDDB DB = (π, E), we denote by HU_DB its Herbrand universe, i.e., the set of all constants occurring in
DB. The Herbrand base HB_DB (resp., disjunctive Herbrand base DHB_DB) is the set of all ground atoms
(resp., disjunctive ground facts) of predicates in DB over HU_DB. The ground instantiation of a program π
over a set of constants C is denoted by ground(π, C); the ground instance of DB, denoted ground(DB),
is ground(π ∪ E, HU_DB).
An (Herbrand) interpretation of DB is a subset H ⊆ HB_DB. An interpretation H of DB is a model of
DB, if it satisfies each rule in ground(DB) in the standard sense. A model H of DB is minimal, if it does
not contain any other model of DB properly; by MM(DB) we denote the set of all minimal models of
DB. We write DB |= φ, if φ is true in every M ∈ MM(DB), and say that φ is entailed from
DB.
Example 2.1 Let DB = (π, E), where π is the rule q(x) ← p(x) and E contains the single disjunctive
fact p(a) ∨ p(b). The interpretations M_1 = {p(a), q(a)} and M_2 = {p(a), p(b), q(a), q(b)} are among the models of DB; M_1 is
minimal while M_2 is not. The minimal models of DB are {{p(a), q(a)}, {p(b), q(b)}}.
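To make the minimal-model semantics concrete, here is a small brute-force sketch (mine, not part of the paper), assuming the database of Example 2.1, i.e. π = {q(x) ← p(x)} and E = {p(a) ∨ p(b)}; it enumerates all Herbrand interpretations and keeps the minimal models.

```python
from itertools import chain, combinations

atoms = ["p(a)", "p(b)", "q(a)", "q(b)"]          # Herbrand base of Example 2.1

def is_model(m):
    if "p(a)" not in m and "p(b)" not in m:       # the disjunctive fact p(a) v p(b)
        return False
    for c in ("a", "b"):                          # ground instances of the rule q(x) <- p(x)
        if f"p({c})" in m and f"q({c})" not in m:
            return False
    return True

subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
models = [frozenset(m) for m in subsets if is_model(m)]
minimal = [m for m in models if not any(n < m for n in models)]
print(sorted(sorted(m) for m in minimal))
# [['p(a)', 'q(a)'], ['p(b)', 'q(b)']]
```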
Remark. It is easy to see [32] that for each positive clause C, DB |= C holds if and only if C is satisfied
in all models of DB. We will repeatedly use this fact.
The set of minimal models of DB has been characterized in terms of a unique least model-state MS
(see [30]), i.e., a subset of DHB DB , which can be computed by least fixpoint iteration of an operator T S
generalizing the standard T P operator of logic programming [29]. In general, the computation of MS
takes exponential space and time, even if the program - of DB is fixed.
2.1.1 Negation
Introducing negation in disjunctive deductive databases is not straightforward, and gave rise to different
semantics, cf. [33]. We restrict here to input negation, i.e., the use of negated atoms :R( - t) in rule bodies
where R is an extensional predicate, and adopt a closed-world assumption (CWA) on models imposing the
following condition: any accepted model M of E), restricted to the extensional part, must be a
minimal model of E. Unless stated otherwise, a model of a DDDB must satisfy this kind of closed-world
assumption.
Observe that this condition is satisfied by each M 2 MM(DB) if - is negation-free; furthermore, if E
contains no disjunctive facts, then :R(-c) is true in every M 2 MM(DB) iff
As for complexity, it is easy to see that checking whether the restriction of M to its extensional part is
a minimal model of E is possible in polynomial time. Hence, the complexity of model checking and of
deciding DB does not increase through the CWA on models. Furthermore, if E is restricted to
disjunction-free ground facts, input negation can be eliminated in computation as follows.
Definition 2.1 Let τ be a finite relational language, and let τ' extend τ by a fresh relation symbol R' of
the same arity for every relation symbol R in τ. NEG_τ' denotes the class of all finite τ'-structures A where for all relations R in τ, R'^A is the complement of R^A.
Proposition 2.1 Extending a given τ-structure to its corresponding NEG_τ' structure and replacing negated input literals
¬R(t̄) in a program π by R'(t̄) is possible in LOGSPACE.
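As a concrete illustration of Proposition 2.1 (a sketch under my own conventions, not the paper's construction), the complement relations can be computed as follows; the function name and the primed naming scheme are mine.

```python
from itertools import product

def add_complements(domain, relations):
    """relations: dict mapping a relation name to (arity, set of tuples).
    For every relation R, add a relation R' holding the complement of R,
    so that negated input literals :R(t) can be replaced by R'(t)."""
    extended = dict(relations)
    for name, (arity, tuples) in relations.items():
        all_tuples = set(product(domain, repeat=arity))
        extended[name + "'"] = (arity, all_tuples - tuples)
    return extended

print(add_complements(["a", "b"], {"p": (1, {("a",)})}))
# {'p': (1, {('a',)}), "p'": (1, {('b',)})}
```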
In the derivation of hardness results, we shall consider DDDBs E) using input negation but
where E is disjunction-free. Hence, all hardness results in this paper hold for DDDBs without negation
and non-disjunctive (i.e., relational) facts as well.
2.2 Complexity
In this section, we introduce some of the more specific complexity classes and notions employed in
the paper; we assume however some familiarity with basic notions of complexity theory such as oracle
computations, NP, PSpace, L etc.
The class Θ^P_2 contains the languages which are polynomial-time truth-table reducible to sets in NP. It
has a wide range of different characterizations [45, 21]. In particular, the following classes coincide with
Θ^P_2:
– polynomial time computation with k rounds of parallel queries to an NP oracle [27];
– polynomial time computation where the number of queries to an NP oracle is at most logarithmic
in the input size [25];
– logarithmic space computation where the number of queries to an NP oracle is at most logarithmic
in the input size [26]. 1
1 Observe that the space for the oracle tape is not bounded. Unbounded oracle space is also assumed for all other classes
using an oracle in this paper.
Figure 1: Function classes corresponding to Θ^P_2. (The figure shows the inclusions among the function classes FP^NP_∥, FP^NP_log, FL^NP_log, and FL^NP_log[log], labelled I-IV, together with complete problems such as QUERY.)
For an overview of different characterizations and their history, consult [45, 21]. It is shown in [21, 4,
37, 40] that this picture changes when we turn to function computation. The above mentioned list gives
rise to at most three presumably different complexity classes FP^NP_∥, FP^NP_log, and FL^NP_log, which are shown
in Figure 1. Here, for any function class FC, we denote by FC[log] the restriction of FC to functions with
logarithmic output size. Moreover, ∥[k] denotes k rounds of parallel queries, where k is a constant.
The relationships between the complexity classes in Figure 1 have been attracting quite some research
efforts, which led to a number of interesting results.
ffl II=III is equivalent to
ffl I=II is equivalent to the property that SAT is O(log n) approximable. This was shown in [2],
after I)II was proved in [8]. (Here f -approximability of a set A means that there is a function
g such that for all x holds that g(x
ffl Furthermore, if I=II, then (1SAT,SAT), i.e., promise SAT, is in P [4, 40], FewP=P, NP=R [37],
n), and NP ' DTIME(2 n O(1= log log n)
To compare the complexity of functions, and to obtain a notion of completeness in function classes, we
use Krentel's notion of metric reducibility [25]:
Definition 2.2 A function f is metric reducible (- mr -reducible) to a function g (in symbols, f - mr g),
if there is a pair of polynomial-time computable functions h 1 and h 2 such that for every x,
Proviso 1. Let C be a complexity class. Unless stated otherwise, we use the following convention:
C-completeness is defined with respect to LOGSPACE reductions, if C is a class of decision problems,
and with respect to metric reductions, if C is a class of function problems.
Some complete problems for function classes are shown in Figure 1. The canonical FP^NP_∥-complete
problem is QUERY, i.e., computing the string χ_SAT(I_1) · · · χ_SAT(I_n) for given SAT instances I_1, . . . , I_n,
where χ_SAT(I_j) = 1 if I_j is satisfiable and χ_SAT(I_j) = 0 otherwise.
SUPREMUM is computing, given a Boolean formula F over variables x_1, . . . , x_n, the string b_1 · · · b_n where b_i = 1 iff
there is a satisfying assignment to the variables of F such that x_i is true. CLIQUE
SIZE is computing the size of a maximum clique in a given graph. Note that this problem is also complete
for FP^NP_log. All these problems, turned into proper decision problems, are Θ^P_2-complete. In particular,
deciding whether the maximum clique size in a graph is even and deciding whether the answer string to
QUERY contains an even number of 1's are Θ^P_2-complete, cf. [45].
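For illustration, the following brute-force Python sketch computes QUERY and SUPREMUM on small inputs; the exhaustive enumeration stands in for the NP oracle, and reading SUPREMUM as the bitwise supremum of all satisfying assignments is my interpretation of the problem.

```python
from itertools import product

def satisfiable(cnf, n_vars):
    # cnf: list of clauses; a clause is a list of non-zero ints (DIMACS-style literals)
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(l) - 1] == (l > 0) for l in clause) for clause in cnf):
            return True
    return False

def query(instances):
    # QUERY: map SAT instances I_1,...,I_n to the string chi(I_1)...chi(I_n)
    return "".join("1" if satisfiable(cnf, n) else "0" for cnf, n in instances)

def supremum(cnf, n_vars):
    # bit i is 1 iff some satisfying assignment sets x_i to true; None if unsatisfiable
    if not satisfiable(cnf, n_vars):
        return None
    return "".join("1" if satisfiable(cnf + [[i + 1]], n_vars) else "0"
                   for i in range(n_vars))

print(query([([[1], [-1]], 1), ([[1, 2]], 2)]))   # -> "01"
print(supremum([[1, 2], [-1, -2]], 2))            # -> "11"
```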
2.3 Queries and descriptive complexity
Definition 2.3 Let τ be a finite relational language, and let δ = {R} be a language containing a single
relation symbol R. A query Q is a function which maps τ-structures to δ-structures over the same
domain, s.t. Q(A) and Q(B) are isomorphic if A and B are isomorphic. If R is nullary, then Q is a
Boolean query.
A Boolean query Q is regarded as a mapping from τ-structures to {0, 1} s.t. Q(A) = Q(B) for isomorphic A, B.
Remarks. (1) If we disregard nonelementary queries, we can identify queries with higher order definable
relations. (2) Note that "query" is also used for oracle calls. (3) Since queries are functions, we shall
also write them as sets of pairs (A; Q(A)).
Definition 2.4 Let - be a finite relational language with a distinguished binary relation succ, and two
constant symbols min; max. Then SUCC - is the set of all finite structures A with at least two distinct
elements where succ A is a successor relation on jAj, and min A ; max A are the first and last element wrt
the successor relation, respectively.
Note that queries are not defined over SUCC - , but over arbitrary -structures; this is called "order
independence" of queries. Many query languages however seem to require a built-in order for capturing
complexity classes, i.e., capturing requires that the -structures are extended by a contingent ordering to
structures from SUCC - . Thus, when we write on ordered structures/databases, or on SUCC - , we mean
that the queries are computed on -structures which are extended to SUCC - structures.
The following theorems provide examples of this phenomenon.
Definition 2.5 A SNF formula (Stewart Normal Form) is a second-order formula of the form
where ff and fi are \Pi 1
1 second order formulas with equality having the free variables -
y. An SNF sentence is
a SNF formula without free variables. The Skolem functions for the variables -
x are called SNF witnesses.
Lemma 2.2 ([38, 39, 17]) Every \Theta P
2 -computable property on SUCC - is expressible as
where oe is a SNF sentence.
This result, in equivalent terms of first-order logic with NP-computable generalized quantifiers, is contained
for particular cases of generalized quantifiers in [38, 39], and was given for broad classes of generalized
quantifiers in [17].
On a structure A, a formula '(-x) with free variables -
x defines the relation ' A given by f-c j A '(-c)g.
A program - defines a relation R on A, if (-; on A. In particular, if
R is nullary, ' (resp., -) defines a property on A.
Lemma 2.3 (immediate from [13, 14]) Every \Pi 1
1 definable property ' on SUCC - is expressible by a
disjunctive datalog program - ' using input negation.
Remark: Note that Lemma 2.3 does not require inequalities in rule bodies, since inequality is definable
in the presence of order, cf. [14].
3 Data Disjunctions
Definition 3.1 Given a DDDB DB, a disjunctive ground fact δ = R(c̄_1) ∨ · · · ∨ R(c̄_n), n ≥ 2, is called a data
disjunction, if
1. DB |= δ, and
2. DB ⊭ δ' for every proper non-empty subclause δ' of δ.
In this case, we say that DB has a data disjunction on R.
A data disjunction can be seen as a kind of null value in a data base.
Example 3.1 The DDDB DB = (π, E) with π = ∅ and E = {Pa ∨ Pb ∨ Pc}
has a data disjunction Pa ∨ Pb ∨ Pc.
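Definition 3.1 can be checked directly once an entailment test DB |= · is available; the following sketch (mine) does so, with a toy oracle standing in for the database of Example 3.1.

```python
from itertools import combinations

def is_data_disjunction(entails, clause):
    """clause: a set of at least two distinct ground R-atoms.
    entails(S) should decide DB |= (disjunction of the atoms in S)."""
    if len(clause) < 2 or not entails(clause):
        return False
    # minimality: no proper non-empty subclause may be entailed
    return not any(entails(set(sub))
                   for r in range(1, len(clause))
                   for sub in combinations(clause, r))

# toy oracle for Example 3.1: exactly the supersets of {Pa, Pb, Pc} are entailed
entails = lambda s: {"Pa", "Pb", "Pc"} <= set(s)
print(is_data_disjunction(entails, {"Pa", "Pb", "Pc"}))  # True
print(is_data_disjunction(entails, {"Pa", "Pb"}))        # False (not entailed)
```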
Definition 3.2 Given a DDDB DB, the maximal disjunction on R (in symbols, md(DB, R)) is the disjunctive
ground fact
md(DB, R) = ∨ { R(c̄) : R(c̄) ∈ HB_DB and DB ⊭ R(c̄) }.
Lemma 3.1 DB has a data disjunction on R if and only if DB |= md(DB, R).
Proof. If DB has a data disjunction δ on R, then no atom R(c̄) of δ is implied by DB. Therefore, δ is a
subclause of md(DB, R), and thus DB |= md(DB, R). On the other hand, if DB |= md(DB, R), then
clearly md(DB, R) is not empty. Either md(DB, R) is a data disjunction itself, or atoms of md(DB, R)
can be removed until a minimal disjunction δ is reached such that DB |= δ. Since by definition no
atomic subformula of md(DB, R) is implied by DB, δ must contain at least two different atoms. J
In measuring the complexity of data disjunctions, we distinguish several cases following Vardi's [41]
distinction between data complexity, expression complexity (alias program complexity), and combined
complexity.
Definition 3.3 The problems 9DD, 9!DD, -DD, and k -DD are defined as follows:
Instance: A DDDB DB = (π, E), and a relation symbol R.
Query: 9DD: Does DB have a data disjunction on R ?
9!DD: Does DB have a unique data disjunction on R ?
-DD: Compute the unique data disjunction on R if it exists, and # otherwise.
k -DD: Compute the unique data disjunction on R, if it exists and has at most k disjuncts,
and # otherwise.
Observe that 9DD, called ignorance test in [5], has been used in [5, 6] to discriminate the expressive
power of different query languages based on nonmonotonic logics over sets of disjunctive ground facts.
Problem 9!DD corresponds to the unique satisfiability problem. The uniqueness variant of a problem has
often different complexity.
Definition 3.4 Let \Pi be one of 9DD, 9!DD, -DD, or k -DD.
ffl The data complexity of \Pi is the complexity of \Pi with parameter - fixed.
ffl The expression complexity of \Pi is the complexity of \Pi with parameter E fixed.
ffl The propositional complexity of \Pi is the complexity of \Pi where - is ground.
ffl The (unconstrained) complexity of \Pi is also called the combined complexity of \Pi.
Problem \Pi has combined (or propositional) complexity C , if \Pi is C-complete with respect to combined
complexity. \Pi has data (or expression) complexity C , if \Pi is in C with respect to
data (resp. expression) complexity for all choices of the parameter, and \Pi is C-complete with respect to
data (resp. expression) complexity for a particular choice of the parameter.
4 Existence of Data Disjunctions
Theorem 4.1 Let Q be a fixed Boolean query. Over ordered databases, the following are equivalent:
1. Q is Θ^P_2-computable.
2. Q is definable by a SNF sentence.
3. There exist a program π and a relation symbol R s.t. (π, A) has a data disjunction over R iff
Q(A) = 1.
4. Q is equivalent to a SNF sentence whose SNF witnesses are uniquely defined.
5. There exist a program π and a relation symbol R s.t. (π, A) has a unique data disjunction over R
iff Q(A) = 1.
Proof. The equivalence of 1: and 2: is stated in [17]. A close inspection of the proof in [17] shows
that in fact 1: is also equivalent to 4:
3. → 1.: By Lemma 3.1, the following algorithm determines if DB has a data disjunction on R:
Algorithm DDExistence(DB, R)
1: M := ∅;
2: for all R(c̄) ∈ HB_DB do
3:    if not (DB |= R(c̄)) then M := M ∪ {R(c̄)};
4: φ := ∨M;
5: if DB |= φ then return true else return false;
Note that in line 4, φ equals md(DB, R). The algorithm DDExistence works in polynomial time and
makes two rounds of parallel queries to an NP oracle, and thus the problem is in P^NP_∥[2] = Θ^P_2.
5: ! 1:: The following algorithm DDUniqueness is an extension of DDExistence.
Algorithm DDUniqueness(DB, R)
01: M := ∅;
02: for all R(c̄) ∈ HB_DB do
03:    if not (DB |= R(c̄)) then M := M ∪ {R(c̄)};
04: φ := ∨M;
05: if not (DB |= φ) then return false;
06: ψ := ∅;
07: for all R(c̄) ∈ M do
08:    if not (DB |= ∨(M \ {R(c̄)})) then ψ := ψ ∪ {R(c̄)};
09: φ := ∨ψ;
10: if DB |= φ then return true else return false;
Note that lines 1 to 4 coincide with DDExistence. On line 5, the algorithm terminates if no data
disjunction exists. Otherwise, all possible data disjunctions are subclauses of md(DB, R). Lines 6
to 9 construct a subclause φ of md(DB, R); it contains all those literals R(c̄) which cannot be removed
from md(DB, R) without destroying the data disjunction, i.e., it contains those literals which necessarily
appear in every data disjunction. Thus, if φ is a data disjunction, it is the unique one. On the other hand,
if a unique data disjunction exists, it is by construction equal to φ.
Like the algorithm DDExistence, this algorithm also works in polynomial time making a constant
number of rounds of parallel queries to an NP oracle. Hence, the problem is in Θ^P_2.
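Both algorithms translate almost literally into the following sketch (mine), again parameterized by an abstract entailment oracle in place of the NP oracle queries.

```python
def dd_existence(entails, R_atoms):
    # md(DB, R): the disjunction of all R-atoms that are not individually entailed
    md = frozenset(a for a in R_atoms if not entails({a}))
    return len(md) >= 2 and entails(md)

def dd_uniqueness(entails, R_atoms):
    md = frozenset(a for a in R_atoms if not entails({a}))
    if len(md) < 2 or not entails(md):
        return False                  # no data disjunction exists at all
    # psi: atoms that cannot be dropped from md without destroying entailment;
    # they occur in every data disjunction on R
    psi = {a for a in md if not entails(md - {a})}
    return entails(psi)               # psi entailed  <=>  it is the unique data disjunction

# toy oracle: exactly the clauses containing {Pa, Pb, Pc} are entailed
entails = lambda s: {"Pa", "Pb", "Pc"} <= set(s)
atoms = {"Pa", "Pb", "Pc", "Pd"}
print(dd_existence(entails, atoms), dd_uniqueness(entails, atoms))  # True True
```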
2: ! 3:: Let ' be a formula of the form
By Lemma 2.3 there exist programs -A and -B containing predicate symbols A and B such that for all
Let - be the program -A [ -B with the additional rules
Observe that P does not occur in -A [ -B , and thus, by well-known modularity properties [14, Section
5], the minimal models of - on A are obtained by extending the minimal models of -A [ -B on A.
It is easy to see that has a data disjunction on P if and only if there exists a tuple - c on A
such that A indeed computes property ' on SUCC - .
From the equivalence of 2: and 4:, it follows that the data disjunction of program - in the
proof of 2: ! 3: is unique. J
Corollary 4.2 The propositional and data complexity of 9DD and 9!DD are in \Theta P
. The expression and
combined complexity of 9DD and 9!DD are in PSpace NP .
Proof. It remains to consider expression and combined complexity. When the program is not fixed,
the size of HB DB is single exponential in the input, and thus the algorithm DDExistence takes exponentially
more steps. Thus, the problem is in EXPTIME NP
k , which coincides with PSpace NP [18]. J
Since \Theta P
2 has complete problems, we obtain from Theorem 4.1 the following.
Corollary 4.3 There is a program - for which 9DD and 9!DD are \Theta P
Hence, we obtain the announced result.
Theorem 4.4 The data complexity and propositional complexity of 9DD and 9!DD is \Theta P
.
Note that the propositional complexity of 9DD has been stated in [12]. The hardness proof there,
given by a standard reduction, is far more involved; this indicates the elegance of using the descriptional
complexity approach.
Since the data complexity of a query language is uniquely determined by its expressive power, two
languages with the same expressive power will always have the same data complexity. Hence, data
complexity is a property of semantics. Expression and combined complexity, however, depend on the
syntax of the language. Therefore, it is in general not possible to determine the expression complexity of
a query language L from its expressive power. Indeed, both the syntax and the semantics of L impact on its
expression complexity. In spite of these principal obstacles, the typical behavior of expression complexity
was often found to respect the following pattern: If L captures C , then the expression complexity of L is
hard for a complexity class exponentially harder than C .
The main result of [18] shows that all query languages satisfying simple closure properties indeed
match the above observation. Suppose that in a database, domain elements are replaced by tuples of
domain elements. This operation is natural when a database is redesigned; for instance, entries like "John
Smith" in a database A can be replaced by tuples ("John","Smith") in a database B. It is natural to
expect that a query QA over A can be easily rewritten into an equivalent query Q B over B. We call Q B a
vectorized variant of QA . This is the essence of the first closure property:
Vector Closure: A query language is uniformly vector closed, if the vectorized variants of query expressions
can be computed in LOGSPACE.
The second closure property is similar. Suppose again that a database A is replaced by a database B
in such a way that all relations of A can be defined by views which use only unions and intersections
of relations in B. Then, it is again natural to expect that a query QA over A can be translated into an
equivalent query Q B over B. In this case, we call Q B an interpretational variant of QA .
Interpretation Closure: A query language is uniformly interpretation closed, if the interpretational variants
of queries can be computed from the database schemata in LOGSPACE.
In conclusion, we have the following closure condition (see [18] for a formal definition).
Definition 4.1 A query language is uniformly closed, if it is uniformly vector closed and uniformly interpretation
closed.
Lemma 4.5 ([18]) The language of DDDBs is uniformly closed.
Combining Corollary 4.2 and the following Proposition 4.6, we obtain the expression and combined
complexity of 9DD and 9!DD.
Proposition 4.6 ([18]) If a language is uniformly closed, and expresses all \Theta P
2 properties of SUCC - ,
then its expression complexity and combined complexity are at least PSpace NP .
Theorem 4.7 The expression complexity and the combined complexity of 9DD and 9!DD is PSpace NP .
5 Computation of Data Disjunctions
5.1 Data complexity and propositional complexity
Theorem 5.1 Let Q be a fixed query. Then, over ordered databases the following are equivalent.
1. Q is FP NP
computable.
2. Q is definable by a SNF formula.
3. There exist a program - and a relation symbol R s.t. Q(A) is polynomial time computable from
the unique data disjunction of (A; -) on R.
Proof. 1: ! 2:: The problem of deciding whether a given tuple - c on A fulfills -
easily
seen to be in \Theta P
2 . Thus, Lemma 2.2 implies there is a SNF sentence oe such that A; - c
(provide -
c through a designated singleton relation R - c , and use 9-yR - c (-y) to access -
c). Hence, there is an
2: ! 3:: Similarly as in the proof of Theorem 4.1, let ' be the SNF formula
having the free variables - y. By Lemma 2.3 there exist programs -A and -B containing predicate symbols
A and B such that for all A(-c; -
d) 2 HB -A and B(-c; -
d) 2 HB -B
d) iff A
d) and (A; -B )
d) iff A
We have to construct a program - whose unique data disjunction on input A over relation R contains the
information about all tuples in ' A . To this end, consider the program in Figure 2. Let a be the arity of A
and B there. Then the new relation symbols T and S also have arity a, and R has arity a + 2.
Figure 2: DDDB program for FP^NP_∥ queries.
The program requires that a successor relation over tuples is available. The lexicographical successor relation can be
easily defined using datalog rules. Lines 1 to 3 enforce that S(-c) holds for at least one - c. Consequently,
the unique data disjunction on - c is the positive clause containing all possible S(-c) on A.
Consider lines 1 to 4 now. If the program contained only these rules, then line 4 would enforce that the
unique data disjunction on R would be the clause
d
d; max; max):
Lines 5 and 6, however, remove certain literals from this clause. In particular, it holds that A
d) iff
there is a - c on A such that A
d) iff there is a - c on A such that the (unique) data disjunction
of (-; A) on R contains the clause R(-c; -
d; min; min)-R(-c; -
d; min; max) but not R(-c; -
d; max; max).
Therefore, ' A is polynomial time computable from the unique data disjunction on R.
3: ! 1:: The algorithm DDUniqueness in the proof of Theorem 4.1 computes the unique data disjunction
in its variable /, provided it exists. It is easily modified to output # in the other case. J
The data and propositional complexity of -DD is an easy corollary to this result.
Corollary 5.2 Problem -DD has data complexity and propositional complexity FP^NP_∥.
Definition 5.1 A domain element query (DEQ) is a query whose answer relation is a singleton, i.e., a
query Q s.t. for all A it holds that
Theorem 5.3 Let Q be a fixed DEQ. Then, over ordered databases the following are equivalent:
1. Q is FP NP
2. Q(A) is a tuple of SNF witnesses to a SNF sentence.
3. There exist a program - and a relation symbol R s.t. Q(A) is definable as a projection of the unique
data disjunction over R, where the data disjunction contains at most two atoms.
4. There exist a program - and a relation symbol R s.t. Q(A) is polynomial time computable from the
unique data disjunction over R where the data disjunction contains at most two atoms.
Proof. 1: ! 2:: For constant -
c, the problem of deciding whether - c 2 -(A) is easily seen to be in
\Theta P
. Thus, there is a SNF formula '(-x) having the free variables -
x for the constants s.t. A; - c
-(A). By Theorem 4.1 we may w.l.o.g. suppose that the SNF witnesses in ' are unique.
2: ! 3:: Let ' be a SNF sentence
which has unique SNF witnesses. Then, the program for ' in part 2: ! 3: of the proof of Theorem 4.1
has a unique data disjunction of the form P (-c; -
d; max). From this data disjunction, the
projection on the -
d tuple yields the desired result.
3:
M be the polynomial time Turing Machine which computes the answer from the unique
data disjunction. Then, we use the algorithm DDUniqueness in the proof of Theorem 4.1 which computes
the unique data disjunction in its variable /. It remains to check if the data disjunction is small, and to
simulate M . By assumption, the result has logarithmic size.
The data and propositional complexity of k -DD is an immediate corollary to this result.
Corollary 5.4 The data complexity and propositional complexity of k -DD are FL NP
log [log].
At this point, the question arises whether we could not have surpassed the reduction in the proof of
2: ! 3: in Theorem 5.1, by exploiting the completeness result on k -DD. The next result tells us that
this is (presumably) not possible, and that disjunctions not bounded by a constant are needed in general
to have hardness for FP NP
Proposition 5.5 Problem -DD is metric reducible to k -DD with respect to data complexity, for some
only if FP NP
log .
Proof. !: Suppose -DD is metric reducible to k -DD. Then, Corollary 5.2 implies that k -DD
is metric complete for FP NP
k . Since FL NP
log [log] ' FP NP
log , this implies that FP NP
log ) where
denotes the closure of C under metric reductions. Clearly, - mr
log
log , and thus
log holds. Combined with FP NP
log ' FP NP
(cf.
Figure
1), it follows that FP NP
log .
/: Suppose that FP NP
. Let f be any function complete for FP NP
log (such an f exists). Then,
by hypothesis. We use the following fact: Every function f in FP NP
log is - mr -reducible to
some function g in FL NP
log [log]. Indeed, Krentel showed that his class OptP[O(log n)] satisfies FP NP
log '
- mr (OptP[O(log n)]), and that CLIQUE SIZE (cf. Section 2) is OptP[O(log n)]-complete [25]. Since
CLIQUE SIZE is clearly in FP NP
log [log], the claimed fact follows by transitivity of - mr . This
fact and Corollary 5.2, together with the hypothesis FP NP
k imply that -DD - mr f - mr g - mr
-DD. By transitivity of - mr , we obtain -DD - mr k -DD. J
5.2 Expression and combined complexity
Finally, we determine the expression complexity of computing the unique data disjunction.
Theorem 5.6 The expression and combined complexity of -DD is FPSpace NP , and the expression and
combined complexity of k -DD is FPSpace NP [pol].
The proof of the theorem uses succinct upgrade techniques for function problems whose inputs are
given in succinct circuit description. These techniques are described in detail in the following section.
6 Problems with Succinct Inputs
6.1 Previous work and methodology
A problem is succinct, if its input is not given by a string as usual, but by a Boolean circuit which computes
the bits of this string. For example, a graph can be represented by a circuit with 2n input gates, such that
on input of two binary numbers v, w of length n, the circuit outputs 1 iff there is an edge from vertex v to
vertex w. In this way, a circuit of size O(n) can represent a graph with 2^n vertices. Suppose that a graph
algorithm runs in time polynomial in the number of vertices. Then the natural algorithm on the succinctly
represented graph runs in exponential time. Similarly, upper bounds for other time and space measures
can be obtained.
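As an illustration (the example graph and all names are my own), a succinct representation can be modelled as a Python predicate on two n-bit vertex numbers; materializing it produces the exponentially larger explicit adjacency matrix.

```python
def hypercube_circuit(v, w):
    # "circuit" with 2n input bits, modelled here as a Python predicate:
    # there is an edge iff the vertex numbers differ in exactly one bit
    return bin(v ^ w).count("1") == 1

def gen(circuit, n):
    # materialize the adjacency matrix of the 2^n-vertex graph described by the circuit
    size = 2 ** n
    return [[int(circuit(v, w)) for w in range(size)] for v in range(size)]

adj = gen(hypercube_circuit, 10)   # 1024 vertices from an O(1)-size description
print(sum(map(sum, adj)) // 2)     # number of edges of the 10-dimensional hypercube: 5120
```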
The question of lower bounds for succinct problems has been studied in a series of papers about circuits
[35, 22, 31, 24, 3, 7, 44], and also about other forms of succinctness such as representation by Boolean
formulas or OBDDs [42, 43]. The first crucial step in these results is a so-called conversion lemma. It
states that reductions between ordinary problems can be lifted to reductions between succinct problems:
if A ≤_X B then s(A) ≤_Y s(B). Here, s(A) denotes the succinct version of A, and ≤_X and ≤_Y denote suitable notions of reducibility, where
≤_Y is transitive.
For the second step, an operator 'long' is introduced which is antagonistic to s in the sense that it
reduces the complexity of its arguments. For a binary language A, long(A) can be taken as the set of
strings w whose size jwj written as a binary string is in A. Contrary to s, long contains instances which
are exponentially larger than the input to A. For a complexity class C , long(C) is the set of languages
long(A) for all A 2 C . It remains to show a second lemma:
Compensation Lemma A ≤_Y s(long(A)).
Then the following theorem can be derived:
Theorem Let C_1, C_2 be complexity classes such that long(C_1) ⊆ C_2, and let A be C_2-hard under ≤_X-
reductions. Suppose that the Conversion Lemma and the Compensation Lemma hold. Then s(A) is C_1-hard under ≤_Y-
reductions.
Proof. To show C 1 -hardness, let B be an arbitrary problem in C 1 . By assumption, long(B) 2 C 2 ,
and therefore, long(B) - X A. By the Compensation Lemma, B - Y s(long(B)), and by the Conversion
Lemma, we obtain s(long(B)) - Y s(A). Since - Y is transitive, s(A) is C 1 -hard. J
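A minimal sketch of the long operator on binary languages (the decider for A is a toy of my choosing); note that the instance handed to A is only logarithmically long in w, which is what makes long antagonistic to s.

```python
def in_long(decides_A, w):
    # w is in long(A) iff |w|, written as a binary string, is in A
    return decides_A(format(len(w), "b"))

decides_A = lambda s: s.count("1") % 2 == 1   # toy language A: odd number of 1s
print(in_long(decides_A, "x" * 4))            # len 4 = '100' -> True
print(in_long(decides_A, "x" * 5))            # len 5 = '101' -> False
```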
6.2 Queries on succinct inputs
For any -structure A, let enc(A) denote the encoding of A by a binary string. The standard way to
encode A is to fix an order on the domain elements, and to concatenate the characteristic sequences of all
relations in A. 2 All Turing machine based algorithms (and in particular, all reductions) in fact work on A.
Therefore, we shall usually identify A and enc(A) without further notice. We use the further notation:
ffl enc(-) denotes the binary language of all encodings of finite -structures.
ffl char(A) is the value of the binary number obtained by concatenating a leading 1 with enc(A).
Given a binary circuit C with k input gates, gen(C) denotes the binary string of size 2 k obtained by
evaluating the circuit for all possible assignments in lexicographical order.
The idea of succinct representation is to represent enc(A) in the form gen(C). To overcome the
mismatch between the fact that the size of enc(A) can be almost arbitrary, while the size of gen(C) has
always the form 2 k , we use self-delimiting encodings:
Definition 6.1 Let . The self-delimiting encoding of w is defined as
1). For a number n, denotes the binary
representation of the number n.
Thus, from a string sd(w)v, the string w can be easily retrieved by looking for the first 1 at an even
position in the string.
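The following sketch implements one concrete encoding with exactly this retrieval property: each bit of bin(|w|) is written as a pair b0 and the length prefix is closed by the pair 11. This pairing scheme is an assumption of mine and need not coincide with the definition used in [42].

```python
def sd(w):
    # length prefix: every bit b of bin(|w|) becomes the pair "b0", terminated by "11"
    prefix = "".join(b + "0" for b in format(len(w), "b")) + "11"
    return prefix + w

def extract(s):
    # scan pairs until the first 1 at an even (1-based) position: the terminating pair
    i, length_bits = 0, ""
    while s[i + 1] != "1":
        length_bits += s[i]
        i += 2
    n = int(length_bits, 2)
    return s[i + 2 : i + 2 + n]

w = "010011"
padded = sd(w) + "0" * 10            # an arbitrary padding suffix v
print(sd(w), extract(padded) == w)   # 10100011010011 True
```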
Definition 6.2 ([42]) For a binary language L, let sd(L) denote the language { sd(w)v : w ∈ L, v ∈ {0, 1}*, |sd(w)v| is a power of 2 }.
2 The characteristic sequence of a relation is the binary string which for all tuples in lexicographical enumeration describes
membership in the relation by 1, and non-membership by 0; for graphs, this means writing down the adjacency matrix line by
line.
Thus, sd(L) is the language obtained from L by adding the length descriptor and then some dummy string
that pads its size to a power of 2.
Definition 6.3 An FPLT function f is computed by two polylogarithmic time bounded deterministic Turing
Machines N and M , such that on input x, N computes the size of the output jf(x)j, and on input x
and i, M computes the i-th bit of f(x).
A PLT reduction is a reduction computed by an FPLT function.
Modulo PLT reductions, self-delimiting encoding is equivalent to standard encoding:
Lemma 6.1 ([42]) For a nonempty binary language L, L j PLT sd(L).
In particular, this means that there exists an FPLT function extract, which extracts a word from its
self-delimiting encoding.
Definition 6.4 Let F be a query on -structures. The succinct version s(F ) of F is given by
If denote the corresponding -structure, otherwise gen - (C) denotes
some default -structure.
Using gen - , we can rephrase the definition of s(F ) as follows:
The weak reducibility needed for the antecedent of the conversion lemma is given by so-called forgetful
metric reductions; they differ from metric reductions in that the complexity of the inner function is
restricted to FPLT, and that the outer ("forgetful") function may not access the original input.
Definition 6.5 A function f is forgetfully metric reducible to a function g (in symbols, f ≤_mr^f g),
if there is an FPLT function h_1 and a polynomial time computable function h_2 such that for every x,
f(x) = h_2(g(h_1(x))).
It is not hard to see that - mr
f is transitive. The crucial observation needed to generalize the results
about succinct decision problems to succinct function problems is that the succinct representation affects
only the inner computation in the metric reduction (i.e., h 1 ), because the result of the succinct function
s(F ) is not succinct. Thus, if we are able to lift the inner reductions from ordinary instances to succinct
instances, then we can leave the outer computation (i.e., h 2 ) unchanged. This lifting is achieved by the
following lemma:
Lemma 6.2 (immediate from [44]) Let f be a FPLT function which maps -structures to oe-structures.
Then there exists an FPLT function F s.t. for all circuits C
With this background, the conversion lemma is easy to show:
Lemma 6.3 (Conversion Lemma) Let F be a query over -structures, and G be a query over oe-structures.
f s(G).
Proof. By assumption we have F We have to show that there exist an FPLT
function H 1 and a polynomial time function H 2 such that
By Lemma 6.2 there is an H 1 s.t. h 1 (gen -
Then we can set H and the lemma is proven. J
It remains to define a suitable long operator. Recall that it has to simplify the complexity of its argu-
ment. Following [42], we obtain the following definition for long on queries:
Definition 6.6 Let (R 1 ) be a signature with a single unary relation symbol, and let Q be a convex
query over signature - . Then the query long(Q) is defined as follows:
where char(A) is the value of the binary number obtained by concatenating a leading 1 with the characteristic
sequence of the tuples in A in lexicographical order.
Lemma 6.4 (Compensation Lemma) Let F be a query. Then F - mr
Proof. As in Lemma 6.3, it is sufficient to show that every input T of F can be translated into a
This was shown (using somewhat different terminology) in [42,
Lemma 6]. J
Theorem 6.5 Let F 1 be two classes of functions, such that long(F 1 . If a query F is hard for
f -reductions, then s(F ) is hard for F 1 under - mr
f -reductions.
6.3 Succinctness and expression complexity
Succinct problems and expression complexity are related by the following methods, which was used in
[23, 14] and generalized in [18]:
Suppose that a language L can express a C 2 -complete property A. Then its data complexity is trivially
. If the language is rich enough to simulate a Boolean circuit by a program of roughly the same size,
then it is possible to combine a program for A with a program for circuit simulation, thus obtaining a
program for s(A). Consequently, there is a reduction from s(A) to the expression complexity of L.
In [14], it was shown how a negation-free DDDB can simulate a Boolean circuit: Let
be a boolean circuit that decides a k-ary predicate R over f0; 1g, i.e., for any tuple
supplied to C as input, a designated output gate of C , which we assume is g t , has value 1 iff
We describe a program -C that simulates C using the universe f0; 1g. For each gate g i , -C uses a k-ary
predicate G i , where G i (-x) informally states that on input of tuple -
x to C , the circuit computation sets the
output of g i to 1. Moreover, it uses a propositional letter False , which is true in those models in which
the G i do not have the intended interpretation; none of these models will be minimal.
The clauses of -C are the following ones. For each gate of C , it contains the clause
Depending on the type a i , -C contains for additional clauses:
The clauses (00) ensure that if a model of ground(-C ; f0; 1g) contains False , it is the maximal interpretation
(which is trivially a model of -C ). In fact, this is the only model of -C that contains False. Let
MC denote the interpretation given by
takes value 1 on input
t to C g.
Lemma 6.6 ([14]) For any Boolean circuit C , MC is the unique minimal model of -C .
Theorem 6.7 Problem -DD is complete for FPSpace NP .
Proof. Define as usual a problem whose input contains a uniform circuit with constant input
gates which generates the instance to be solved. Then it is easy to see that FP NP
k contains a query
Q which is complete under - mr
f -reductions. (For example, QUERY is such a problem: the function
h 1 of any metric reduction g - mr QUERY can be shifted inside the oracle queries in polylog-time,
and the bits of the input string x provided through dummy oracle queries to h 2 .) It is not hard to see
that long(FLinSpace NP
, and therefore by Theorem 6.5, s(Q) is complete for FLinSpace NP .
By standard padding arguments, completeness for FLinSpace NP implies completeness for FPSpace NP .
Thus, it remains to reduce s(Q) to -DD.
By Lemma 6.6, a circuit C with k input gates can be converted into a disjunctive program -C whose
k-ary output relation R describes the string gen(C). Consider the query Q(extract(A)), where A is
an ordered input structure which describes a string by a unary relation. Since Q(extract(A)) is easily
seen to be in FP NP
k , Theorem 5.1 implies that there is a program - whose data disjunction describes the
result of Q(extract(A)). Since DDDBs are uniformly closed, - can be rewritten into a program - 0 whose
input relation R has arity k. As in the proof of Theorem 5.1 we can assume that there is a lexicographical
successor relation on k-tuples. By well-known modularity properties [14, Section 5], the program - 0 [-C
indeed computes Q on the succinctly specified input. J
It is not hard to show that also FL NP
log has - mr
f -complete queries (e.g., a variant of CLIQUE SIZE in
which circuits computing the functions h of a metric reduction to CLIQUE SIZE are part of the
problem instance). The following theorem is then shown analogously:
Theorem 6.8 Problem k -DD is complete for FPSpace NP [pol].
7 Further Results and Conclusion
7.1 Closed world reasoning
The results on data disjunctions that we have derived above have an immediate application to related
problems in the area of closed-world reasoning.
Reiter [36] has introduced the closed-world assumption (CWA) as a principle for inferring negative
information from a logical database. Formally,
CWA(DB) = DB ∪ { ¬A : A ∈ HB_DB and DB ⊭ A }.
For example, CWA({P(a)}) = {P(a), ¬Q(a)} if the language contains the predicates P, Q and the single constant a. It follows from results in [11] that
computing CWA(DB) has propositional complexity FP^NP_∥.
Observe that CWA(DB) may not be classically consistent (under Herbrand interpretations); for exam-
ple, CWA({P ∨ Q}) = {P ∨ Q, ¬P, ¬Q}, which has no model. As shown in [11], deciding whether
CWA(DB) is consistent is in Θ^P_2 and coNP-hard in the propositional case; the precise complexity of this
problem is open.
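The definition and the consistency issue can be illustrated on propositional databases with the following brute-force sketch (mine); clauses are positive ground clauses given as lists of atoms.

```python
from itertools import chain, combinations

def models(atoms, clauses):
    # all Herbrand models of a set of positive ground clauses, by brute force
    subsets = chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(m) for m in subsets
            if all(any(a in m for a in clause) for clause in clauses)]

def cwa_negated(atoms, clauses):
    # the atoms whose negation CWA(DB) adds: those not entailed by DB
    ms = models(atoms, clauses)
    return {a for a in atoms if not all(a in m for m in ms)}

def cwa_consistent(atoms, clauses):
    negated = cwa_negated(atoms, clauses)
    return any(m.isdisjoint(negated) for m in models(atoms, clauses))

print(cwa_consistent(["P", "Q"], [["P"]]))       # True:  CWA adds only the negation of Q
print(cwa_consistent(["P", "Q"], [["P", "Q"]]))  # False: CWA({P v Q}) has no model
```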
In a refined notion of partial CWA (cf. [16]), which is in the spirit of protected circumscription [34],
only atoms A involving a particular predicate P or, more generally, a predicate P from a list of predicates
P may be negatively concluded from DB:
PCWA(DB, P) = DB ∪ { ¬A : A = P(c̄) ∈ HB_DB for some P ∈ P and DB ⊭ A }.
For example, PCWA({P(a) ∨ P(b), Q(a)}, {Q}) = {P(a) ∨ P(b), Q(a), ¬Q(b)}.
Definition 7.1 (P-minimal model) Let P be a list of predicates. The preorder ≤_P on the models of a
DDDB DB is defined as follows: M ≤_P M' iff for every P(c̄) ∈ HB_DB such that P ∈ P it holds that
P(c̄) ∈ M implies P(c̄) ∈ M'. A model M is P-minimal for DB, if there exists no model M' such that
M' ≤_P M and not M ≤_P M'.
We remark that a P-minimal model is a special case of the notion of (P, Z)-minimal model [28], given
by an empty list of fixed predicates in a circumscription.
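A direct reading of Definition 7.1 in Python (my sketch; the list P is represented by the set of its ground atoms), applied to the two minimal models of Example 2.1 with P = {p}:

```python
def p_smaller_eq(m1, m2, p_atoms):
    # M1 <=_P M2 : the P-part of M1 is contained in the P-part of M2
    return (m1 & p_atoms) <= (m2 & p_atoms)

def p_minimal(models, p_atoms):
    # keep the models that have no strictly P-smaller model
    return [m for m in models
            if not any(p_smaller_eq(n, m, p_atoms) and not p_smaller_eq(m, n, p_atoms)
                       for n in models)]

ms = [frozenset({"p(a)", "q(a)"}), frozenset({"p(b)", "q(b)"})]
P = frozenset({"p(a)", "p(b)"})
print(p_minimal(ms, P))
# both models are P-minimal and neither is below the other: no global P-minimal model
```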
Proposition 7.1 Let DB be a DDDB and P a predicate. Then, the following statements are equivalent:
1. DB has a data disjunction on P .
2. PCWA(DB;P ) is not consistent (with respect to classical Herbrand models).
3. DB does not have a global P -minimal model M , i.e., M - P M 0 for all models M 0 of DB.
Proof. 1 → 2: Suppose δ = P(c̄_1) ∨ · · · ∨ P(c̄_n), n ≥ 2, is a data disjunction of DB. Then DB ⊭ P(c̄_i)
for all i, which implies that ¬P(c̄_1), . . . , ¬P(c̄_n) are all contained in PCWA(DB, P). Together with DB |= δ,
this means PCWA(DB, P) is not consistent.
2 → 3: Suppose DB has a global P-minimal model M. Then, for each atom P(c̄) ∈ HB_DB with P ∈ P it
holds that DB ⊭ P(c̄) iff M ⊭ P(c̄), since M has the unique smallest P-part over all models of DB.
Hence, M is a model of PCWA(DB, P), and thus PCWA(DB, P) is consistent.
3 → 1: Suppose DB has no global P-minimal model. Let M be the collection of all P-
minimal models of DB; it contains models M_1, M_2 with, w.l.o.g., M_1 ⋠_P M_2 and M_2 ⋠_P M_1. Let X be the set of all atoms
P(c̄) with P ∈ P that are contained in some model of M. Then ∨X
holds in every model of DB, which means DB contains a data
disjunction on predicate P (which contains P-atoms from at least two incomparable models of M). J
As for P-minimality, a list P of predicates can, by simple coding, be replaced with a single predicate
where the first argument in P codes
the predicate. This coding is compatible with P-minimality, i.e., P-minimal and P -minimal models
correspond as obvious. From Proposition 7.1 and the results of the previous sections, we thus obtain the
following result.
Theorem 7.2 Deciding consistency of PCWA(DB, P) and existence of some global P-minimal model of
DB have both Θ^P_2
propositional and data complexity, and PSpace^NP program and expression complexity.
By the same coding technique, the result in Theorem 7.2 holds even if the language has only two
predicates and P contains a single predicate. On the other hand, if the language has only one predicate
p, then the existence of a data disjunction on p is equivalent to the consistency of CWA(DB), whose
precise complexity is open.
7.2 Restricted data disjunctions
In [15], a stronger notion of data disjunction R-c is considered, which requests in addition that
all disjuncts R-c i are identical up to one argument of the list of constants - c i ; we call such data disjunctions
restricted. Note that all data disjunction considered in Section 1 are restricted.
For the problems reformulated to restricted data disjunctions, Table 1 in Section 1 is the same except
that the expression and combined complexity of -DD is FPSpace NP [pol]. Indeed, a restricted data
disjunction C has at most m disjuncts where is the number of constants, and thus -DD has
log n) many output bits in the combined complexity case, where n is the size of DB. The number of
maximal disjunctions md(DB;R), adapted to restricted data disjunctions, is polynomial in the data size,
and thus the same upper bounds can be easily derived as for unrestricted data disjunctions. All hardness
results are immediate from the proofs except for propositional and data complexity of -DD; here, mapping
of elements to newly introduced (polynomially many) domain elements is a suitable technique
for adapting the construction in Figure 2 in the proof of Theorem 5.1.
Finally, we remark that Lemma 2.3 remains true even if all disjunctions in the program - ' describe
restricted data disjunctions. Thus, by a slight adaptation of the programs in proofs and exploiting the fact
that disjunction-free datalog with input negation is sufficient for upgrading purposes [14], the complexity
results for restricted data disjunction remain true even if all disjunctions in DB must be restricted data
disjunctions.
7.3 Conclusion
In this paper, we have considered the complexity of some problems concerning data disjunctions in deductive
databases. To this aim, we have taken an "engineering perspective" on deriving complexity results
using tools from the domain of descriptive complexity theory, and combined them with results for upgrading
complexity results on normal to succinct representations of the problem input. In particular, we
have also investigated the complexity of actually computing data disjunctions as a function, rather than
only the associated decision problem. This led us to generalize upgrading techniques developed for decision
problems to computations of functions. These upgrading results, in particular Theorem 6.5, may be
conveniently used in other contexts.
The tools as used and provided in this paper allow for a high-level analysis of the complexity of prob-
lems, in the sense that establishing certain properties and schematic reductions are sufficient in order to
derive intricate complexity results as eg. for the case of data disjunctions in a clean and transparent way,
without the need to deal with particular problems in reductions. While this relieves us from spelling
out detailed technical constructions, the understanding of what makes the problem computationally hard
may be blurred. In particular, syntactical restrictions under which the complexity remains the same or is
lowered can not be immediately inferred. We leave such considerations for further work. Another interesting
issue for future work is the consideration of computing data disjunctions viewed as a multi-valued
function, which we have not done here.
Acknowledgment
We are grateful to Iain Stewart and Georg Gottlob for discussions and remarks.
--R
Foundations of Databases.
Sparse sets
The complexity of algorithmic problems on succinct instances.
Autoepistemic logics as a unifying framework for the semantics of logic programs.
Querying disjunctive databases through nonmonotonic logics.
Succinct circuit representations and leaf languages are basically the same concept.
Six hypotheses in search of a theorem.
The complexity of propositional closed world reasoning and circumscription.
Semantics of logic programs: Their intuitions and formal properties.
Propositional circumscription and extended closed world reasoning are
The complexity class
Normal forms for second-order logic over finite structures
Disjunctive datalog.
A tractable class of disjunctive deductive databases.
Logical Foundations of Artificial Intelligence.
Relativized logspace and generalized quantifiers over ordered finite structures.
Succinctness as a source of complexity in logical formalisms.
Complexity of query processing in databases with or-objects
Computing functions with parallel queries to NP.
The computational complexity of graph problems with succinct multigraph representation.
Why not negation by fixpoint
Vector language: Simple descriptions of hard instances.
The complexity of optimization problems.
Relativization questions about logspace computability.
Comparison of polynomial-time reducibilities
Computing circumscription.
Foundations of Logic Programming.
Foundations of Disjunctive Logic Programming.
The complexity of graph problems for succinctly represented graphs.
On indefinite data bases and the closed world assumption.
Logic and databases: A 20 year retrospective.
Computing protected circumscription.
A Note on succinct representations of graphs.
On closed-world databases
A taxonomy of complexity classes of functions.
Logical characterizations of bounded query classes I: Logspace oracle machines.
Logical characterizations of bounded query classes II: Polynomial-time oracle machines
On polynomial-time truth-reducibilities of intractable sets to p-selective sets
The complexity of relational query languages.
Languages represented by Boolean formulas.
How to encode a logical structure by an obdd.
Succinct representation
Bounded query classes.
--TR
A note on succinct representations of graphs
Logical foundations of artificial intelligence
Foundations of logic programming; (2nd extended ed.)
The complexity of optimization problems
Complexity of query processing in databases with OR-objects
The complexity of graph problems for succinctly represented graphs
Vector language: simple description of hard instances
On truth-table reducibility to SAT
Bounded query classes
Why not negation by fixpoint?
Foundations of disjunctive logic programming
Propositional circumscription and extended closed-world reasoning are \Pi^p_2-complete
The complexity of algorithmic problems on succinct instances
A taxonomy of complexity classes of functions
Computing functions with parallel queries to NP
Querying disjunctive databases through nonmonotonic logics
Succinct circuit representations and leaf language classes are basically the same concept
Disjunctive datalog
Languages represented by Boolean formulas
Succinct representation, leaf languages, and projection reductions
Foundations of Databases
The Complexity Class Theta2p
Logic and Databases
On Indefinite Databases and the Closed World Assumption
Six Hypotheses in Search of a Theorem
How to Encode a Logical Structure by an OBDD
The complexity of relational query languages (Extended Abstract) | deductive databases;conversion lemma;complexity upgrading;computational complexity;data disjunction |
606913 | Guarded fixed point logics and the monadic theory of countable trees. | Different variants of guarded logics (a powerful generalization of modal logics) are surveyed and an elementary proof for the decidability of guarded fixed point logics is presented. In a joint paper with Igor Walukiewicz, we proved that the satisfiability problems for guarded fixed point logics are decidable and complete for deterministic double exponential time (E. Grädel and I. Walukiewicz, Proc. 14th IEEE Symp. on Logic in Computer Science, 1999, pp. 45-54). That proof relies on alternating automata on trees and on a forgetful determinacy theorem for games on graphs with unbounded branching. The exposition given here emphasizes the tree model property of guarded logics: every satisfiable sentence has a model of bounded tree width. Based on the tree model property, we show that the satisfiability problem for guarded fixed point formulae can be reduced to the monadic theory of countable trees (SωS), or to the μ-calculus with backwards modalities. | Introduction
Guarded logics are defined by restricting quantification in first-order logic,
second-order logic, fixed point logics or infinitary logics in such a way that,
semantically speaking, each subformula can 'speak' only about elements that
are 'very close together' or 'guarded'.
Email address: graedel@informatik.rwth-aachen.de (Erich Grädel).
URL: www-mgi.informatik.rwth-aachen.de (Erich Grädel).
Preprint submitted to Elsevier Preprint 5 October 2000
Syntactically this means that all first-order quantifiers must be relativised by
certain 'guard formulae' that tie together all the free variables in the scope of
the quantifier. Quantification is of the form

  ∃y(α(x, y) ∧ ψ(x, y))  or  ∀y(α(x, y) → ψ(x, y)),

where quantifiers may range over a tuple y of variables, but are 'guarded' by
a formula α that must contain all the free variables of the formula ψ that is
quantified over. The guard formulae are of a simple syntactic form (in the basic
version, they are just atoms). Depending on the conditions imposed on guard
formulae, one has logics with different levels of 'closeness' or 'guardedness'.
Again, there is a syntactic and a semantic view of such guard conditions.
Let us start with the logic GF, the guarded fragment of first-order logic, as it
was introduced by Andréka, van Benthem, and Németi [1].

Definition 1.1. GF is defined inductively as follows:
(1) Every relational atomic formula Rx_{i_1} · · · x_{i_m} or equality x_i = x_j belongs to GF.
(2) GF is closed under boolean operations.
(3) If x, y are tuples of variables, α(x, y) is a positive atomic formula, and
ψ(x, y) is a formula in GF such that free(ψ) ⊆ free(α) = x ∪ y, then
also the formulae

  ∃y(α(x, y) ∧ ψ(x, y))  and  ∀y(α(x, y) → ψ(x, y))

belong to GF.

Here free(ψ) means the set of free variables of ψ. An atom α(x, y) that rel-
ativizes a quantifier as in rule (3) is the guard of the quantifier. Hence in
GF, guards must be atoms. But the really crucial property of guards (also for
the more powerful guarded logics that we will consider below) is that a guard must
contain all free variables of the formula that is quantified over.
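For readers who want to experiment with the syntactic guard condition, the following sketch (a
hypothetical illustration added here, not part of the original text) checks the crucial requirement
of rule (3): the guard must be an atom containing every free variable of the quantified formula.
The tuple-based formula representation is an assumption made only for this example.

```python
# Minimal sketch (assumed formula representation): an atom is ("atom", R, vars),
# boolean formulas are ("and"/"or", f, g) or ("not", f), and guarded
# quantification is ("exists"/"forall", ys, guard_atom, body).

def free_vars(phi):
    """Free variables of a formula in the toy representation above."""
    tag = phi[0]
    if tag == "atom":
        return set(phi[2])
    if tag in ("and", "or"):
        return free_vars(phi[1]) | free_vars(phi[2])
    if tag == "not":
        return free_vars(phi[1])
    if tag in ("exists", "forall"):
        _, ys, guard, body = phi
        return (free_vars(guard) | free_vars(body)) - set(ys)
    raise ValueError(f"unknown tag {tag}")

def is_guarded(phi):
    """Check the GF condition: every quantifier has an atomic guard whose
    variables include all free variables of the quantified formula."""
    tag = phi[0]
    if tag == "atom":
        return True
    if tag in ("and", "or"):
        return is_guarded(phi[1]) and is_guarded(phi[2])
    if tag == "not":
        return is_guarded(phi[1])
    if tag in ("exists", "forall"):
        _, ys, guard, body = phi
        return (guard[0] == "atom"
                and free_vars(body) <= free_vars(guard)
                and is_guarded(body))
    raise ValueError(f"unknown tag {tag}")

# Example: exists y (Exy guarding Py) is guarded.
print(is_guarded(("exists", ["y"], ("atom", "E", ["x", "y"]),
                  ("atom", "P", ["y"]))))   # True
```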
The main motivation for introducing the guarded fragment was to explain
and generalize the good algorithmic and model-theoretic properties of propositional
modal logics (see [1,26]). Recall that the basic (poly)modal logic ML
(also called propositional multi-modal logic) extends propositional logic by the possibility to construct formulae
⟨a⟩ψ and [a]ψ (for any a from a given set A of 'actions' or 'modalities')
with the meaning that ψ holds at some, respectively each, a-successor of the
current state. (We refer to [4] or [22] for background on modal logic.)
Although ML is formally a propositional logic we really view it as a fragment
of first-order logic. Kripke structures, which provide the semantics for modal
logics, are just relational structures with only unary and binary relations.
There is a standard translation taking every formula ψ ∈ ML to a first-order
formula ψ*(x) with one free variable, such that for every Kripke structure
K with a distinguished node w we have that K, w ⊨ ψ if and only if K ⊨
ψ*(w). This translation takes an atomic proposition P to the atom Px, it
commutes with the Boolean connectives, and it translates the modal operators
by quantifiers as follows:

  (⟨a⟩ψ)*(x) = ∃y(E_a xy ∧ ψ*(y)),   ([a]ψ)*(x) = ∀y(E_a xy → ψ*(y)),

where E_a is the transition relation associated with the modality a. The modal
fragment of first-order logic is the image of propositional modal logic under
this translation. Clearly the translation of modal logic into first-order logic uses
only guarded quantification, so we see immediately that the modal fragment
is contained in GF. The guarded fragment generalizes the modal fragment
by dropping the restrictions to use only two variables and only monadic and
binary predicates, and retains only the restriction that quantifiers must be
guarded.
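The standard translation is purely syntax-directed, which the following sketch makes explicit
(again an illustrative assumption about how modal formulas are represented, not code from the
paper); it produces first-order formulas as strings and uses only guarded quantification.

```python
import itertools

_fresh = itertools.count()

def std_translation(phi, x="x"):
    """Standard translation of a modal formula into (guarded) first-order logic.
    Modal formulas: ("prop", P), ("not", f), ("and", f, g), ("dia", a, f), ("box", a, f)."""
    tag = phi[0]
    if tag == "prop":
        return f"{phi[1]}({x})"
    if tag == "not":
        return f"~({std_translation(phi[1], x)})"
    if tag == "and":
        return f"({std_translation(phi[1], x)} & {std_translation(phi[2], x)})"
    y = f"y{next(_fresh)}"
    if tag == "dia":   # <a> psi  ->  exists y (E_a(x,y) & psi*(y))
        return f"exists {y} (E_{phi[1]}({x},{y}) & {std_translation(phi[2], y)})"
    if tag == "box":   # [a] psi  ->  forall y (E_a(x,y) -> psi*(y))
        return f"forall {y} (E_{phi[1]}({x},{y}) -> {std_translation(phi[2], y)})"
    raise ValueError(tag)

# <a>(P & [b]Q) becomes a guarded first-order formula with free variable x.
print(std_translation(("dia", "a", ("and", ("prop", "P"), ("box", "b", ("prop", "Q"))))))
```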
The following properties of GF have been demonstrated [1,10]:
(1) The satisfiability problem for GF is decidable.
(2) GF has the finite model property, i.e., every satisfiable formula in the
guarded fragment has a finite model.
(3) GF has (a generalized variant of) the tree model property.
(4) Many important model theoretic properties which hold for first-order
logic and modal logic, but not, say, for the bounded-variable fragments,
do hold also for the guarded fragment.
(5) The notion of equivalence under guarded formulae can be characterized
by a straightforward generalization of bisimulation.
Further work on the guarded fragment can be found in [7-9,17]. Based on this
kind of results Andréka, van Benthem, and Németi put forward the 'thesis'
that it is the guarded nature of quantification that is the main reason for the
good model-theoretic and algorithmic properties of modal logics.
Let us discuss to what extent this explanation is adequate. One way to address
this question is to look at the complexity of GF. We have shown in
[10] that the satisfiability problem for GF is complete for 2Exptime, the
class of problems solvable by a deterministic algorithm in time 2^{2^{p(n)}}, for some
polynomial p(n). This seems very bad, in particular if we compare it to the
well-known fact that the satisfiability problem for propositional modal logic
is in Pspace [19]. But dismissing the explanation of Andréka, van Benthem,
and Németi on these grounds would be too superficial. Indeed, the reason for
the double exponential time complexity of GF is just the fact that predicates
may have unbounded arity (whereas ML only expresses properties of graphs).
Given that even a single predicate of arity n over a domain of just two elements
leads to 2^{2^n} possible types already on the atomic level, the double exponential
lower complexity bound is hardly a surprise. Further, in most of the potential
applications of guarded logics the arity of the relation symbols is bounded.
But for GF-sentences of bounded arity, the satisfiability problem can be decided
in Exptime [10], which is a complexity level that is reached already for
rather weak extensions of ML (e.g. by a universal modality) [25]. Thus, the
complexity analysis does not really provide a decisive answer to our question.
To approach the question from a different angle, let us look at extensions of
ML. Indeed ML is a very weak logic and the really interesting modal logics
extend ML by features like path quantification, temporal operators, least and
greatest fixed points etc. which are of crucial importance for most computer
science applications. It has turned out that many of these extended modal logics
are algorithmically still manageable and actually of considerable practical
importance. The most important of these extensions is the modal μ-calculus
L_μ, which extends ML by least and greatest fixed points and subsumes most of
the modal logics used for automatic verification including CTL, LTL, CTL*,
PDL, and also many description logics. The satisfiability problem for L_μ is
known to be decidable and complete for Exptime [5]. Therefore, a good test
for the explanation put forward by Andréka, van Benthem, and Németi is the
following problem:
If we extend GF by least and greatest fixed points, do we still get a decidable
logic? If yes, what is its complexity? To put it differently, what is the penalty,
in terms of complexity, that we pay for adding fixed points to the guarded
fragment?
In [13] we were able to give a positive answer to this question. The model-theoretic
and algorithmic methods that are available for the μ-calculus on
one side, and the guarded fragments of first-order logic on the other side, can
indeed be combined and generalized to provide positive results for guarded
fixed point logics. (Precise definitions for these logics will be given in the next
section.) In fact we could establish precise complexity bounds.

Theorem 1.2 (Grädel, Walukiewicz). The satisfiability problems for
guarded fixed point logics are decidable and 2Exptime-complete. For guarded
fixed point sentences of bounded width the satisfiability problem is Exptime-
complete.

By the width of a formula ψ, we mean the maximal number of free variables
in the subformulae of ψ. For sentences that are guarded in the sense of GF,
the width is bounded by the maximal arity of the relation symbols, but there
are other variants of guarded logics where the width may be larger. Note that
for guarded fixed point sentences of bounded width the complexity level is the
same as for the μ-calculus and for GF without fixed points.
The proof that we give in [13] relies on alternating two-way tree automata
(on trees of unbounded branching), on a forgetful determinacy theorem for
parity games, and on a notion of tableaux for guarded fixed point sentences,
which can be viewed as tree representations of structures. We associate with
every guarded fixed point sentence ψ an alternating tree automaton A_ψ that
accepts precisely the tableaux that represent models for ψ. This reduces the
satisfiability problem for guarded fixed point logic to the emptiness problem
for alternating two-way tree automata.
In this paper we discuss other variants of guarded logics, with more liberal notions
of guarded quantification, and explain alternative possibilities to design
decision procedures for guarded fixed point logics. Already in [3], van Benthem
had proposed loosely guarded quantification (also called pairwise guarded
quantification) as a more general way of restricting quantifiers, and proved
that also LGF, the loosely guarded fragment of first-order logic, remains de-
cidable. Here we motivate and introduce clique-guarded quantification, which
is even more liberal than loosely guarded quantification, but retains the same
decidability properties.
The techniques for establishing decidability results for guarded fixed point
logics that we explain in this paper exploit a crucial property of such logics,
namely the (generalized variant of the) tree model property, saying that every
satisfiable sentence of width k has a model of tree width at most k − 1. The tree
width of a structure is a notion coming from graph theory which measures how
closely the structure resembles a tree. Informally a structure has tree width
k, if it can be covered by (possibly overlapping) substructures of size at
most k + 1 which are arranged in a tree-like manner. The tree model property
for guarded logics is a consequence of their invariance under a suitable notion
of bisimulation, called guarded bisimulation.
Guarded bisimulations play a fundamental role for characterizing the expressive
power of guarded logics, in the same way as usual bisimulations are crucial
for understanding modal logics. For instance, the characterization theorem by
van Benthem [2], saying that a property is definable in propositional modal
logic if and only if it is first-order definable and invariant under bisimulation,
has a natural analogue for the guarded fragment: GF can define precisely the
model classes that are first-order definable and invariant under guarded bisimulation
[1]. We will explain and prove a similar result for the clique-guarded
fragment in Section 3. There is a similar and highly non-trivial characterisation
theorem for the modal μ-calculus, due to Janin and Walukiewicz [18],
saying that the properties definable in the modal μ-calculus are precisely the
properties that are definable in monadic second-order logic and invariant under
bisimulation. And, as shown recently by Grädel, Hirsch, and Otto [12],
this result also carries over to the guarded world. Indeed, there is a natural
fragment of second-order logic, called GSO, which is between monadic second-order
logic and full second-order logic, such that guarded fixed point logic is
precisely the bisimulation-invariant portion of GSO.
Outline of the paper. In Sect. 2 we discuss different variants of guarded
logics. In particular we introduce the notion of clique-guarded quantification
and present the precise definitions and elementary properties of guarded fixed
point logics. In Sect. 3 we explain the notions of guarded bisimulations, of
tree width and of the unraveling of a structure. We prove a characterization
theorem for the clique-guarded fragment and discuss the tree model property
of guarded logics. Based on the tree model property we will present in Sect. 4.2
a simpler decidability proof for guarded fixed point logic that replaces the
automata theoretic machinery used in [13] by an interpretation argument into
the monadic second-order theory of countable trees (SωS) which by Rabin's
famous result [23] is known to be decidable. We then show in Sect. 4.3 that
instead of using SωS, one can also reduce guarded fixed point logic to the
μ-calculus with backwards modalities which has recently been proved to be
decidable (in Exptime, actually) by Vardi [27].
We remark that this paper is to a considerable extent expository. The new
results, mostly concerning clique-guarded logics, can also be derived using
the automata theoretic techniques of [13]. However, it is worthwhile to make
the role of guarded bisimulations explicit and to show how one can establish
decidability results for guarded fixed point logics via reductions to well-known
formalisms such as SωS or the μ-calculus. Even if the automata theoretic
method gives more efficient algorithms, the reduction technique provides a
simple and high-level method for proving decidability, avoiding any explicit
use of automata-theoretic machinery (the use of automata is hidden in the
decision algorithms for SωS or the μ-calculus). For the convenience of the
reader, we have included explicit proofs of some facts that are known (or are
straightforward variations of known results) but where the proofs are hard
to find.
2 Guarded logics

There are several ways to define more general guarded logics than GF. On one
side, we can consider other notions of guardedness, and on the other side we
can look at guarded fragments of more powerful logics than first-order logic.
We first consider other guardedness conditions.

Loosely guarded quantification. The direct translation of temporal formulae,
say (ψ until φ) over the temporal frame (N, <), into first-order logic
is

  ∃y(x ≤ y ∧ φ(y) ∧ ∀z(x ≤ z ∧ z < y → ψ(z))),

which is not guarded in the sense of Definition 1.1. However, the quantifier ∀z
in this formula is guarded in a weaker sense, which led van Benthem [3] to
the following generalization of GF.

Definition 2.1. The loosely guarded fragment LGF is defined in the same
way as GF, but the quantifier-rule is relaxed as follows:
(3)' If ψ(x, y) is in LGF, and α(x, y) = α_1 ∧ · · · ∧ α_n is a conjunction of
atoms, then

  ∃y(α(x, y) ∧ ψ(x, y))  and  ∀y(α(x, y) → ψ(x, y))

belong to LGF, provided that free(ψ) ⊆ free(α) = x ∪ y and for any two
variables z ∈ y, z' ∈ x ∪ y there is at least one atom α_j that contains both
z and z'.

In the translation of (ψ until φ) described above, the quantifier ∀z is loosely
guarded by (x ≤ z ∧ z < y) since z coexists with both x and y in some conjunct
of the guard. On the other side, the transitivity axiom ∀xyz(Exy ∧ Eyz →
Exz) is not in LGF. The conjunction Exy ∧ Eyz is not a proper guard of
∀xyz since x and z do not coexist in any conjunct. Indeed, it has been shown
in [10] that there is no way to express transitivity in LGF.
Clique-guarded quantification. In this paper we introduce a new, even
more liberal, variant of guarded quantification, which leads to what we may
call clique-guarded logics. To motivate this notion, let us look at the semantic
meaning of guardedness.

Definition 2.2. Let B be a structure with universe B and vocabulary τ. A
set X ⊆ B is guarded in B if there exists an atomic formula α(x_1, . . . , x_m) and
elements b_1, . . . , b_m such that X = {b_1, . . . , b_m} and B ⊨ α(b_1, . . . , b_m).
Note that every singleton set {b} is guarded (by the atom b = b). A tuple
(b_1, . . . , b_n) is guarded if {b_1, . . . , b_n} ⊆ X for some guarded set X.

Clearly, sentences of GF can refer only to guarded tuples. Consider now an
LGF-sentence whose first quantifier binds y, z with an atomic guard, and whose
second quantifier binds u, v with a guard that is a conjunction of two atoms.
While the first quantifier is guarded even in the sense of GF, the second one
is only loosely guarded: the quantified variables u, v coexist in an atom of the
guard (in fact in both of them) and they also coexist with each of the other
variables y, z in one of the atoms. The subformula φ can hence talk about
quadruples (y, z, u, v) in a structure that are not guarded in the sense of the
definition just given, but only in a weaker sense.
The corresponding semantic definition of a loosely guarded set in a structure
B is inductive.

Definition 2.3. A set X is loosely guarded in the structure B if it either is
a guarded set, or if there exists a loosely guarded set X' such that for every
a ∈ X \ X' and every b ∈ X there is a guarded set Y with {a, b} ⊆ Y ⊆ X.

For instance, in the structure with universe {a, b, c, d, e}
and a ternary relation consisting of the triples (a, b, c), (b, d, e), (c, d, e), the set
{b, c, d, e} is loosely guarded. Note that the elements of a loosely guarded set
need not coexist in a single atom of the structure, but they are all 'adjacent'
in the sense of the locality graph or Gaifman graph of a structure.

Definition 2.4. The Gaifman graph of a relational structure B (with universe
B) is the undirected graph G(B) = (B, E) where
E = {(a, a') : a ≠ a' and there exists a guarded set X ⊆ B with a, a' ∈ X}.
A set X of elements of a structure B is clique-guarded in B if it induces a
clique in G(B). A tuple (b_1, . . . , b_n) is clique-guarded if its components
form a clique-guarded set.
Lemma 2.5. Every loosely guarded set is also clique-guarded.

Proof. We proceed by induction on the definition of a loosely guarded set. If
X is guarded, then it is obviously also clique-guarded. Otherwise there exists
a loosely guarded set X' and, for every a ∈ X \ X' and b ∈ X, a guarded set
containing both a and b. Hence all such a and b are connected in G(B). It
remains to consider elements a, b that are both contained in X ∩ X'. In that
case, a and b are connected in G(B) because, by induction hypothesis, X'
induces a clique in G(B).
The converse is not true, as the following example shows. Consider a structure
A with elements a_1, a_2, a_3 (and further auxiliary elements) and one ternary relation
R containing, for each pair among a_1, a_2, a_3, a triangle (a_i, a_j, ·) whose third
component occurs in no other triple. The set {a_1, a_2, a_3}
is neither guarded nor loosely guarded, but induces a clique in G(A).
Note that for each finite vocabulary τ and each k ∈ N, there is a positive, existential
first-order formula clique(x_1, . . . , x_k) such that for all τ-structures
B and all b_1, . . . , b_k ∈ B we have B ⊨ clique(b_1, . . . , b_k) if and only if b_1, . . . , b_k
induce a clique in G(B).
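To make Definition 2.4 and the clique-guard condition concrete, here is a small illustrative sketch
(an assumed finite representation of a structure as a list of atomic facts; not from the paper)
that computes the Gaifman graph and tests clique-guardedness; it reproduces the loosely guarded
example with the triples (a, b, c), (b, d, e), (c, d, e).

```python
from itertools import combinations

def gaifman_edges(facts):
    """facts: iterable of tuples of elements, one per atomic fact R(b1,...,bk).
    Returns the edge set of the Gaifman graph: pairs that coexist in some fact."""
    edges = set()
    for fact in facts:
        for a, b in combinations(set(fact), 2):
            edges.add(frozenset((a, b)))
    return edges

def is_clique_guarded(tup, facts):
    """A tuple is clique-guarded iff its components induce a clique in G(B)."""
    edges = gaifman_edges(facts)
    return all(frozenset((a, b)) in edges for a, b in combinations(set(tup), 2))

facts = [("a", "b", "c"), ("b", "d", "e"), ("c", "d", "e")]
print(is_clique_guarded(("b", "c", "d", "e"), facts))   # True
print(is_clique_guarded(("a", "d", "e"), facts))        # False: a, d never coexist
```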
Definition 2.6. The clique-guarded fragment CGF of first-order logic is defined
in the same way as GF and LGF, but with the clique-formulae as guards.
Hence, the quantification rule for CGF is
(3)'' If ψ(x, y) is a formula in CGF, then

  ∃y(clique(x, y) ∧ ψ(x, y))  and  ∀y(clique(x, y) → ψ(x, y))

belong to CGF, provided that free(ψ) ⊆ x ∪ y.
Note that quantifiers over tuples are in principle no longer needed in CGF
(contrary to GF and LGF), since they can be written as sequences of clique-
guarded quantifiers over single variables.
Alternative definitions of CGF. In practice, one will of course not write
down the clique-formulae explicitly. One possibility is not to write them down
at all, i.e., to take the usual (unguarded) first-order syntax and to change
the semantics of quantifiers so that only clique-guarded tuples are considered.
More precisely: let φ(x, y) be a first-order formula, B a structure and a be
a clique-guarded tuple of elements of B. Then B ⊨ ∃y φ(a, y) if and only if there exists
an element b such that (a, b) is clique-guarded and B ⊨ φ(a, b), and similarly
for universal quantifiers. It is easy to see that, for finite vocabularies, this
semantic definition of CGF is equivalent to the one given above.
An alternative possibility is to permit as guards any existential positive formula
α(x) that implies clique(x). This is what Maarten Marx [20,16] uses in
his definition of the packed fragment PF. The differences between the clique-
guarded fragment and the packed fragment are purely syntactical. PF and
CGF have the same expressive power. The work of Maarten Marx and ours
has been done independently.
Every LGF-sentence is equivalent to a CGF-sentence. (The analogous statement
for formulae is only true if we impose that the free variables must be
interpreted by loosely guarded tuples. However, in this paper, we restrict attention
to sentences.) We observe that CGF has strictly more expressive power
than LGF.

Proposition 2.7. The CGF-sentence ∀xyz(clique(x, y, z) → Rxyz) is not
equivalent to any sentence in LGF.

The good algorithmic and model-theoretic properties of GF go through also
for LGF and CGF. In most cases, in particular for decidability, for the characterization
via an appropriate notion of guarded bisimulation and for the
tree model property, the proofs for GF extend without major difficulties. An
exception is the finite model property which, for GF, has been established in
[10], and where the extension to LGF and CGF, recently established by Ian
Hodkinson [15], requires considerable effort.

Notation. We use the notation (∃y . α)ψ and (∀y . α)ψ,
i.e., we write guarded formulae in the form (∃y . α(x, y))ψ(x, y) and (∀y . α(x, y))ψ(x, y).
When this notation is used, then it is always understood that α is indeed a
proper guard as specified by condition (3), (3)', or (3)''.
Guarded fixed point logics. We now define guarded fixed point logics,
which can be seen as the natural common extensions of GF, LGF and CGF
on one side, and the μ-calculus on the other side.

Definition 2.8. The guarded fixed point logics μGF, μLGF, and μCGF are
obtained by adding to GF, LGF, and CGF, respectively, the following rules
for constructing fixed point formulae:
Let W be a k-ary relation variable, x = x_1, . . . , x_k a k-tuple of distinct vari-
ables, and ψ(W, x) be a guarded formula that contains only positive occurrences
of W, no free first-order variables other than x_1, . . . , x_k, and where W
is not used in guards. Then we can build the formulae

  [LFP Wx : ψ(W, x)](x)  and  [GFP Wx : ψ(W, x)](x).

The semantics of the fixed point formulae is the usual one: Given a structure
A providing interpretations for all free second-order variables in ψ, except W,
the formula ψ(W, x) defines an operator on k-ary relations W ⊆ A^k, namely

  ψ^A : W ↦ {a ∈ A^k : A ⊨ ψ(W, a)}.

Since W occurs only positively in ψ, this operator is monotone (i.e., W ⊆ W'
implies ψ^A(W) ⊆ ψ^A(W')) and therefore has a least fixed point LFP(ψ^A)
and a greatest fixed point GFP(ψ^A). Now, the semantics of least fixed point
formulae is defined by

  A ⊨ [LFP Wx : ψ](a)  iff  a ∈ LFP(ψ^A),

and similarly for the greatest fixed points.

Least and greatest fixed points can be defined inductively. For a formula ψ(W, x)
with k-ary relation variable W, a structure A, and ordinals α, set

  W^0 := ∅,  W^{α+1} := ψ^A(W^α),  W^λ := ∪_{α<λ} W^α for limit ordinals λ,
  ~W^0 := A^k,  ~W^{α+1} := ψ^A(~W^α),  ~W^λ := ∩_{α<λ} ~W^α for limit ordinals λ.

The relations W^α (resp. ~W^α) are called the stages of the LFP-induction (resp.
GFP-induction) of ψ(W, x) on A. Since the operator ψ^A is monotone, we have
W^α ⊆ W^{α+1} and ~W^α ⊇ ~W^{α+1}, and there exist ordinals γ, γ' such that
W^γ = LFP(ψ^A) and ~W^{γ'} = GFP(ψ^A). These are called the closure ordinals of the LFP-induction
resp. GFP-induction of ψ(W, x) on A.
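On finite structures the LFP-induction terminates after finitely many stages, which the following
generic sketch illustrates (the monotone operator is assumed to be given as a Python function on
finite sets of tuples; the well-foundedness example corresponds to the guarded formula used in the
proof of Proposition 2.9 below).

```python
def lfp(operator):
    """Least fixed point of a monotone operator on finite sets, computed by
    iterating the stages W^0 = {}, W^{i+1} = operator(W^i) until they stabilize."""
    stage = frozenset()
    while True:
        nxt = frozenset(operator(stage))
        if nxt == stage:
            return stage
        stage = nxt

# Example: the guarded formula [LFP Wx : (forall y . Fyx) Wy](x) defines the
# well-founded part of F.  With a cycle 1 -> 2 -> 3 -> 1 and a chain 4 -> 5,
# only 4 and 5 are well-founded.
universe = {1, 2, 3, 4, 5}
F = {(1, 2), (2, 3), (3, 1), (4, 5)}
def psi(W):
    return {x for x in universe if all(y in W for (y, z) in F if z == x)}

print(sorted(lfp(psi)))   # [4, 5]
```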
Finite and countable models. Contrary to GF, LGF, CGF, and also to
the modal μ-calculus, guarded fixed point logics do not have the finite model
property [13].

Proposition 2.9. Guarded fixed point logic μGF (even with only two vari-
ables, without nested fixed points and without equality) contains infinity axioms.

Proof. Consider the conjunction of the formulae

  ∃xy Fxy,   ∀xy(Fxy → ∃z Fyz),   ∀xy(Fxy → [LFP Wx : ∀y(Fyx → Wy)](x)).

The first two formulae say that a model should contain an infinite F-path
and the third formula says that F is well-founded, thus, in particular, acyclic.
Therefore every model of the three formulae is infinite. On the other side, the
formulae are clearly satisfiable, for instance by (ω, <).

While the finite model property fails for guarded fixed point logics we recall,
for future use, that the Löwenheim-Skolem property is true even for the (un-
guarded) least fixed point logic: every satisfiable fixed point
sentence has a countable model. This result is part of the folklore on fixed
point logic, but it is hard to find a published proof. Our exposition follows the
one in [6].

Theorem 2.10. Every satisfiable sentence in least fixed point logic, hence every
satisfiable sentence in μGF, μLGF, or μCGF, has a model of countable
cardinality.
Proof. Consider a fixed point formula of the form ψ(x) := [LFP Rx : φ(R, x)](x),
with first-order φ, such that A ⊨ ψ(a) for some infinite model A.
For any ordinal α, let R^α be stage α of the least fixed point induction of φ on A.

Expand A by a monadic relation U, a binary relation <, and an (m + 1)-ary
relation S (where m is the arity of R) such that
(1) (U, <) is a well-ordering of length γ + 1, where γ is the closure ordinal of
the induction, and < is empty outside U.
(2) S describes the stages of φ^A in the following way:

  S = {(u, b) : u ∈ U, u is the α-th element of (U, <), and b ∈ R^α}.

In the expanded structure A* := (A, U, <, S) the stages of the operator φ^A are
defined by the sentence

  χ := ∀u∀x(Ux → (Sux ↔ φ[Ry/∃z(z < u ∧ Szy)](x))).

Here φ[Ry/∃z(z < u ∧ Szy)] is the formula obtained from φ(R, x) by
replacing all occurrences of subformulae Ry by ∃z(z < u ∧ Szy). Let B* be a
countable elementary substructure of A*,
containing the tuple a. Since A* ⊨ χ, also B* ⊨ χ, so S correctly describes
the stages of φ^B. Since also B* ⊨ ∃u Sua, it follows that a is contained in the
least fixed point of φ^B, i.e., B ⊨ ψ(a).

A straightforward iteration of this argument gives the desired result for arbitrary
nestings of fixed point operators, and hence for the entire fixed point logic.
Guarded infinitary logics. It is well known that fixed point logics have a
close relationship to infinitary logics (with bounded number of variables).

Definition 2.11. GF∞, LGF∞, and CGF∞ are the infinitary variants of the
guarded fragments GF, LGF, and CGF, respectively. For instance GF∞ extends
GF by the following rule for building new formulae: If Φ ⊆ GF∞ is any
set of formulae, then also ∨Φ and ∧Φ are formulae of GF∞. The definitions
for LGF∞ and CGF∞ are analogous.

In the sequel we explicitly talk about the clique-guarded case only, i.e., about
μCGF and CGF∞, but all results apply to the guarded and loosely guarded
case as well. The following simple observation relates μCGF with CGF∞.

Proposition 2.12. For each ψ ∈ μCGF of width k and each cardinal κ, there
is a ψ' ∈ CGF∞, also of width k, which is equivalent to ψ on all structures up
to cardinality κ.

Proof. Consider a fixed point formula [LFP Rx : φ(R, x)](x). For every ordinal
α, there is a formula φ^α(x) ∈ CGF∞ that defines the stage α of the
induction of φ. Indeed, let φ^0(x) := false, let φ^{α+1}(x) := φ[Ry/φ^α(y)](x),
that is, the formula that one obtains from φ(R, x) if one replaces each atom
Ry (for any y) by the formula φ^α(y), and for limit ordinals λ, let φ^λ(x) :=
∨_{α<λ} φ^α(x). On structures of bounded cardinality, also the closure ordinal of
any fixed-point formula is bounded. Hence for every cardinal κ there exists an
ordinal α such that [LFP Rx : φ(R, x)](x) is equivalent to φ^α(x) on structures
of cardinality at most κ.

Remark. Without the restriction on the cardinality of the structures, this
result fails. Indeed there are very simple fixed point formulae, even in the
modal μ-calculus (such as well-foundedness axioms), that are not equivalent
to any formulae of the full infinitary logic L_{∞ω}.
3 Guarded bisimulation and the tree model property

Tree width is an important notion in graph theory. Many difficult or undecidable
computational problems on graphs become easy on graphs of bounded
tree width. The tree width of a structure measures how closely it resembles a
tree. Informally, a structure has tree width k, if it can be covered by (pos-
sibly overlapping) substructures of size at most k + 1 which are arranged in a
tree-like manner. For instance, trees and forests have tree width 1, cycles have
tree width 2, and the n × n-grid has tree width n. Here we need the notion
of tree width for arbitrary relational structures. For readers who are familiar
with the notion of tree width in graph theory we can simply say that the tree
width of a structure is the tree width of its Gaifman graph. Here is a more
detailed definition.

Definition 3.1. A structure B (with universe B) has tree width k if k is
the minimal natural number satisfying the following condition. There exists a
directed tree T = (V, E) and a function F
assigning to every node v of T a set F(v) of at most k + 1 elements of B, such
that the following two conditions hold.
(i) For every guarded set X in B there exists a node v of T with X ⊆ F(v).
(ii) For every element b of B, the set of nodes {v : b ∈ F(v)} is
connected (and hence induces a subtree of T).
For each node v of T, F(v) induces a substructure F(v) ⊆ B of cardinality at
most k + 1. (Since F(v) may be empty, we also permit empty substructures.)
⟨T, (F(v))_{v∈T}⟩ is called a tree decomposition of width k of B.

Remark. A more concise, but equivalent, formulation of clause (i) would be
that B = ∪_{v∈T} F(v).
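For finite inputs, the two conditions of Definition 3.1 are directly checkable; the sketch below is
an assumed illustration (with a hypothetical encoding of the tree as adjacency lists and of the
bags F(v) as sets), not part of the paper.

```python
from collections import deque

def is_tree_decomposition(tree_adj, bags, facts, width):
    """Check conditions (i) and (ii) of Definition 3.1 for finite data.
    tree_adj: dict node -> list of neighbouring tree nodes,
    bags: dict node -> set F(v) of structure elements, facts: list of atomic facts."""
    if any(len(bag) > width + 1 for bag in bags.values()):
        return False
    # (i) every guarded set (= element set of an atomic fact) fits into one bag
    for fact in facts:
        if not any(set(fact) <= bag for bag in bags.values()):
            return False
    # (ii) for every element, the nodes whose bag contains it induce a subtree
    for b in set().union(*bags.values()):
        nodes = {v for v, bag in bags.items() if b in bag}
        start = next(iter(nodes))
        seen, queue = {start}, deque([start])
        while queue:
            v = queue.popleft()
            for w in tree_adj[v]:
                if w in nodes and w not in seen:
                    seen.add(w)
                    queue.append(w)
        if seen != nodes:
            return False
    return True

# A 4-cycle has tree width 2: bags {a,b,d} and {b,c,d} over a two-node tree.
tree_adj = {0: [1], 1: [0]}
bags = {0: {"a", "b", "d"}, 1: {"b", "c", "d"}}
facts = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(is_tree_decomposition(tree_adj, bags, facts, width=2))   # True
```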
By definition, every guarded set X ⊆ B is contained in some F(v). A simple
graph theoretic argument shows that the same is true for loosely guarded and
clique-guarded sets.

Lemma 3.2. Let ⟨T, (F(v))_{v∈T}⟩ be a tree decomposition of B and X ⊆ B be a
clique-guarded set in B. Then there exists a node v of T such that X ⊆ F(v).

Proof. For each b ∈ X, let V_b be the set of nodes v such that b ∈ F(v). By
the definition of a tree decomposition, each V_b induces a subtree of T. For all
b, b' ∈ X, the intersection V_b ∩ V_{b'} is non-empty, since b and b' are adjacent
in G(B) and must therefore coexist in some atomic fact that is true in B. It
is known that any collection of pairwise overlapping subtrees of a tree has a
common node (see e.g. [24, p. 94]). Hence there is a node v of T such that
F(v) contains all elements of X.
Guarded bisimulations. The notion of bisimulation from modal logic generalises
in a straightforward way to various notions of guarded bisimulation
that describe indistinguishability in guarded logics. We focus here on clique-
bisimulations, the appropriate notion for clique-guarded formulae. The notions
of guarded or loosely guarded bisimulations can be defined analogously.

Definition 3.3. A clique-k-bisimulation between two τ-structures A and B
is a non-empty set I of finite partial isomorphisms f : X → Y from A to B,
where X ⊆ A and Y ⊆ B are clique-guarded sets of size at most k, such that
the following back and forth conditions are satisfied. For every f : X → Y in I,

forth: for every clique-guarded set X' ⊆ A of size at most k there exists a
partial isomorphism g : X' → Y' in I such that f and g agree on X ∩ X'.
back: for every clique-guarded set Y' ⊆ B of size at most k there exists a
partial isomorphism g : X' → Y' in I such that f^{-1} and g^{-1} agree on Y ∩ Y'.

Clique-bisimulations are defined in the same way, without restriction on the
size of X, Y, X' and Y'. Two τ-structures A and B are clique-(k-)bisimilar if
there exists a clique-(k-)bisimulation between them. Obviously, two structures
are clique-bisimilar if and only if they are clique-k-bisimilar for all k.

Remark. One can describe clique-k-bisimilarity also via a guarded variant
of the infinitary Ehrenfeucht-Fraïssé game with k pebbles. One just has to
impose that after every move, the set of all pebbled elements induces a clique
in the Gaifman graph of each of the two structures. Then A and B are clique-
k-bisimilar if and only if Player II has a winning strategy for this guarded
game.
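For small finite structures one can test the 'forth' half of Definition 3.3 directly by enumerating
clique-guarded sets; the following sketch is a simplified, assumption-laden illustration (it presupposes
that the maps in I are already partial isomorphisms and only checks the extension property).

```python
from itertools import combinations

def _edges(facts):
    return {frozenset(p) for f in facts for p in combinations(set(f), 2)}

def _clique_guarded_sets(universe, facts, k):
    edges = _edges(facts)
    for size in range(1, k + 1):
        for xs in combinations(sorted(universe), size):
            if all(frozenset(p) in edges for p in combinations(xs, 2)):
                yield set(xs)

def satisfies_forth(I, universe_A, facts_A, k):
    """Check the 'forth' condition for a finite candidate set I of partial maps
    from A to B (each a dict); the maps are assumed to be partial isomorphisms
    with clique-guarded domains already."""
    for f in I:
        for X1 in _clique_guarded_sets(universe_A, facts_A, k):
            if not any(set(g) == X1 and all(g[a] == f[a] for a in X1 & set(f))
                       for g in I):
                return False
    return True

# Trivial example: one element with a loop edge, and the single map {a -> b}.
print(satisfies_forth([{"a": "b"}], {"a"}, [("a", "a")], k=1))   # True
```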
Adapting basic and well-known model-theoretic techniques to the present sit-
uation, one obtains the following result.

Theorem 3.4. Let A and B be two τ-structures. The following are equivalent:
(i) A and B are clique-k-bisimilar.
(ii) For all sentences ψ ∈ CGF∞ of width at most k, A ⊨ ψ iff B ⊨ ψ.

Proof. (i) ⟹ (ii): Let I be a clique-k-bisimulation between A and B, let
ψ(x) be a formula in CGF∞ of width at most k, and let a, b be tuples such that A ⊨ ψ(a)
but B ⊭ ψ(b).
We show, by induction on ψ, that there is no partial isomorphism f ∈ I with
fa = b. Since I is non-empty, the claim then follows for sentences (empty tuples).
If ψ is atomic this is obvious, and the induction steps for negations, conjunctions
and disjunctions are immediate. Hence the only interesting case concerns formulae
of the form ψ(x) = ∃y(clique(x, y) ∧ φ(x, y)).
Since A ⊨ ψ(a), there exists a tuple a' in A such that A ⊨ clique(a, a') ∧
φ(a, a'). Suppose, towards a contradiction, that some f ∈ I takes a to b. Since
the set a ∪ a' is clique-guarded there exists a partial isomorphism g ∈ I, taking
a to b and a' to some tuple b' in B. But then the tuple b ∪ b' is clique-guarded
and B ⊭ φ(b, b') (because B ⊭ ψ(b)), which
contradicts the induction hypothesis.

(ii) ⟹ (i): Let I be the set of all partial isomorphisms f : a ↦ b, taking a
clique-guarded tuple a in A to a clique-guarded tuple b in B such that for all
formulae ψ(x) ∈ CGF∞ of width at most k, A ⊨ ψ(a) iff B ⊨ ψ(b). Since A
and B cannot be distinguished by sentences of width k in CGF∞, I contains
the empty map and is therefore non-empty. It remains to show that I satisfies
the back and forth properties.

For the forth property, take any partial isomorphism f : X → Y in I and any
clique-guarded set X' in A of size at most k. Let X ∩ X' = {a_1, . . . , a_r}
and X' \ X = {a'_1, . . . , a'_s}. We have to show that there exists a g ∈ I,
defined on X', that coincides with f on X ∩ X'.
Suppose that no such g exists. Let a = a_1 · · · a_r, a' = a'_1 · · · a'_s,
and let T be the set of all tuples b' = b'_1 · · · b'_s such that fa ∪ b' is clique-guarded
in B. Since there is no appropriate g ∈ I, there exists for every tuple b' ∈ T
a formula ψ_{b'}(x, x') such that A ⊨ ψ_{b'}(a, a') but B ⊭ ψ_{b'}(fa, b').
But then we can construct the formula

  ψ(x) := ∃x'(clique(x, x') ∧ ∧_{b'∈T} ψ_{b'}(x, x')).

Clearly, A ⊨ ψ(a) but B ⊭ ψ(fa), which is impossible, given that f ∈ I maps
a to fa. The proof for the back property is analogous.

In particular, this shows that clique-(k-)bisimilar structures cannot be separated
by μCGF-sentences (of width k).
Characterizing CGF via clique-guarded bisimulations. We show next
that the characterisations of propositional modal logic and GF as bisimulation-
invariant fragments of first-order logic [1,2] have their counterpart for CGF
and clique-guarded bisimulation. The proof is a straightforward adaptation of
van Benthem's proof for modal logic, but for the convenience of the reader,
we present it in full. However, we assume that the reader is familiar with the
notions of elementary extensions and ω-saturated structures (see any textbook
on model theory, such as [14,21]). We recall that every structure has an ω-
saturated elementary extension.

Theorem 3.5. A first-order sentence is invariant under clique-guarded bisimulation
if and only if it is equivalent to a CGF-sentence.

Proof. We have already established that CGF-sentences (in fact even sentences
from CGF∞) are invariant under clique-guarded bisimulations. For the
converse, suppose that ψ is a satisfiable first-order sentence that is invariant
under clique-guarded bisimulations. Let Φ be the set of sentences φ ∈ CGF
such that ψ ⊨ φ. It suffices to show that Φ ⊨ ψ. Indeed, by the compactness
theorem, already a finite conjunction of sentences from Φ will then imply, and
hence be equivalent to, ψ.

Since ψ was assumed to be satisfiable, so is Φ. Take any model A ⊨ Φ. We
have to prove that A ⊨ ψ. Let T_CGF(A) be the CGF-theory of A, i.e., the set
of all CGF-sentences that hold in A.

Claim: T_CGF(A) ∪ {ψ} is satisfiable.
Otherwise there were sentences φ_1, . . . , φ_n ∈ T_CGF(A) such that
ψ ⊨ ¬(φ_1 ∧ · · · ∧ φ_n). Hence ¬(φ_1 ∧ · · · ∧ φ_n) is a CGF-sentence implied by ψ and is
therefore contained in Φ. But then A ⊨ ¬(φ_1 ∧ · · · ∧ φ_n), which is impossible
since φ_1, . . . , φ_n hold in A. This proves the claim.

Take any model B ⊨ T_CGF(A) ∪ {ψ}, and let A+ and B+ be ω-saturated
elementary extensions of A and B, respectively.

Claim: A+ and B+ are clique-bisimilar.
Let I be the set of partial isomorphisms f : X → Y from clique-guarded
subsets of A+ to clique-guarded subsets of B+ such that, for all formulae
φ(x) in CGF and all tuples a from X, we have that A+ ⊨ φ(a) iff B+ ⊨
φ(fa). The fact that A+ and B+ are ω-saturated implies that the back and
forth conditions for clique-guarded bisimulations are satisfied by I. Indeed,
let f ∈ I with domain X, and let X' be any clique-guarded set in A+, with X' ∩
X = {a_1, . . . , a_r} and X' \ X = {a'_1, . . . , a'_s}. Let Γ be the set of all formulae of form
φ(fa, y) such that A+ ⊨ φ(a, a').
For every formula φ(fa, y) ∈ Γ, we have A+ ⊨ ∃y(clique(a, y) ∧ φ(a, y)) and
therefore B+ ⊨ ∃y(clique(fa, y) ∧ φ(fa, y)). Hence Γ is a consistent type
of B+ which is, by ω-saturation, realized in B+ by some fixed tuple b
such that (fa, b) is clique-guarded. Hence the function g taking a to fa and
a' to b is a partial isomorphism with domain X' that coincides with f on
X ∩ X'. The back property is proved in the same way, exploiting that A+ is
ω-saturated.

We can now complete the proof of the theorem. Since B ⊨ ψ and B+ is
an elementary extension of B, we have that B+ ⊨ ψ. By assumption, ψ is
invariant under clique-guarded bisimulations, so A+ ⊨ ψ and therefore also
A ⊨ ψ.

An analogous result applies to clique-k-bisimulations and CGF-sentences of
width k, for any k ∈ N.
Unravelings of structures. The k-unraveling B^(k) of a structure B is defined
inductively. We build a tree T, together with functions F and G such that
for each node v of T, F(v) induces a clique-guarded substructure F(v) ⊆ B,
and G(v) induces a substructure G(v) ⊆ B^(k) that is isomorphic to F(v).
Further, ⟨T, (G(v))_{v∈T}⟩ will be a tree decomposition of B^(k).

The root of T is the empty word λ, with F(λ) = G(λ) = ∅. Given a node v of T with F(v) =
{b_1, . . . , b_r}, we create for every clique-guarded set {b'_1, . . . , b'_s}
in B with s ≤ k a successor node w of v such that F(w) = {b'_1, . . . , b'_s},
and G(w) is a set {b''_1, . . . , b''_s} which is defined as follows. For
those i such that b'_i = b_j ∈ F(v), we take for b''_i the element of G(v)
corresponding to b_j, so that G(w) has the same
overlap with G(v) as F(w) has with F(v). The other b''_i in G(w) are fresh
elements.
Let f_w : F(w) → G(w) be the bijection taking b'_i to b''_i. With
F(w) being the substructure of B induced by F(w), define G(w) so that f_w
is an isomorphism from F(w) to G(w). Finally B^(k) is the structure with tree
decomposition ⟨T, (G(v))_{v∈T}⟩.

Note that the k-unraveling of a structure has tree width at most k − 1.

Proposition 3.6. B and B^(k) are clique-k-bisimilar.

Proof. Let I be the set of functions f_v, for all nodes v of T.

It follows that no sentence of width k in CGF∞, and hence no sentence of width
k in μCGF, distinguishes between a structure and its k-unraveling. Since every
satisfiable sentence in μCGF has a model of at most countable cardinality,
and since the k-unraveling of a countable model is again countable, we obtain
the following tree model property for guarded fixed point logic.

Theorem 3.7 (Tree model property). Every satisfiable sentence in μCGF
with width k has a countable model of tree width at most k − 1.

Remark. In fact the decision algorithms for guarded fixed point logics imply
a stronger version of the tree model property, where the underlying tree has
branching bounded by O(|ψ|).
4 Decision procedures

Once the tree model property is established, there are several ways to design
decision algorithms for guarded logics. We focus here on guarded fixed point
logics (in fact on μCGF which contains μGF and μLGF).

4.1 Tree representations of structures

Let ⟨T, (F(v))_{v∈T}⟩ be a tree decomposition of width k − 1 of a τ-structure D
with universe D. We want to describe D by a tree with a finite set of labels.
To this end, we fix a set K of 2k constants and choose a function f : D → K
assigning to each element d of D a constant a_d ∈ K such that the following
condition is satisfied. If v, w are adjacent nodes of T, then distinct elements
of F(v) ∪ F(w) are always mapped to distinct constants of K.

For each constant a ∈ K, let O_a be the set of those nodes v ∈ T at which
the constant a occurs, i.e., for which there exists an element d ∈ F(v) such
that f(d) = a. Further, we introduce for each m-ary relation R of D a tuple
R := (R_a)_{a∈K^m} of monadic relations on T with

  R_a := {v ∈ T : there exist d_1, . . . , d_m ∈ F(v) with f(d_i) = a_i and (d_1, . . . , d_m) ∈ R}.

The tree T = (V, E) together with the monadic relations O_a and R_a (for
R ∈ τ and a ∈ K^m) is called the tree structure T(D) associated with D (and, strictly
speaking, with its tree decomposition and with K and f).

Lemma 4.1. Two occurrences of a constant a ∈ K at nodes u, v of T represent
the same element of D if and only if a occurs in the label of all nodes on
the link between u and v. (The link between two nodes u, v in a tree T is the
smallest connected subgraph of T containing both u and v.)

An arbitrary tree T = (V, E) with monadic relations O_a and R_a does define
a tree decomposition of width k − 1 of some structure D, provided that the
following axioms are satisfied.
(1) At each node v, at most k of the predicates O_a are true.
(2) Neighbouring nodes agree on their common elements. For all m-ary relation
symbols R and all a ∈ K^m we have the axiom

  ∀x∀y(Exy ∧ ∧_{a∈a}(O_a x ∧ O_a y) → (R_a x ↔ R_a y)).

These are first-order axioms over the vocabulary σ := {E} ∪ {O_a : a ∈
K} ∪ {R_a : R ∈ τ, a ∈ K^m}. Given a tree structure T with underlying tree
(V, E) and monadic predicates O_a and R_a satisfying (1) and (2), we obtain
a structure D such that T = T(D) as follows. For every constant a ∈ K, we
call two nodes u, w of T a-equivalent if T ⊨ O_a v for all nodes v on the link
between u and w. Clearly this is an equivalence relation on O_a^T. We write [v]_a
for the a-equivalence class of the node v. The universe of D is the set of all
a-equivalence classes of T for a ∈ K, i.e.,

  D := {[v]_a : a ∈ K, v ∈ T, T ⊨ O_a v}.

For every m-ary relation symbol R in τ, we define

  R^D := {([v]_{a_1}, . . . , [v]_{a_m}) : T ⊨ R_a v for some (and hence all) v ∈ [v]_{a_1} ∩ · · · ∩ [v]_{a_m}}.
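The decoding of a tree structure into the a-equivalence classes of Lemma 4.1 is easy to make
concrete for finite trees; the following union-find sketch is an illustrative assumption about the
encoding (adjacency lists and label sets), not code from the paper.

```python
def decode_elements(tree_adj, labels):
    """labels: dict node -> set of constants from K occurring at that node.
    Returns the universe of D as frozensets of (constant, node) pairs:
    one a-equivalence class per element, following Lemma 4.1."""
    parent = {}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(x, y):
        parent[find(x)] = find(y)

    for v, consts in labels.items():
        for a in consts:
            parent[(a, v)] = (a, v)
    # two neighbouring occurrences of the same constant denote the same element
    for v, neighbours in tree_adj.items():
        for w in neighbours:
            for a in labels[v] & labels[w]:
                union((a, v), (a, w))

    classes = {}
    for key in parent:
        classes.setdefault(find(key), set()).add(key)
    return [frozenset(c) for c in classes.values()]

# Constant 'a' at adjacent nodes 0 and 1 is one element; 'a' at node 3 is another.
tree_adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
labels = {0: {"a", "b"}, 1: {"a"}, 2: {"b"}, 3: {"a"}}
print(len(decode_elements(tree_adj, labels)))   # 4 elements in D
```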
4.2 Reduction to SωS

We now describe a translation from μCGF into monadic second-order logic
on countable trees. Given a formula φ(x_1, . . . , x_m) and a tuple a ∈ K^m,
we construct a monadic second-order formula φ_a(z) with one free variable z.
These formulae describe in the associated tree structure
T(D) the same properties of clique-guarded tuples as φ(x) does in D. (We
will make this statement more precise below.)

On a directed tree T = (V, E) we can express by a first-order formula conn_U(x, y)
that U contains all nodes on the link between x and y.
For any set a ⊆ K we can then construct a monadic second-order formula

  link_a(x, y) := ∀U(conn_U(x, y) → ∀z(Uz → ∧_{a∈a} O_a z)),

saying that the tuple a occurs at all nodes on the link between x and y. The
translation is now defined by induction as follows:
(1) If φ(x) is an atom Sx_{i_1} · · · x_{i_r}, then φ_a(z) := S_{a_{i_1} · · · a_{i_r}} z.
(2) The translation commutes with the boolean connectives.
(3) If φ(x) := clique(x), let

  clique_a(z) := ∧_{a,a'∈a} ∃y(link_{a,a'}(z, y) ∧ ∨_R ∨_{b containing a,a'} R_b y).

(4) If φ(x) := ∃y(clique(x, y) ∧ η(x, y)), then

  φ_a(z) := ∃y(link_a(z, y) ∧ ∨_b(∧_{b∈b} O_b y ∧ clique_{ab}(y) ∧ η_{ab}(y))).

(5) If φ(x) := [LFP Sx : η(S, x)](x), then

  φ_a(z) := ∀S((S consistent ∧ ∀w ∧_{b∈K^m}(S_b w ↔ η_b(w))) → S_a z).

Here S is a tuple (S_b)_{b∈K^m} of monadic predicates where m is the arity
of S.
Theorem 4.2. Let φ(x) be a formula in μCGF and D be a structure with
tree decomposition ⟨T, (F(v))_{v∈T}⟩. For an appropriate set of constants K and
a function f : D → K, let T(D) be the associated tree structure. Then, for
every node v of T and every clique-guarded tuple d ⊆ F(v) with f(d_i) = a_i:
D ⊨ φ(d) if and only if T(D) ⊨ φ_a(v).

Proof. We proceed by induction on φ. The non-trivial cases are the clique-
guards, existential quantification and least fixed points.

For the clique-guards, note that the translated formula clique_a(v) says that
for any pair a, a' of components of a, there is a node w, such that
a, a' occur at all nodes on the link from v to w and hence represent the same
elements at w as they do at v, and such that at w they occur together in
some predicate R_b for some tuple b that contains both
a and a'. By induction hypothesis, this means that d, d' are components of
some tuple d'' such that D ⊨ R d''.
Hence T(D) ⊨ clique_a(v) if and only if the tuple d induces a clique in the
Gaifman graph G(D).

Suppose now that φ(x) = ∃y(clique(x, y) ∧ η(x, y)) and that D ⊨ φ(d). Then
there exists a tuple d' such that D ⊨ clique(d, d') ∧ η(d, d').
By Lemma 3.2 there exists a node w of T such that all components of d ∪ d'
are contained in F(w). By the
induction hypothesis it follows that T(D) ⊨ clique_ab(w) ∧ η_ab(w), where b = f(d').
Let U be the set of nodes on the link between v and w. Then the tuple d
occurs in F(u) for all nodes u ∈ U. It follows that T(D) ⊨ link_a(v, w). Hence
T(D) ⊨ φ_a(v).
Conversely, if T(D) ⊨ φ_a(v) then there exists a node w such that the constants
a occur at all nodes on the link between v and w (and hence correspond to
the same tuple d) and such that T(D) ⊨ clique_ab(w) ∧ η_ab(w) for some tuple
b. By induction hypothesis this implies that D ⊨ clique(d, d') ∧ η(d, d') for
some tuple d', hence D ⊨ φ(d).
Finally, let φ(x) := [LFP Sx : η(S, x)](x). Then D ⊨ φ(d) if
and only if d is contained in every fixed point of the operator η^D, i.e., is in
every relation S such that S = {c : D ⊨ η(S, c)}.
We first observe that, for guarded tuples d, this is equivalent to the seemingly
weaker condition that d is contained in every S such that c ∈ S iff D ⊨ η(S, c)
for all guarded tuples c. Indeed, this is obvious since η(S, x) is a Boolean
combination of quantifier-free formulae not involving x, of positive atoms of
the form Su where u is a recombination of the variables appearing in x, and
of formulae starting with a guarded existential quantifier. Therefore the truth
value of Sc for unguarded tuples c never matters for the question whether a
given guarded tuple is in η^D(S).

Recall that the formula associated with φ(x) and a is φ_a(z) := (∀S)(· · · → S_a z).
Consider any tuple S = (S_b)_{b∈K^m} of monadic relations on T(D) that satisfies
the consistency axiom and the closure condition in the premise of φ_a.
This tuple S defines a relation S on D such that for all nodes w of T and all
guarded tuples c in F(w) with f(c) = b: c ∈ S iff w ∈ S_b. Conversely, each
relation S on D defines such a tuple S of monadic relations on T(D) which
describes the truth values of S on all guarded tuples of D. Since S and S
correspond in this way, it follows from the
induction hypothesis that, for guarded tuples c in F(w), D ⊨ η(S, c) iff T(D) ⊨ η_b(w).
Further d ∈ S if and only if v ∈ S_a.
Hence the formula φ_a(v) is true in T(D) if and only if d is contained in all
relations S over D such that for all guarded tuples c, c ∈ S iff c ∈ η^D(S).
By the remarks above, this is equivalent to saying that d is in the least fixed
point of η^D.
Theorem 4.3. The satisfiability problem for μCGF is decidable.

Proof. Let ψ be a sentence in μCGF of vocabulary τ and width k. We translate
ψ into a monadic second-order sentence ψ* such that ψ is satisfiable if and
only if there exists a countable tree T = (V, E) with T ⊨ ψ*.
Fix a set K of 2k constants and let O be the tuple of monadic relations O_a
for a ∈ K. Further, for each m-ary relation symbol R ∈ τ, let R be the tuple
of monadic relations R_a where a ∈ K^m. The desired monadic second-order
sentence has the form

  ψ* := ∃O ∃R (χ ∧ ψ_∅),

where χ is the first-order axiom expressing that the tree T expanded by the
relations O and R does describe a tree structure T(D) associated to some
τ-structure D. We have shown above that this can be done in first-order logic.
The formula ψ_∅ is the translation of ψ (and the empty tuple of constants)
into monadic second-order logic, as described by Theorem 4.2.

If ψ is satisfiable, then by Theorem 3.7, ψ has a countable model D of tree
width k − 1. By Theorem 4.2, the associated tree structure T(D) satisfies
χ ∧ ψ_∅, so there exists a tree T such that T ⊨ ψ*. Conversely, if T ⊨ ψ*, then
there exists an expansion of T which satisfies χ ∧ ψ_∅ and
hence describes the tree decomposition of a τ-structure D. Since the expansion satisfies ψ_∅,
it follows by Theorem 4.2 that D ⊨ ψ.

The decidability of μCGF now follows by the decidability of SωS, the monadic
second-order theory of countable trees, a famous result that has been established
by Rabin [23].

Note that while this reduction argument to SωS gives a somewhat more elementary
decidability proof (modulo Rabin's result, of course), it does not give
good complexity bounds. Indeed, even the first-order theory of countable trees
is non-elementary, i.e. its time complexity exceeds every bounded number of
iterations of the exponential function.
4.3 Reduction to the μ-calculus with backwards modalities

Instead of reducing the satisfiability problem for μCGF to the monadic second-order
theory of trees, we can define a similar reduction to the μ-calculus with
backward modalities and then invoke Vardi's decidability result for this logic
[27].

For a set of actions A, the μ-calculus with backwards modalities L_μ^↔ permits,
for each action a ∈ A, besides the common modal operators ⟨a⟩ and [a] also the
backwards operators ⟨a^-⟩ and [a^-] corresponding to the backwards transitions
E_{a^-} := {(w, v) : (v, w) ∈ E_a}. Hence ⟨a^-⟩φ is true at state w in a Kripke
structure K if and only if there exists a state v such that K, v ⊨ φ and w is
reachable from v via action a.
Here we will need L_μ^↔ on trees (V, E) with only one transition relation. We can
write ⟨+⟩, [+] for the forward modal operators, and ⟨-⟩, [-] for the backwards
operators, and then use the abbreviations

  ◇φ := ⟨+⟩φ ∨ ⟨-⟩φ  and  □φ := [+]φ ∧ [-]φ.

Hence ◇ and □ are the usual modal operators on symmetric Kripke structures.

Finally, it is convenient for our reduction argument to permit the use of simultaneous
least and greatest fixed points in L_μ^↔. Let X = X_1, . . . , X_r be
a sequence of propositional variables, and φ(X) = φ_1(X), . . . , φ_r(X) be
a sequence of L_μ^↔-formulae in which all occurrences of X are posi-
tive. Then, for each i ≤ r, the expressions μ[X : φ(X)]_i and ν[X : φ(X)]_i are
formulae in L_μ^↔.
On every Kripke structure K with universe V, the sequence φ(X) defines an
operator φ^K that maps any tuple S = (S_1, . . . , S_r) of subsets of V to a new
tuple φ^K(S) = (φ_1^K(S), . . . , φ_r^K(S)) where φ_i^K(S) = {v ∈ V : K, v ⊨ φ_i(S)}.
Since the variables in X occur only positive in φ, the operator φ^K has a least
fixed point LFP(φ^K) = (S_1^∞, . . . , S_r^∞). Now, the semantics of simultaneous least
fixed point formulae is given by

  K, v ⊨ μ[X : φ(X)]_i  iff  v ∈ S_i^∞.

The meaning of a simultaneous greatest fixed point ν[X : φ(X)]_i is defined
similarly. It is well-known that simultaneous fixed points can be rewritten as
nestings of simple fixed points, so the use of simultaneous fixed points does
not change the expressive power of L_μ^↔.
On connected Kripke structures (in particular on trees), the universal modality
is denable in L
. For every formula ', we write 8' to abbreviate the formula
It is easy to see that 8' is satised at some state of a connected
Kripke structure K if and only if ' is satised at all states of K.
Let D be a structure of bounded tree width, and let T (D) be its tree representation
as described in Sect. 4.1. We view T (D) as a Kripke structure,
with atomic propositions O a and R a . Having available an universal modality,
the axioms for tree representations T (D) given in the previos subsection, can
easily be expressed by modal formulae. For instance, the consistency axioms
can be written
a2a
O a
_
a2a
:O a _ R a
Theorem 4.5. Let D be a structure with tree decomposition ⟨T, (F(v))_{v∈T}⟩.
For an appropriate set of constants K and a function f : D → K, let T(D) be
the associated tree structure. For every formula φ(x_1, . . . , x_m) of μCGF and
every tuple a ∈ K^m we can construct a formula φ_a ∈ L_μ^↔ such that, for every
node v of T and every clique-guarded tuple d ⊆ F(v) with f(d_i) = a_i:
D ⊨ φ(d) if and only if T(D), v ⊨ φ_a.

Proof. The translation is very similar to the translation into monadic second-order
logic that was given in the previous section.
(1) If φ(x) is an atom Sx_{i_1} · · · x_{i_r}, then φ_a := S_{a_{i_1} · · · a_{i_r}};
an equality x_i = x_j is translated to true if a_i = a_j and to false otherwise.
(2) The translation commutes with the boolean connectives.
(3) For the guard formulae clique(x), the formula clique_a expresses, for all pairs
a, a' ∈ a, that a node can be reached along which a and a' stay alive and at
which ◇(O_a ∧ O_{a'} ∧ · · ·) witnesses a common atom.
(4) For φ(x) := ∃y(clique(x, y) ∧ η(x, y)), the formula φ_a uses a least fixed
point to follow a path along which ∧_{a∈a} O_a remains true until a node is
reached at which clique_{ab} ∧ η_{ab} holds for some tuple b.
(5) For φ(x) := [LFP Sx : η(S, x)](x), the formula φ_a is a simultaneous fixed
point over a tuple of fixed point variables S_b, where η(S) is the tuple of the
translations η_b(S) for all b ∈ K^m (where m is the arity of S).

The proof that the translation is correct is analogous to the proof of Theorem 4.2.
We now get another proof for the decidability of guarded fixed point logic.
Given a sentence ψ ∈ μCGF, we translate it into the L_μ^↔ formula ψ_∅ according
to Theorem 4.5 and take the conjunction with the consistency axioms in L_μ^↔
for tree representations T(D). Then use Vardi's decidability result for L_μ^↔.
By the tree model property of L_μ^↔, the tree model property of μCGF and
Theorem 4.5 this gives a decision procedure for μCGF.
However, it is not clear whether this argument can be modified to provide the
optimal complexity bounds for guarded fixed point logic.
--R
Modal Correspondence Theory
Dynamic bits and pieces
Modal Logic
The complexity of tree automata and logics of programs
On the (in
A superposition decision procedure for the guarded fragment with equality
The two-variable guarded fragment with transitive relations
Model Theory
Loosely guarded fragment of
Interpolation in guarded fragments.
Beth De
On the expressive completeness of the propositional mu-calculus with respect to monadic second order logic
The computational complexity of provability in systems of propositional modal logic
Tolerance logic
Cours de th
First Steps in Modal Logic
Decidability of second-order theories and automata on in nite trees
Tree width and tangles: A new connectivity measure and some applications
Complexity of modal logics
Why is modal logic so robustly decidable?
--TR
First steps in modal logic
Beth Definability for the Guarded Fragment
Reasoning about The Past with Two-Way Automata
On the Expressive Completeness of the Propositional mu-Calculus with Respect to Monadic Second Order Logic
Guarded Fixed Point Logic
The Two-Variable Guarded Fragment with Transitive Relations
A Superposition Decision Procedure for the Guarded Fragment with Equality
Back and Forth between Guarded and Modal Logics
--CTR
Antje Nowack, A Guarded Fragment for Abstract State Machines, Journal of Logic, Language and Information, v.14 n.3, p.345-368, June 2005
Dirk Leinders , Maarten Marx , Jerzy Tyszkiewicz , Jan Bussche, The Semijoin Algebra and the Guarded Fragment, Journal of Logic, Language and Information, v.14 n.3, p.331-343, June 2005
Maarten Marx, Queries determined by views: pack your views, Proceedings of the twenty-sixth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 11-13, 2007, Beijing, China
Erich Grdel , Wolfgang Thomas , Thomas Wilke, Literature, Automata logics, and infinite games: a guide to current research, Springer-Verlag New York, Inc., New York, NY, 2002 | decidability;guarded logics;fixed point logics |
606915 | On an optimal propositional proof system and the structure of easy subsets of TAUT. | In this paper we develop a connection between optimal propositional proof systems and structural complexity theory--specifically, there exists an optimal propositional proof system if and only if there is a suitable recursive presentation of the class of all easy (polynomial time recognizable) subsets of TAUT. As a corollary we obtain the result that if there does not exist an optimal propositional proof system, then for every theory T there exists an easy subset of TAUT which is not T-provably easy. | Introduction
The first classification of propositional proof systems by their relative
efficiency was done by S. Cook and R. Reckhow [4] in 1979. The
key tool for comparing the relative strength of proof systems is p-
simulation. Intuitively, a proof system h p-simulates a second one g if
there is a polynomial time computable function translating proofs in
g into proofs in h. A propositional proof system is called p-optimal
if it p-simulates any propositional proof system. The question of the
existence of a p-optimal propositional proof system and its nondeterministic
counterpart, an optimal propositional proof system, posed
by J. Krajíček and P. Pudlák [9], is still open.
It is not known whether many-one complete languages for the promise
classes NP ∩ co-NP and UP exist. For these and other promise classes
no recursively enumerable representation of appropriate sets of Turing
machines is known. Moreover, J. Hartmanis and L. Hemachandra
in [5] and W. Kowalczyk in [7] pointed out that NP ∩ co-NP and
UP possess complete languages if and only if there are recursive
enumerations of polynomial time clocked Turing machines covering
languages from these classes.
In this paper we show that the question of the existence of optimal
(p-optimal) propositional proof systems can be characterized in
a similar manner. The main result of our paper shows that optimal
proof systems for TAUT (the set of all propositional tautologies) exist
if and only if there is a recursive enumeration of polynomial time
clocked Turing machines covering all easy (recognizable in polynomial
time) subsets of TAUT. This means that the problem of the
existence of complete languages for promise classes and the problem
of the existence of optimal proof systems for TAUT, although
distant at first sight, are structurally similar. Since complete languages
for promise classes have been unsuccessfully searched for in
the past, our equivalence gives some evidence of the fact that optimal
propositional proof systems might not exist.
Our result can be related to the already existing line of research
in computational complexity. After the revelation of the connection
between the existence of optimal proof systems and the existence of
many-one complete languages for promise classes in [12] and [15], this
subject has been intensively investigated. J. Köbler and J. Messner
in [8] formalized this relationship introducing the concept of a test set,
and showed that the existence of p-optimal proof systems for TAUT
and for SAT (the set of all satisfiable boolean formulas) suffices
to obtain a complete language for NP ∩ co-NP. J. Messner and
J. Torán showed in [12] that a complete language for UP exists in
case there is a p-optimal proof system for TAUT. We believe that our
results make the next step towards deeper understanding of this link
between optimal proof systems and complete languages for promise
classes.
The paper is organized as follows. In Section 2 we set down notation
that will be used throughout the paper. Background information
about propositional proof systems is presented in Section 3.
The problems of the existence of complete languages for the classes
NP ∩ co-NP and UP and their characterization in terms of polynomial
time clocked machines covering languages from these classes
are presented in Section 4. In Section 5 we give a precise definition of
a family of propositional formulas which will be used in the proofs
of our main results. In Section 6 our main results are stated and
proved. In the last section we discuss corollaries arising from the
main results of the paper.
2 Preliminaries
We assume some familiarity with basic complexity theory, see [1].
The symbol Σ denotes, throughout the paper, a certain fixed finite
alphabet. The set of all strings over Σ is denoted by Σ*. For a string
x, |x| denotes the length of x. For a language A ⊆ Σ* the complement
of A is the set of all strings that are not in A.
We use Turing machines (acceptors and transducers) as our basic
computational model. We will not distinguish between a machine
and its code. For a deterministic Turing machine M and an input w,
TIME(M, w) denotes the computing time of M on w. When M is
a nondeterministic Turing machine, TIME(M, w) is defined only for
w's accepted by M and denotes the number of steps in the shortest
accepting computation of M on w. For a Turing machine M, the
symbol L(M) denotes the language accepted by M. The output of a
Turing transducer M on input w ∈ Σ* is denoted by M(w).
We consider deterministic and nondeterministic polynomial time
clocked Turing machines with uniformly attached standard
clocks which stop their computations in polynomial time (see [1]).
We impose some restrictions on our encoding of these machines.
From the code of any polynomial time clocked Turing machine we
can detect easily (in polynomial time) the natural number k such that n^k + k
is its polynomial time bound. Let D_1, D_2, D_3, ... and N_1, N_2, N_3,
... be, respectively, standard enumerations of all deterministic and
nondeterministic polynomial time clocked Turing machines.
Recall that the classes P, NP, co-NP are, respectively, the class
of all languages recognized by deterministic Turing machines working
in polynomial time, the class of all languages accepted by nondeterministic
Turing machines working in polynomial time and the class of
complements of all languages from NP. The symbol TAUT denotes
the set (of encodings) of all propositional tautologies over a fixed
adequate set of connectives, SAT denotes the set of all satisfiable
boolean formulas.
Finally, 〈·, ..., ·〉 denotes a standard polynomial time computable
tupling function.
3 Propositional proof systems
The abstract notion of a propositional proof system was introduced
by S. Cook and R. Reckhow [4] in the following way:
Definition 1. A propositional proof system is a function f : Σ* → TAUT
computable by a deterministic Turing machine in time bounded
by a polynomial in the length of the input.
A string w such that f(w) = α we call a proof of the formula α.
A propositional proof system that allows short proofs to all tautologies
is called a polynomially bounded propositional proof system.
Definition 2. (Cook, Reckhow) A propositional proof system is polynomially
bounded if and only if there exists a polynomial p(n) such
that every tautology α has a proof of length no more than p(|α|) in
this system.
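In symbols, Definition 2 says that a proof system f is polynomially bounded exactly when
\[ \exists p \ \text{polynomial}\ \ \forall \alpha \in \mathrm{TAUT}\ \ \exists w \in \Sigma^{*}:\quad f(w) = \alpha \ \wedge\ |w| \le p(|\alpha|). \]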
The existence of a polynomially bounded propositional proof system
is equivalent to one of the most fundamental problems in complexity
theory.
Fact 1. (Cook, Reckhow) NP=co-NP if and only if there exists a
polynomially bounded propositional proof system.
S. Cook and R. Reckhow were the first to propose a program of
research aimed at attacking the NP versus co-NP problem by classifying
propositional proof systems by their relative efficiency and
then systematically studying more and more powerful concrete proof
systems (see [2]). A natural way for such a classification is to introduce
a partial order reflecting the relative strength of propositional
proof systems. It was done in two different manners.
Definition 3. (Cook, Reckhow) Propositional proof system P polynomially
simulates (p-simulates) propositional proof system Q if there
exists a polynomial time computable function f : Σ* → Σ* such
that for every w, if w is a proof of α in Q, then f(w) is a proof of
α in P.
Definition 4. (Kraj'i-cek, Pudl'ak) Propositional proof system P simulates
propositional proof system Q if there exists a polynomial p such
that for every tautology α, if α has a proof of length n in Q, then α
has a proof of length ≤ p(n) in P.
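The two relations can be contrasted symbolically; the following is a direct transcription of Definitions 3 and 4, with FP denoting the class of polynomial time computable functions:
\[ P \ \text{p-simulates}\ Q \iff \exists f \in \mathrm{FP}\ \forall w \in \Sigma^{*}\ \forall \alpha:\ Q(w) = \alpha \ \Rightarrow\ P(f(w)) = \alpha , \]
\[ P \ \text{simulates}\ Q \iff \exists p \ \text{polynomial}\ \forall \alpha \in \mathrm{TAUT}\ \forall w:\ Q(w) = \alpha \ \Rightarrow\ \exists v\ \bigl(|v| \le p(|w|) \ \wedge\ P(v) = \alpha\bigr). \]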
Obviously p-simulation is a stronger notion than simulation. We
would like to pay attention to the fact that the simulation between
proof systems may be treated as a counterpart of the complexity-theoretic
notion of reducibility between problems. Analogously the
notion of a complete problem (a complete language) would correspond
to the notion of an optimal proof system. The notion of an
optimal propositional proof system was introduced by J. Krajíček
and P. Pudlák in [9] in two different versions.
Definition 5. A propositional proof system is optimal if it simulates
every other propositional proof system.
A propositional proof system is p-optimal if it p-simulates every other
propositional proof system.
The following open problem, posed by J. Krajíček and P. Pudlák,
will be studied in our paper.
Open Problem:
(1) Does there exist an optimal propositional proof system?
(2) Does there exist a p-optimal propositional proof system?
The importance of these questions and their connection with the
NP versus co-NP problem is described by the following fact.
Fact 2. If an optimal (p-optimal) propositional proof system exists,
then NP=co-NP if and only if this system is polynomially bounded.
4 Complete languages for NP ∩ co-NP and for UP
The classes NP ∩ co-NP and UP are called promise classes because
they are defined using nondeterministic polynomial time clocked
Turing machines which obey special conditions (promises). The problem
whether a given nondeterministic polynomial time clocked Turing
machine indeed defines a language in any of these classes is undecidable,
and because of this complete languages for these classes
are not known. Since there exist relativizations for which these two
classes have complete languages as well as relativizations for which
they do not, the problems of the existence of complete languages for
NP ∩ co-NP and UP seem to be very difficult.
It turns out that the existence of complete languages for these
classes depends on a certain structural condition on the set of machines
defining languages from these classes. Since this condition is
the chief motivation for our main theorems we survey known results
in this direction.
The class NP ∩ co-NP is most often defined using complementary
pairs of nondeterministic Turing machines. We will use strong
nondeterministic Turing machines to define this class. A strong nondeterministic
Turing machine is one that has three possible outcomes:
"yes", "no" and "maybe". We say that such a machine accepts
a language L if the following is true: if x ∈ L, then all computations
end up with "yes" or "maybe" and at least one with "yes";
if x ∉ L, then all computations end up with "no" or "maybe" and
at least one with "no".
If N_1, N_2, N_3, ... is a standard enumeration of all nondeterministic
polynomial time clocked Turing machines, then
NP ∩ co-NP = {L(N_i) : N_i is strong nondeterministic}.
The following theorem links the question of the existence of a complete
language for NP ∩ co-NP with the existence of a recursively
enumerable list of machines covering languages from NP ∩ co-NP.
In [7] this list of machines is called a "nice" presentation of NP ∩ co-NP.
Theorem 1. (Kowalczyk)
There exists a complete language for NP ∩ co-NP if and only if
there exists a recursively enumerable list of strong nondeterministic
polynomial time clocked Turing machines N_{i_1}, N_{i_2}, N_{i_3},
... such that {L(N_{i_k}) : k ≥ 1} = NP ∩ co-NP.
This theorem can be exploited to obtain the following independence
result. Let T be any formal theory whose language contains
the language of arithmetic, i.e. the language {0, 1, ≤, =, +, ·}. We
will not specify T in detail but only assume that T is sound (that is,
in T we can prove only true theorems) and the set of all theorems
of T is recursively enumerable.
Theorem 2. (Kowalczyk)
If NP ∩ co-NP has no complete languages, then for any theory T
there exists L ∈ NP ∩ co-NP such that for no nondeterministic
polynomial time clocked N_i with L(N_i) = L can it be proven in T
that N_i is strong nondeterministic.
The class UP is closely related to a one-way function, the notion
central to public-key cryptography (see [13]). This class can be
defined using categorical (unambiguous) Turing machines. We call
a nondeterministic Turing machine categorical or unambiguous if it
has the following property: for any input x there is at most one
accepting computation. We define UP = {L(N_i) : N_i is categorical}.
As we can see from the following theorems the problem of the existence
of a complete language for UP is similar to its NP ∩ co-NP
counterpart.
Theorem 3. (Hartmanis, Hemachandra)
There exists a complete language for UP if and only if there exists
a recursively enumerable list of categorical nondeterministic polynomial
time clocked Turing machines N_{i_1}, N_{i_2}, N_{i_3},
... such that {L(N_{i_k}) : k ≥ 1} = UP.
Theorem 4. (Hartmanis, Hemachandra)
If UP has no complete languages, then for any theory T there exists
L ∈ UP such that for no nondeterministic polynomial time clocked
N_i with L(N_i) = L can it be proven in T that N_i is categorical.
In Sections 6 and 7 we will show that the similarity between the
problems of the existence of complete languages for NP ∩ co-NP
and for UP is also shared by the problem of the existence of an
optimal propositional proof system.
5 Formulas expressing the soundness of Turing machines
In this section we construct boolean formulas which will be used to
verify for a given deterministic polynomial time clocked transducer
M and integer n that M on any input of length n produces propositional
tautologies. We use these formulas in the proofs of Theorems
5 and 6.
For any transducer N we will denote by f_N the function computed
by N (f_N : Σ* → Σ*).
Definition 6. A Turing transducer N is called sound if f_N maps Σ* into TAUT.
To any polynomial time clocked transducer M we will assign the
set A_M = {Sound^1_M, Sound^2_M, Sound^3_M, ...} of propositional formulas
such that Sound^n_M is a propositional tautology if and only if for every
input of length n, the machine M outputs a propositional tautology.
So, for any polynomial time clocked transducer M, it holds: M
is sound if and only if A_M ⊆ TAUT.
Let N be a fixed nondeterministic Turing machine working in
polynomial time which accepts a string w if and only if w is not
a propositional tautology. For any fixed polynomial time clocked
transducer M, let us consider the set B_M = {〈M, 0^n〉 : there exists a
string x of length n such that M(x) ∉ TAUT}. Using the machines
M and N we construct the nondeterministic Turing machine M′
which guesses a string x of length n, runs M on input x and then
runs N on the output produced by M.
The machine M′ works in polynomial time and accepts B_M. Let F_{M,n}
be Cook's Theorem formula (see [3]) for the machine M′ and the
input 〈M, 0^n〉. We define Sound^n_M as ¬F_{M,n}, and then the formula
Sound^n_M is a tautology if and only if for every input of length n,
M outputs a tautology. From the structure of Cook's reduction (as
F_{M,n} clearly displays M and n) it follows that for any fixed M, the
set A_M is in P.
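The construction can be summarized in one place (this merely restates the properties established above):
\[ \mathrm{Sound}^{n}_{M} := \neg F_{M,n}, \qquad \mathrm{Sound}^{n}_{M} \in \mathrm{TAUT} \iff \forall x\ \bigl(|x| = n \Rightarrow M(x) \in \mathrm{TAUT}\bigr), \]
\[ M \ \text{is sound} \iff A_M \subseteq \mathrm{TAUT}, \qquad A_M \in \mathrm{P} \ \text{for every fixed } M. \]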
Moreover, the formulas describing the soundness of Turing machines
possess the following properties:
(1) Global uniformity property
There exists a polynomial time computable function f such that
for any polynomial time clocked transducer M with time bound
n^k + k and any n, f(〈M, 0^n〉) = Sound^n_M.
(2) Local uniformity property
Let M be any fixed polynomial time clocked transducer. There
exists a polynomial time computable function f_M such that for
any w ∈ Σ*, f_M(w) = Sound^{|w|}_M.
6 Main results
A class of sets is recursively presentable if there exists an effective
enumeration of devices for recognizing all and only members
of this class ([10]). In this paper we use the notions of recursive P-
presentation and recursive NP-presentation, which are variants of
the notion of recursive presentability.
Definition 7. By an easy subset of TAUT we mean a set A such
that A ⊆ TAUT and A ∈ P (A is polynomial time recognizable).
Definition 8. An optimal nondeterministic algorithm for TAUT is
a nondeterministic Turing machine M which accepts TAUT and
such that for every nondeterministic Turing machine M′ which accepts
TAUT there exists a polynomial p such that for every tautology
α the running time of M on α is bounded by p applied to the running time of M′ on α.
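The bound intended in Definition 8 is presumably the standard one (stated here under that assumption, in the TIME notation of Section 2):
\[ \forall M' \ \text{accepting}\ \mathrm{TAUT}\ \ \exists p \ \text{polynomial}\ \ \forall \alpha \in \mathrm{TAUT}:\quad \mathrm{TIME}(M, \alpha) \;\le\; p\bigl(\mathrm{TIME}(M', \alpha)\bigr). \]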
Let A be any easy subset of TAUT. We say that a nondeterministic
polynomial time clocked Turing machine M names the set A if
L(M) = A. Obviously A may possess many names. The following
theorem states that an optimal propositional proof system exists if
and only if there exists a recursively enumerable list of names for all
easy subsets of TAUT . We would like to pay attention to the similarity
between the next theorem and Theorems 1 and 3 from Section
4.
Theorem 5. Statements (i) - (iii) are equivalent.
(i) There exists an optimal propositional proof system.
(ii) There exists an optimal nondeterministic algorithm for TAUT .
(iii) The class of all easy subsets of TAUT possesses a recursive NP-
presentation.
By the statement (iii) we mean: there exists a recursively enumerable
list of nondeterministic polynomial time clocked Turing machines
N_{i_1}, N_{i_2}, N_{i_3}, ... such that
(1) L(N_{i_j}) ⊆ TAUT for every j,
(2) For every A ⊆ TAUT such that A ∈ P there exists j such that A = L(N_{i_j}).
Proof. (i) ↔ (ii):
With every propositional proof system we can associate a non-deterministic
"guess and verify" algorithm for TAUT . On an input
ff this algorithm guesses a string w and then checks in polynomial
time whether w is a proof of ff. If successful, the algorithm halts in
an accepting state.
Symmetrically any nondeterministic algorithm for TAUT can be
transformed to a propositional proof system. The proof of a formula
α in this system is a computation of M accepting α.
Let Opt denote an optimal propositional proof system and let M
denote a nondeterministic Turing machine associated with Opt (a
"guess and verify" algorithm associated with Opt). It can be easily
checked that M accepts TAUT and for any nondeterministic Turing
machine M′ accepting TAUT there exists a polynomial p such that
for every tautology α the running time of M on α is bounded by p
applied to the running time of M′ on α.
(ii) → (iii): Let M be an optimal nondeterministic algorithm for TAUT. A
recursive NP-presentation of all easy subsets of TAUT we will define
in two steps. In the first step we define a recursively enumerable list
of nondeterministic Turing machines F_1, F_2, F_3, ... The machine F_k is
obtained by attaching the shut-off clock n^k + k to the machine M. On
any input w, the machine F_k accepts w if and only if M accepts w in
no more than |w|^k + k steps. The sequence F_1, F_2, F_3,
F_4, ... of nondeterministic Turing machines possesses the properties
(1) and (2):
(1) For every i it holds L(F_i) ⊆ TAUT,
(2) For every A which is an easy subset of TAUT there exists j such
that A ⊆ L(F_j).
In the second step we define the new recursively enumerable list
of nondeterministic polynomial time clocked Turing machines K_1, K_2, K_3, ...
We define K_{〈i,j〉} as the machine which, on input w, simulates the (clocked)
steps of F_i and the steps of N_j (see Section 2 for the definition
of N_j) and accepts w if and only if both F_i and N_j accept w.
Let A be any fixed easy subset of TAUT. There exist k and m
such that A = L(N_k) and A ⊆ L(F_m). It follows from the definition
of the sequence K_1, K_2, K_3, ... that A is accepted by the machine
K_{〈m,k〉}. Hence the sequence K_1, K_2, K_3, ... provides a recursive NP-presentation of
all easy subsets of TAUT.
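Assuming that K_{〈i,j〉} fully simulates the clocked computations of both machines, the effect of the second step can be summarized as
\[ L(K_{\langle i,j\rangle}) = L(F_i) \cap L(N_j), \qquad A = L(N_k),\ A \subseteq L(F_m) \subseteq \mathrm{TAUT} \ \Longrightarrow\ L(K_{\langle m,k\rangle}) = L(F_m) \cap A = A . \]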
(iii) → (i): Let G be the machine generating the codes of the machines from
the sequence N_{i_1}, N_{i_2}, ... forming a recursive NP-presentation
of all easy subsets of TAUT. We say that a string v ∈ Σ* is in good
form if
v = 〈M, w, Comp−G, Comp−Sound^{|w|}_M, 0^m〉,
where:
M is a polynomial time clocked Turing transducer with n^k + k
time bound,
Comp−G is a computation of the machine G. This computation
produces a code of a certain machine N_{i_j},
Comp−Sound^{|w|}_M is a computation of the machine N_{i_j}
accepting
the formula Sound^{|w|}_M,
0^m is a sequence of zeros (padding).
We call a Turing transducer n-sound if and only if on any input
of length n it produces a propositional tautology.
Let us notice that if v is in good form then Sound^{|w|}_M, as a formula
accepted by a certain machine from the NP-presentation, is a propositional
tautology. This clearly forces M to be n-sound, where n = |w|,
so M on input w produces a propositional tautology.
Let α_0 be a certain fixed propositional tautology. We define Opt
in the following way: Opt(v) = α if v is in good
form, v = 〈M, w, Comp−G, Comp−Sound^{|w|}_M, 0^m〉,
and α is the propositional tautology produced by M on input w; otherwise
Opt(v) = α_0. Thus Opt : Σ* → TAUT.
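In display form, using the tuple format of the good-form definition above:
\[ \mathrm{Opt}(v) = \begin{cases} M(w) & \text{if } v = \langle M,\, w,\, \mathrm{Comp}\text{-}G,\, \mathrm{Comp}\text{-}\mathrm{Sound}^{|w|}_{M},\, 0^{m}\rangle \ \text{is in good form},\\ \alpha_0 & \text{otherwise}. \end{cases} \]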
In order to prove that Opt is polynomial time computable it is
sufficient to notice that using the global uniformity property we can
check in polynomial time whether v is in good form. Hence Opt is a
propositional proof system.
It remains to prove that Opt simulates any propositional proof
system. Let h be a propositional proof system computed by the
polynomial time clocked transducer K with time bound n^l + l. Since
the set A_K = {Sound^1_K, Sound^2_K, ...} is an easy subset of
TAUT, there exists the machine N_{i_j}
from the NP-presentation such
that A_K = L(N_{i_j}).
Let α be any propositional tautology and let x be its proof in h.
Then α possesses a proof in Opt of the form:
v = 〈K, x, Comp−G, Comp−Sound^{|x|}_K, 0^m〉.
The word Comp−G is the computation of G producing the code of
N_{i_j}, and Comp−Sound^{|x|}_K is a computation of N_{i_j}
accepting Sound^{|x|}_K.
Let us notice that |Comp−G| = c_1 is a constant. Because
N_{i_j} is polynomial time clocked, there exists a polynomial p such that
|Comp−Sound^{|x|}_K| ≤ p(|x|). The constants c_1, l and the polynomial p
depend only on N_{i_j},
which is fixed and connected with K. This
proves that Opt simulates h.
The following definition is a nondeterministic counterpart of Definition
7.
Definition 9. By an NP-easy subset of TAUT we mean a set A
such that A ⊆ TAUT and A ∈ NP.
A slight change in the previous proof shows that also the second
version of Theorem 5 is valid. In this version condition (iii) is replaced
by the following one:
(iv) The class of all NP-easy subsets of TAUT possesses a recursive
NP-presentation.
Now we will translate the previous result to the deterministic
case.
Definition 10. An almost optimal deterministic algorithm for TAUT
is a deterministic Turing machine M which accepts TAUT and such
that for every deterministic Turing machine M′ which accepts TAUT
there exists a polynomial p such that for every tautology α the running
time of M on α is bounded by p applied to the running time of M′ on α.
We name such an algorithm as an almost optimal deterministic
algorithm for TAUT because the optimality property is stated for
any input string x which belongs to TAUT and nothing is claimed for
other x's (compare the definition of an optimal acceptor for TAUT
in [11]).
The equivalence (i) ↔ (ii) in the next theorem is restated from
[9] in order to emphasize the symmetry between Theorem 5 and
Theorem 6.
Theorem 6. Statements (i) - (iii) are equivalent.
(i) There exists a p-optimal propositional proof system.
(ii) There exists an almost optimal deterministic algorithm for TAUT .
(iii) The class of all easy subsets of TAUT possesses a recursive P-
presentation.
By the statement (iii) we mean: there exists a recursively enumerable
list of deterministic polynomial time clocked Turing machines
D_{i_1}, D_{i_2}, D_{i_3}, ... such that
(1) L(D_{i_j}) ⊆ TAUT for every j,
(2) For every A ⊆ TAUT such that A ∈ P there exists j such that A = L(D_{i_j}).
Proof. (ii) → (iii):
This follows by the same arguments as in the proof of (ii) → (iii)
from Theorem 5. The only change is the use of deterministic Turing
machines instead of the nondeterministic ones.
(iii) → (i): A string v ∈ Σ* is in good form if
v = 〈M, w, Comp−G, Comp−Sound^{|w|}_M, 0^m〉,
where the appropriate symbols mean the same as before. We define Opt
analogously as in the proof of Theorem 5: Opt(v) = α if v is in good form,
v = 〈M, w, Comp−G, Comp−Sound^{|w|}_M, 0^m〉,
and α is the propositional tautology produced by M on input w; otherwise
Opt(v) = α_0, where α_0 is a certain fixed propositional tautology.
It remains to prove that Opt p-simulates any propositional proof
system. Let h be a propositional proof system computed by a polynomial
time clocked transducer K with time bound n^l + l. Since
the set A_K = {Sound^1_K, Sound^2_K, ...} is an easy subset of
TAUT, there exists the machine D_{i_j}
from the P-presentation such
that A_K = L(D_{i_j}). The function
t(x) = 〈K, x, Comp−G, Comp−Sound^{|x|}_K, 0^m〉
translates proofs in h into proofs in Opt. The word Comp−G in the
definition of t is the computation of G producing the code of D_{i_j},
and Comp−Sound^{|x|}_K is a computation of D_{i_j}
accepting Sound^{|x|}_K.
From the fact that D_{i_j}
is deterministic and works in polynomial
time and from the local uniformity property (see Section 5) it follows
that Comp−Sound^{|x|}_K
can be constructed in polynomial time. This
proves that t is polynomial time computable.
Definition 11. A Turing machine acceptor M is called sound if
L(M) ⊆ TAUT.
The question, whether the set of all sound deterministic (non-
deterministic) polynomial time clocked Turing machines yields the
desired P-presentation (NP-presentation) (that is, whether this set
is recursively enumerable) occurs naturally in connection with Theorems
5 and 6. The negative answer to this question is provided by
the next theorem.
Theorem 7. The set of all sound deterministic (nondeterministic)
polynomial time clocked Turing acceptors is not recursively enumerable.
Proof. This follows immediately from Rice's Theorem (see [14]).
7 Independence results
Let T be any formal theory satisfying the assumptions from Section
4. The notation T ⊢ β means that a first order formula β is provable
in T .
Let M be a Turing machine. By "L(M) ⊆ TAUT" we denote
the first order formula which expresses the soundness of M, i.e.
∀w ∈ L(M) [w is a propositional tautology].
Definition 12. A deterministic (nondeterministic) Turing machine M is
T-provably sound if T ⊢ "L(M) ⊆ TAUT".
Definition 13. A set A ⊆ TAUT is T-provably NP-easy if there
exists a nondeterministic polynomial time clocked Turing machine M
fulfilling (1) - (2):
(1) M is T-provably sound,
(2) L(M) = A.
As in the case of the classes NP ∩ co-NP and UP we can
obtain the following independence result.
Theorem 8. If there does not exist an optimal propositional proof
system, then for every theory T there exists an easy subset of TAUT
which is not T -provably NP-easy.
Proof. Suppose, on the contrary, that there exists a theory T such
that all easy subsets of TAUT are T-provably NP-easy. Then the
following recursively enumerable set of
machines, Ω = {N : N is a
nondeterministic polynomial time clocked Turing machine which is
T-provably sound}, creates a recursive NP-presentation of the class of all
easy subsets of TAUT. By Theorem 5, this implies that there exists
an optimal propositional proof system, giving a contradiction.
The following result can be obtained from the second version of
Theorem 5.
Theorem 9. If there does not exist an optimal propositional proof
system, then for every theory T there exists an NP-easy subset of
TAUT which is not T -provably NP-easy.
The translation of this result to the deterministic case goes along
the following lines.
Definition 14. A set A ⊆ TAUT is T-provably easy if there exists
a deterministic polynomial time clocked Turing machine M fulfilling
(1) M is T-provably sound,
(2) L(M) = A.
Theorem 10. If there does not exist a p-optimal propositional proof
system, then for every theory T there exists an easy subset of TAUT
which is not T -provably easy.
8 Conclusion
In this paper we related the question of the existence of an optimal
propositional proof system to the recursive presentability of the
family of all easy subsets of TAUT by means of polynomial time
clocked Turing machines. The problems of the existence of complete
languages for the classes NP ∩ co-NP and for UP have a
similar characterization. From this characterization a variety of interesting
results about the promise classes NP ∩ co-NP and UP
were derived by recursion-theoretic techniques (see [7], [5]). Although
recursion-theoretic methods seem unable to solve the problem of the
existence of an optimal propositional proof system we believe that
our main results from Section 6 allow the application of these methods
(as it was in case of promise classes, see [5], [6]) to further study
of this problem.
--R
Structural Complexity I (Springer-Verlag
Lectures on Proof Theory.
The complexity of theorem proving procedures
The relative efficiency of propositional proof systems
Complexity classes without machines: On complete languages for UP
On complete problems for NP
Some connections between presentability of complexity classes and the power of formal systems of reasoning
Complete Problems for Promise Classes by Optimal Proof Systems for Test Sets
Propositional proof systems
On the structure of sets in NP and other complexity classes
On optimal algorithms and optimal proof systems
Optimal proof systems for Propositional Logic and complete sets
Computational Complexity
Classes of recursively enumerable sets and their decision problems
On an optimal quantified propositional proof system and a complete language for NP
--TR
Complexity classes without machines: on complete languages for UP
Some Connections between Representability of Complexity Classes and the Power of Formal Systems of Reasoning
On Complete Problems for NP ∩ co-NP
Optimal Proof Systems for Propositional Logic and Complete Sets
On an Optimal Quantified Propositional Proof System and a Complete Language for NP ∩ co-NP
Complete Problems for Promise Classes by Optimal Proof Systems for Test Sets
The complexity of theorem-proving procedures
--CTR
Christian Glaßer , Alan L. Selman , Samik Sengupta, Reductions between disjoint NP-pairs, Information and Computation, v.200 n.2, p.247-267, 1 August 2005 | complexity of computation;complexity classes;complexity of proofs;classical propositional logic
607032 | Concepts and realization of a diagram editor generator based on hypergraph transformation. | Diagram editors which are tailored to a specific diagram language typically support either syntax-directed editing or free-hand editing, i.e., the user is either restricted to a collection of predefined editing operations, or he is not restricted at all, but misses the convenience of such complex editing operations. This paper describes DIAGEN, a rapid prototyping tool for creating diagram editors which support both modes in order to get their combined advantages. Created editors use hypergraphs as an internal diagram model and hypergraph parsers for syntactic analysis whereas syntax-directed editing is realized by programmed hypergraph transformation of these internal hypergraphs. This approach has proven to be powerful and general in the sense that it supports quick prototyping of diagram editors and does not restrict the class of diagram languages which it can be applied to. | Introduction
Diagram editors are graphical editors which are tailored to a specific diagram
language; they can be distinguished from pure drawing tools by their capability
of "understanding" edited diagrams to some extent. Furthermore, diagram
editors do not allow to create arbitrary drawings, but are restricted to visual
components which occur in the diagram language. For instance, an editor for
UML class diagrams typically does not allow to draw a transistor symbol
which would be possible in a circuit diagram editor. Current diagram editors
support either syntax-directed editing or free-hand editing.
Syntax-directed editors provide a set of editing operations. Each of these operations
is geared to modify the meaning of the diagram. This editing mode
requires an internal diagram model that is primarily modified by the opera-
tions; diagrams are then updated according to their modified model. These
models are most commonly described by some kind of graph; editing operations
are then represented by graph transformations (e.g. [1,2]).
Diagram editors providing free-hand editing are low-level graphics editors
which allow the user to directly manipulate the diagram. The graphics editor
becomes a diagram editor by offering only pictorial objects which are used by
the visual language and by combining it with a parser. A parser is necessary
for checking the correctness of diagrams and analyzing the syntactic structure
of the diagram. There are grammar formalisms and parsers that do not require
an internal diagram model as an intermediate diagram representation,
but operate directly on the diagram (e.g., constraint multiset grammars [3]).
Other approaches use an internal model which is analyzed by the parser (e.g.,
VisPro [4]). Again, graphs are the most common means for describing such a
model.
The advantage of free-hand editing over syntax-directed editing is that a diagram
language can be defined by a concise (graph) grammar, and editing
operations can be omitted. The editor does not force the user to edit diagrams
in a certain way since there is no restriction to predefined editing operations.
However, this may turn out to be a disadvantage since editors permit to create
any diagram; they do not offer explicit guidance to the user. Furthermore, free-hand
editing requires a parser and is thus restricted to diagrams and (graph)
grammars which offer efficient parsers.
Existing diagram editors either support syntax-directed editing or free-hand
editing. An editor that supports both editing modes at the same time would
combine the positive aspects of both editing modes and reduce their negative
ones. Despite this observation, there is only one such proposal known to us,
and it has not yet been realized: Rekers and Schürr propose to use two kinds of
graphs as internal representations of diagrams [5]: the spatial relationship graph
(SRG) abstracts from the physical diagram layout and represents higher level
spatial relations. Additionally, an abstract syntax graph (ASG) that represents
the logical structure of the diagram is kept up-to-date with the SRG. Context-sensitive
graph grammars are used to define the syntax of both graphs. Free-hand
editing of diagrams is planned to modify the first graph, syntax-directed
editing is going to modify the second. In each case, the other graph is modified
accordingly. Therefore, a kind of diagram semantics is available by the ASG.
However, this approach requires almost a one-to-one relationship between SRG
and ASG. This is not required in the approach of this paper. We will come
back to this approach in the conclusions (cf. Section 6).
This paper describes DiaGen, a rapid-prototyping tool for creating diagram
editors that support both editing modes at the same time. DiaGen (Diagram
editor Generator) supports free-hand editing based on an internal hypergraph
model which is parsed according to some hypergraph grammar. Attribute evaluation
which is directed by the syntactic structure of the diagram is then used
for creating a user-specified semantic representation of the diagram. This free-hand
editing mode is seamlessly extended by a syntax-directed editing mode,
which also requires an automatic layout mechanism for diagrams. Support for
automatic diagram layout which is used for both syntax-directed editing and
free-hand editing is briefly outlined, too.
The next section gives an overview of the DiaGen tool and the common architecture
of editors being created with DiaGen. Section 3 then explains the
free-hand editing mode of these editors and the diagram analysis steps which
are necessary for translating freely edited diagrams into some semantic rep-
resentation. The integration of additional syntax-directed editing operations
into such editors is explained in Section 4. An automatic layout mechanism,
which is required by syntax-directed editing, is outlined in Section 5. Section 6
concludes the paper.
DiaGen provides an environment for rapidly developing diagram editors. This
section first outlines this environment and how it is used for creating a diagram
editor that is tailored to a specific diagram language. Each of such DiaGen
editors is based on the same editor architecture which is adjusted to the specific
diagram language. This architecture is described afterwards.
2.1 The DiaGen environment
DiaGen is completely implemented in Java and consists of an editor frame-work
and a program generator. DiaGen is free software and can be down-loaded
from the DiaGen web site [6].
Fig. 1 shows the structure of DiaGen and the process of using it as a rapid-prototyping
tool for developing diagram editors. The framework, as a collection
of Java classes, provides the generic editor functionality which is necessary
for editing and analyzing diagrams. In order to create an editor for a specific
Fig. 1. Generating diagram editors with DiaGen.
diagram language, the editor developer primarily has to supply a specifica-
tion, which textually describes syntax and semantics of the diagram language.
Additional program code which is written "manually" can be supplied, too.
Manual programming is necessary for the visual representation of diagram
components on the screen and for processing specific data structures of the
problem domain, e.g., for semantic processing when using the editor as a component
of another software system. The specification is then translated into
Java classes by the program generator.
The generated classes, together with the editor framework and the manually
written code, implement an editor for the specified diagram language. This
editor can run as a stand-alone program. But it can also be used as a software
component since the editor framework as well as the generated program code
is conformable with the JavaBeans standard, the software component model
for Java. Common integrated development environments (IDEs, e.g., JBuilder
by Inprise/Borland, VisualCafe by Symantec, or Visual Age for Java by IBM)
can be used to visually plug in generated editors into other software systems
without much programming effort.
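As an illustration only — the editor class below is a stand-in, not DiaGen's actual generated API — plugging such an editor bean into a Swing application could look roughly like this:

import javax.swing.JFrame;
import javax.swing.JPanel;

public class EmbedEditorDemo {
    public static void main(String[] args) {
        // Stand-in for a generated editor bean (e.g. a hypothetical FlowchartEditor);
        // generated editors are JavaBeans and visual components at the same time.
        JPanel editor = new JPanel();

        JFrame frame = new JFrame("Generated diagram editor");
        frame.getContentPane().add(editor);   // embed the editor component
        frame.setSize(600, 400);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}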
Diagram editors which have been developed using DiaGen (such editors are
called "DiaGen editors" in the following) provide the following features:
. DiaGen editors always support free-hand editing. The editor framework
contains a generic drawing tool which is adjusted to the specified diagram
language by the program generator. The visual representation of diagram
components which are used by the drawing tool has to be supplied by the
editor developer. The editor framework provides an extensive class library
for that purpose. Diagrams that are drawn using the drawing tool are internally
modeled by hypergraphs which are analyzed primarily by a hypergraph
parser (cf. Section 3). The hypergraph grammar which is used by the hyper-graph
parser is the core of the diagram language specification. The analysis
results are used to provide user feedback on diagram parts which are not
correct with respect to the diagram language.
. Diagrams which are created using a DiaGen editor are translated into a
semantic representation. This process is driven by the syntactic analysis and
makes use of program code and data structures which are provided as "ed-
itor specific program code" in Fig. 1. The reverse translation, i.e., creating
diagrams from external representations, is also supported by a mechanism
that is similar to the one of syntax-directed editing operations.
. DiaGen editors optionally support syntax-directed editing, too, if the editor
developer has specified syntax-directed editing operations. These operations
are primarily hypergraph transformations which modify the internal
hypergraph model of edited diagrams (cf. Section 4).
DiaGen editors can be specified and developed in a rapid prototyping
fashion without any syntax-directed editing operation. Any diagram of the
diagram language can be created by free-hand editing only. Desirable editing
operations can be added later.
. Automatic layout is an optional DiaGen editor feature, too, but which is
obligatory when specifying syntax-directed operations. The automatic lay-out
mechanism adjusts the diagram layout after applying syntax-directed
editing operations which have modified the internal diagram model. Automatic
layout also assists free-hand editing: After each layout modification
by the user, the layout mechanism changes the diagram such that the
structure of the diagram remains unchanged. DiaGen offers constraints for
specifying the layout mechanism in a declarative way (cf. Section 5), or a
programming interface for plugging in other layout mechanisms. DiaGen
comes with some general layouting mechanisms like a force-driven layout
and simple constraint propagation methods which can be parameterized by
the editor developer.
The rest of this paper presents the concepts and realization of these features
by means of a formal specification based on hypergraph transformation and
generating the editor using such a specification. Each of these editors has the
same architecture which is considered next.
2.2 The DiaGen editor architecture
Fig. 2 shows the structure which is common to all DiaGen editors and which
is described in the following paragraphs. Ovals are data structures, and rectangles
represent functional components. Gray rectangles are parts of the editor
framework which have been adjusted by the DiaGen program generator based
on the specification of the specific diagram language. Flow of information is
represented by arrows. If not labeled, information flow means reading resp.
creating the corresponding data structures.
The editor supports free-hand editing by means of the included drawing tool
which is part of the editor framework, but which has been adjusted by the
Fig. 2. Architecture of a diagram editor based on DiaGen.
program generator. With this drawing tool, the editor user can create, arrange
and modify diagram components which are specific to the diagram language.
Editor specific program code which has been supplied by the editor developer
is responsible for the visual representation of these language specific compo-
nents. Examples are rectangular text boxes or diamond-shaped conditions in
flowcharts. Fig. 3 shows a screenshot of such an editor whose visual appearance
is characterized by its drawing tool. When components are selected, so-called
handles - like in conventional drawing tools - show up which allow to move or
modify single or grouped diagram components like with common off-the-shelf
drawing tools (cf. Fig. 9a). The drawing tool creates the data structure of
the diagram as a set of diagram components together with their attributes
(position, size, etc.).
The sequence of processing steps which starts with the modeler and ends with
attribute evaluation (cf. Fig. 2) realizes diagram analysis which is necessary for
free-hand editing: The modeler first transforms the diagram into an internal
model, the hypergraph model. The task of analyzing this hypergraph model is
quite similar to familiar compiler techniques: The reducer - which corresponds
to the scanner of a compiler - performs some kind of lexical analysis and
creates a reduced hypergraph model which is then syntactically analyzed by the
hypergraph parser. This processing step identifies maximal parts of the diagram
which are (syntactically) correct and provides visual feedback to the user by
coloring each such subdiagram with a different color. A correct diagram is thus
entirely colored with just a single color, and errors are indicated by missing
colors. Driven by the syntactic structure of each subdiagram and similar to
the semantic analysis step of compilers, attribute evaluation is then used to
create a semantic representation for each of these subdiagrams.
Fig. 3. Screenshot of a diagram editor for flowcharts.
The layouter modifies attributes of diagram components and thus the diagram
layout by using information which has been gathered by the reducer and the
parser or by attribute evaluation (cf. Section 5). The layouter is necessary for
realizing syntax-directed editing: Syntax-directed editing operations modify
the hypergraph model by means of the hypergraph transformer and add or
remove components to resp. from the diagram. The visual representation of
the diagram and its layout is then computed by the layouter.
These processing steps, which have been outlined referring to Fig. 2, are described
in more detail in the following sections.
3 Free-Hand Editing
This section describes the processing steps of a DiaGen editor which are used
for free-hand editing and which are shown in Fig. 2. DiaGen has been used for
creating editors for many diagram languages (e.g., UML diagrams, ladder di-
agrams, Petri nets). As a sample diagram language, this paper uses flowcharts
although it is an admittedly simple language. However, other languages are
less suited for presentation in a paper.
3.1 The hypergraph model
Each diagram consists of a finite set of diagram components, each of which is
determined by its attributes. For flowcharts, there are rectangular text boxes
and diamond-shaped conditions whose positions are defined by their x and y
coordinates and their size by a width and a height attribute. Vertical as well as
horizontal lines and arrows have x and y coordinates of their starting and end
points on the canvas. However, attributes describe an arrangement of diagram
components only in terms of numbers. The meaning of a diagram is determined
by the diagram components and their spatial arrangement. The specific arrangement
of flowchart components is made up of boxes and diamonds which
are connected by arrows and lines in a very specific way. Arrangements can
always be described by spatial relationships between diagram components.
For that purpose, each diagram component typically has several distinct attachment
areas at which it can be connected to other diagram components.
A flowchart diamond, e.g., has its top vertex as well as its left and right one
where it can be connected to lines and arrows, whereas lines and arrows have
their end points as well as their line (please note that arrows can be connected
to the middle of another arrow as shown in Fig. 3) as attachment areas. Connections
can be established by spatially related (e.g., overlapping) attachment
areas as with flowcharts where an arrow has to end at an exact position in
order to be connected to a diamond.
DiaGen uses hypergraphs to describe a diagram as a set of diagram components
and the relationships between attachment areas of "connected" com-
ponents. Hypergraphs consist of two finite sets of nodes and hyperedges (or
simply edges for short). Each hyperedge carries a type and is connected to an
ordered sequence of nodes. The sequence has a certain length which is called
arity of the hyperedge and which is determined by the type of the edge. Each
node of this sequence is called "visited" by the hyperedge. Familiar directed
edge-labeled graphs are special hypergraphs where each hyperedge has arity 2.
Hypergraphs are an obvious means for modeling diagrams: Each diagram component
is modeled by a hyperedge. The kind of diagram component is the hyperedge
type, the number of attachment areas is its arity. Attachment areas
are modeled by nodes which are visited by the hyperedge. The sequence of
visited nodes determines which attachment area is modeled by which node.
The set of diagram components is thus represented by a set of nodes and a
set of hyperedges where each node is visited by exactly one hyperedge. Relationships
between attachment areas are modeled by hyperedges of arity 2.
They carry a type which describes the kind of relationship between related
attachment areas.
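A minimal sketch of such a hypergraph model as Java data structures (illustrative only; these are not DiaGen's actual classes):

import java.util.ArrayList;
import java.util.List;

// Nodes model attachment areas of diagram components.
class Node { }

// A hyperedge carries a type and visits an ordered sequence of nodes;
// the length of this sequence is the arity of the edge.
class Hyperedge {
    final String type;        // e.g. "box", "vArrow", or a relation type such as "flowIn"
    final List<Node> visited; // position i corresponds to attachment area i

    Hyperedge(String type, List<Node> visited) {
        this.type = type;
        this.visited = visited;
    }

    int arity() { return visited.size(); }
}

// A hypergraph model: component edges (one per diagram component) plus
// binary relation edges between nodes of related attachment areas.
class HypergraphModel {
    final List<Node> nodes = new ArrayList<>();
    final List<Hyperedge> componentEdges = new ArrayList<>();
    final List<Hyperedge> relationEdges = new ArrayList<>(); // arity 2
}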
Fig. 4 shows the hypergraph model of a subdiagram of the one shown in Fig. 3.
Nodes are depicted by black dots. Component edges which represent diagram
components are shown as gray rectangles that are connected to visited nodes
by thin lines. Line numbers represent the sequence of visited nodes. Relation
edges which represent relationships between attachment areas are depicted
as arrows between connected nodes. The arrow direction indicates the node
sequence. Fig. 4 shows the hypergraph in a similar way as the represented sub-
diagram. Rectangular boxes and diamond-shaped conditions are represented
by box edges resp. cond edges with arity 2 resp. 3. Vertical and horizontal
arrows resp. lines are shown as vArrow, hArrow, vLine, and hLine edges, resp.
Fig. 4. A part of the flowchart which is shown in Fig. 3 and its corresponding
hypergraph model.
Relationship edge types are flowIn, flowOut, and join. The relationship of a
vertical arrow which ends at the upper attachment area of a box or a diamond
is represented by a flowIn relation between the "end node" of the arrow and
the "upper node" of the corresponding vArrow and box edges. A flowOut relationship
is used in a similar way for leaving arrows. A join relation connects
an arrow end with lines or arrows.
Hypergraph models are created by the modeler of DiaGen editors: The modeler
first creates component edges for each diagram component and nodes for
each of their attachment areas. Afterwards, the modeler checks for each pair
of attachment areas whether they are related as defined in the specification. 2
The language specification describes such relationships in terms of relations on
attribute values of corresponding attachment areas. E.g., in the flowchart ex-
ample, the end attachment area of a vertical arrow and the upper attachment
area of a rectangular box are flowIn-related if both attachment areas overlap,
i.e., have close positions on the canvas. For each relationship which is detected,
the modeler adds a corresponding relation edge between corresponding nodes.
2 For efficiency reasons, only pairs of attachment areas with overlapping bounding
boxes are actually considered.
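The modeling step can be sketched as follows, reusing the classes from the previous sketch; the interfaces for components, attachment areas and specified relations are assumptions made only for this illustration:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class Modeler {
    // Assumed interfaces, introduced only for this illustration.
    interface AttachmentArea { }                       // exposes position/size attributes
    interface Component {
        String type();                                 // e.g. "box", "cond", "vArrow"
        List<AttachmentArea> attachmentAreas();
    }
    interface RelationTest {                           // a specified relationship, e.g. "flowIn"
        String type();
        boolean holds(AttachmentArea a, AttachmentArea b);
    }

    HypergraphModel model(List<Component> components, List<RelationTest> relations) {
        HypergraphModel h = new HypergraphModel();
        Map<AttachmentArea, Node> nodeOf = new HashMap<>();

        // One component edge per diagram component, one node per attachment area.
        for (Component c : components) {
            List<Node> visited = new ArrayList<>();
            for (AttachmentArea a : c.attachmentAreas()) {
                Node n = new Node();
                nodeOf.put(a, n);
                h.nodes.add(n);
                visited.add(n);
            }
            h.componentEdges.add(new Hyperedge(c.type(), visited));
        }

        // Add a relation edge for every (ordered) pair of attachment areas that
        // satisfies a specified relation; in DiaGen only pairs with overlapping
        // bounding boxes are actually tested (see footnote 2 above).
        for (AttachmentArea a : nodeOf.keySet())
            for (AttachmentArea b : nodeOf.keySet())
                if (a != b)
                    for (RelationTest r : relations)
                        if (r.holds(a, b))
                            h.relationEdges.add(new Hyperedge(r.type(),
                                    List.of(nodeOf.get(a), nodeOf.get(b))));
        return h;
    }
}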
3.2 The reduced hypergraph model
Hypergraph models tend to be quite large even for small diagrams. For in-
stance, Fig. 4 shows only a small portion of the hypergraph model of the
really small flowchart of Fig. 3. The hypergraph model represents each diagram
component and each relationship between them directly. The structure
and meaning of a diagram, however, is generally represented in terms of larger
groups of components and their relationships. For flowcharts, e.g., the crucial
information is contained by the set of boxes and conditions which are interconnected
by lines and arrows. The specific path of lines and arrows between
connected boxes is irrelevant. DiaGen editors therefore do not analyze the
hypergraph model directly, but first identify such groups of components and
relationships. Similar to common compiler techniques where lexical analysis
is used to group input stream characters into tokens (e.g., identifiers and key-
words) while leaving other characters unconsidered (e.g., comments), the reducer
searches for all matches of specified patterns and creates a reduced hypergraph
model which then represents the diagram structure directly.
Fig. 5. Some reduction rules for flowcharts.
Similar to compiler generators which require a specification of lexical analy-
sis, the reducer has to be specified for a specific diagram language. DiaGen
provides reduction rules to this end: Each rule consists of a pair (P, R) of
hypergraphs and additional application conditions. P is the pattern whose occurrences
are searched in the hypergraph model. The hypergraph R ("result")
describes a modification to the reduced hypergraph for each match of P which
also satisfies the application conditions.
Fig. 5 shows five reduction rules for flowcharts in the form P → R. The pattern
of the rightmost rule actually consists of the vArrow edge with its three visited
nodes only. The gray, crossed out sub-hypergraphs are negative application
conditions: A match for the vArrow edge is used for rule application if and
only if none of the three crossed out sub-hypergraphs can be matched as well,
i.e., the match is valid if there is no additional flowIn, continue, or connect
edge which is connected to the start node of the vArrow edge (continue edges
are not further considered here). The hypergraph R of each rule shows the
hypergraph which is added to the reduced hypergraph model for each valid
match of the P -hypergraph. Same node labels indicate corresponding nodes of
the hypergraph model and the reduced one. Hypergraph model nodes which lie
in different pattern occurrences (not necessarily of different patterns) always
correspond to the same node of the reduced model. Three special cases have
to be mentioned here:
Fig. 6. The reduced hypergraph model of the flowchart of Fig. 3.
. Nodes which are matched by no P -hypergraph of any rule do not have
corresponding nodes in the reduced model.
. If there are nodes which lie in different pattern occurrences where none of
these pattern nodes has a corresponding node in its R graph, these nodes
do not have corresponding nodes in the reduced model.
. Two or more P-nodes may correspond to a single R-node (e.g., in
the second and fourth rule). All the nodes of the hypergraph model which
match these "identified" P -nodes correspond to a single node of the reduced
hypergraph model.
Fig. 6 shows the reduced hypergraph model of the flowchart of Fig. 3 which
is created by these reduction rules. The structure of this model is similar to
the structure of the hypergraph model. Because of the reduction rules which
identify nodes, a much cleaner hypergraph model is created. The conn edges
are grayed out since they are actually not needed for the following syntactic
analysis; the corresponding reduction rules could be omitted for editors that
support free-hand editing only. Section 4 however shows why they are needed in the context
of syntax-directed editing operations.
The concept of reduction rules is similar to hypergraph transformation rules
L ::= R (or L → R) with L (left-hand side, LHS) and R (right-hand side, RHS)
being hypergraphs [7,8]. A transformation rule L ::= R is applied to a
hypergraph H by finding L as a subgraph of H and replacing this match
by R, obtaining hypergraph H′. We say H′ is derived from H in one (derivation)
step. A derivation sequence is a sequence of derivation steps where the
resulting hypergraph of each step serves as the starting hypergraph of the next step. The
following observations show that specifying the reducer and the reducing process
for a specific diagram language would be rather difficult if the reducer had
been defined in terms of such derivation sequences from hypergraph models to
reduced ones. Instead, the reducer applies all reduction rules to all occurrences
of their left-hand sides in some kind of parallel fashion:
. Patterns frequently overlap. This is so since the meaning of a group of
diagram components and relationships - and it is this "meaning" that the edges
of the reduced hypergraph model try to represent - often depends
on context which is part of another group. E.g., the last rule of Fig. 5 uses a
flowOut edge as (negative) context which also occurs in the pattern of the
third rule. Applying one rule would change the context of the other one if
regular hypergraph transformations were used. It would be a difficult task
to specify the desired reducing semantics.
. There are in general many different derivation sequences starting at a specific
hypergraph which would produce different reduced hypergraphs because
of these overlapping patterns. The editor developer would have to take measures
to avoid this nondeterminism. However, it is a nontrivial task to set
up such confluent sets of transformations [9].
Instead, reduction rules are applied as follows: All possible matches of all rule
patterns are searched first without changing the hypergraph. But only those
matches are selected which satisfy the corresponding application conditions. In
a second step, the corresponding result hypergraphs are instantiated in parallel
for each valid match of the corresponding pattern. All these hypergraphs are
connected by common nodes according to the correspondence between nodes
of the hypergraph model and the reduced one. 3
The reduced hypergraph model now directly represents the structure of the
diagram which is syntactically analyzed by the parser.
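The two-phase rule application described above can be sketched as follows (again illustrative; Match and ReductionRule are assumed helper abstractions, not DiaGen's API):

import java.util.ArrayList;
import java.util.List;

class Reducer {
    // Assumed helper abstractions, introduced only for this illustration.
    interface Match { }                                      // one occurrence of a pattern P
    interface ReductionRule {
        List<Match> findMatches(HypergraphModel h);          // all occurrences of P in h
        boolean conditionsHold(Match m, HypergraphModel h);  // e.g. negative application conditions
        void instantiateResult(Match m, HypergraphModel h, HypergraphModel reduced);
                                                             // add R for this match, reusing the
                                                             // nodes shared with other matches
    }

    HypergraphModel reduce(HypergraphModel h, List<ReductionRule> rules) {
        // Phase 1: collect all valid matches of all rules without modifying h.
        List<ReductionRule> matchedRule = new ArrayList<>();
        List<Match> validMatches = new ArrayList<>();
        for (ReductionRule r : rules)
            for (Match m : r.findMatches(h))
                if (r.conditionsHold(m, h)) {
                    matchedRule.add(r);
                    validMatches.add(m);
                }

        // Phase 2: instantiate all result hypergraphs "in parallel", i.e. each
        // against the unmodified model; they are glued together via common nodes.
        HypergraphModel reduced = new HypergraphModel();
        for (int i = 0; i < validMatches.size(); i++)
            matchedRule.get(i).instantiateResult(validMatches.get(i), h, reduced);
        return reduced;
    }
}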
3 In a formal treatment, each reduction rule (P, R) represents a hypergraph morphism
P → P ∪ R, where P ∪ R is the union of the pattern with the result hypergraph.
Corresponding nodes of P and R as well as identified nodes of R are identified in
P ∪ R. The reduced hypergraph model is computed by first creating the colimit of all
match morphisms of the different patterns into the hypergraph model together with
all these morphisms P → P ∪ R, and then removing the edges and "unnecessary"
nodes of the hypergraph model from the colimit hypergraph [10].
3.3 Parsing
The syntactic structure of a diagram is described in terms of its reduced hypergraph
model, i.e., a diagram language corresponds to a class of hypergraphs.
In the literature, there exist two main approaches for specifying graph or hypergraph
classes. The first one uses a graph schema which is a kind of Entity-Relationship
diagram that describes how edges and nodes of certain types may
interconnect (e.g., EER [11]). The other one uses some kind of graph or hyper-
graph grammar (e.g. [12]) which generalizes the idea of Chomsky grammars
for strings which are also used by standard compiler generators [13]. Because
of the similarity of diagram analysis with program analysis being performed by
compilers and the availability of derivation trees and directed acyclic graphs
(DAGs, see below) which easily allow to represent the syntactic structure of
a diagram, DiaGen uses a hypergraph grammar approach for specifying the
class of reduced hypergraph models of the diagram language.
As already mentioned, hypergraph grammars are similar to string grammars.
Each hypergraph grammar consists of two finite sets of terminal and nonterminal
hyperedge labels and a starting hypergraph which contains nonterminally
labeled hyperedges only. Syntax is described by a set of hypergraph transformation
rules which are called productions in this context. The hypergraph
class or language of the grammar is defined by the set of terminally labeled
hypergraphs which can be derived from the starting hypergraph in a finite
derivation sequence.
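Writing S for the starting hypergraph and ⇒* for derivability in finitely many derivation steps, the language of a hypergraph grammar G is therefore
\[ L(G) = \{\, H \;:\; S \Rightarrow^{*} H \ \text{and every hyperedge of } H \ \text{carries a terminal label} \,\}. \]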
There are different types of hypergraph grammars which impose restrictions
on the LHS and RHS of each production as well as the allowed sequence of
derivation steps. Context-free hypergraph grammars are the simplest ones:
each LHS has to consist of a single nonterminally labeled hyperedge together
with the appropriate number of nodes. Application of such a production removes
the LHS hyperedge and replaces it by the RHS. Matching node labels
of LHS and RHS determine how the RHS has to fit in after removing the
LHS hyperedge. The productions of Fig. 7 are context-free ones. Productions
L ::= R_1, L ::= R_2, ... with the same LHS are drawn as L ::= R_1 | R_2 | ...
Actually, Fig. 7 shows the productions of a hypergraph grammar whose language
is just the set of all reduced hypergraph models of structured flowcharts,
i.e., flowcharts whose blocks have a single entry and a single exit only. The
types statement, condition, and conn are terminal hyperedge labels being
used in reduced hypergraph models. The set of nonterminal labels consists
of Flowchart, BlockSeq, Block, and Conn. Flowchart edges do not connect to
any node (arity 0). The starting hypergraph consists of just a single Flowchart
edge. Again, conn edges, and now Conn edges, too, are grayed out since they
are actually not required for free-hand editing, but for syntax-directed editing
(cf. Section 4).
Context-free hypergraph grammars can describe only very limited hypergraph
languages [12,14] and, therefore, are not suited for specifying the syntax of
many diagram languages. 4
4 Actually, the only diagram languages that we know of which can be
described by context-free grammars are Nassi-Shneiderman diagrams [15], syntax
diagrams [16], and flowcharts as used in this paper.
Fig. 7. Productions of a grammar for the reduced hypergraph models of flowcharts.
Context-free hypergraph grammars with embeddings
are more expressive than context-free ones. They additionally allow
embedding productions L ::= R where the RHS R extends the LHS L (i.e., L ⊆ R)
by some edges and nodes, which are "embedded" into the context provided
by the LHS when applying such a production. This very limited treatment of
context has been chosen since it has proven sufficient for all diagram languages
which have been treated with DiaGen, but still allows for efficient parsing;
context-free hypergraph grammars with embeddings even appear to be suitable
for all possible kinds of diagram languages. 5 Parsing algorithms and a
more detailed description of both grammar types can be found in [19,17,10].
The most prominent feature of the parsing algorithms being used in DiaGen
editors is their capability of dealing with diagram errors: Erroneous diagrams
resp. their reduced hypergraph models are not just rejected. Instead, maximal
subdiagrams resp. sub-hypergraphs are identified which are correct with respect
to the hypergraph grammar. Feedback about these correct subdiagrams
is provided to the user by drawing all diagram components whose representing
edges belong to the same correct sub-hypergraph in the same color.
The result of this step of diagram analysis is the derivation structure of the
reduced hypergraph which describes the syntactic structure of the diagram.
The derivation structure - similar to context-free string grammars - is a
derivation tree if a context-free hypergraph grammar is used (for context-free
hypergraph grammars with embeddings, it is a directed acyclic graph, the
derivation DAG [17,10]). The tree root represents the nonterminal edge of
the starting hypergraph, and the (terminal) edges of the reduced hypergraph
model are represented as leaves of the tree.
5 Plain context-free grammars with embeddings may be too restricted for some
diagram languages, e.g., UML class diagrams [17]. However, DiaGen allows
productions to be restricted by application conditions. With this feature, DiaGen can be
applied to real-world languages like Statecharts and UML class diagrams [18,6].
Fig. 8. Derivation tree of the reduced hypergraph model of Fig. 6 according to the
grammar of Fig. 7 when omitting any conn edge.
Fig. 8 shows the derivation tree of
the reduced hypergraph model of Fig. 6. Any conn edge, however, has been
omitted for simplicity. Edges are written as their edge labels together with the
labels of their visited nodes in parentheses.
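A minimal, illustrative representation of such a derivation structure might look as follows; it is not DiaGen's internal data structure, but it captures the shape of information the editing operations discussed later rely on: inner nodes stand for nonterminal edges (e.g. Block(b,c)), leaves for the terminal edges of the reduced hypergraph model. It reuses the Hyperedge class from the earlier sketch.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative node of the derivation tree/DAG produced by the parser.
class DerivationNode {
    Hyperedge edge;                                      // the (non)terminal edge this node stands for
    List<DerivationNode> children = new ArrayList<>();   // empty for leaves (terminal edges)

    boolean isLeaf() { return children.isEmpty(); }
}
```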
3.4 Attribute evaluation
The task of the final step of diagram analysis is translating the diagram into
some data structure which is specific for the application domain where the
diagram editor is used. If, e.g., the flowchart editor is used as part of a programming
tool, it should probably create some textual representation of the
flowchart. For that purpose, DiaGen uses a common syntax-directed translation
mechanism based on attribute evaluation similar to those of attribute
string grammars [13]: Each hyperedge carries some attributes. Number and
types of these attributes which have to be specified by the editor developer
depend on the hyperedge label. Productions of the hypergraph grammar may
be augmented by attribute evaluation rules which compute values of some attributes
that are accessible through those edges which are referred to by the
production.
After parsing, attribute evaluation works as follows: Each hyperedge which
occurs in the derivation tree (or DAG in general) has a distinct number of
attributes; grammar productions which have been used for creating the tree
impose rules on how attribute values are computed as soon as the values of others
are known. Some (or even all) attribute values of terminal edges are already
known; they have been derived from attributes of the diagram components
during the reduction step (this feature has been omitted in Section 3.2). The
attribute evaluation mechanism of the editor then computes a valid evaluation
order. Please note that DiaGen does not require a specific form of attribute
definition like S- or L-attributed definitions [13]; at least when dealing with
derivation DAGs, these forms would fail. The editor developer, therefore, is
allowed to define evaluation rules rather freely for each grammar production,
and the evaluation mechanism has to determine an evaluation order for each
diagram analysis run anew. Of course, the developer has to be careful in order
not to introduce inconsistencies or cyclic attribute dependencies.
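One way to determine such an evaluation order anew for each analysis run is a topological sort of the attribute dependency graph. The sketch below is generic and does not reproduce DiaGen's actual evaluator; it merely illustrates the mechanism.

```java
import java.util.*;

// Determine a valid attribute evaluation order (Kahn's algorithm).
// dependsOn must contain every attribute as a key (empty set if it has no dependencies);
// a remaining cycle means the evaluation rules are inconsistent.
class AttributeScheduler {
    static <A> List<A> evaluationOrder(Map<A, Set<A>> dependsOn) {
        Map<A, Integer> pending = new HashMap<>();
        Map<A, List<A>> dependents = new HashMap<>();
        for (Map.Entry<A, Set<A>> e : dependsOn.entrySet()) {
            pending.put(e.getKey(), e.getValue().size());
            for (A dep : e.getValue())
                dependents.computeIfAbsent(dep, k -> new ArrayList<>()).add(e.getKey());
        }
        Deque<A> ready = new ArrayDeque<>();
        for (Map.Entry<A, Integer> e : pending.entrySet())
            if (e.getValue() == 0) ready.add(e.getKey());
        List<A> order = new ArrayList<>();
        while (!ready.isEmpty()) {
            A a = ready.poll();
            order.add(a);
            for (A d : dependents.getOrDefault(a, List.of()))
                if (pending.merge(d, -1, Integer::sum) == 0) ready.add(d);
        }
        if (order.size() != pending.size())
            throw new IllegalStateException("cyclic attribute dependencies");
        return order;
    }
}
```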
Syntax-directed translation in the context of flowcharts is rather simple. An
obvious data structure representing a flowchart is a textual program, e.g., in
Pascal-like notation, which is possible since flowcharts are syntactically well
structured (at least when using the hypergraph grammar shown in Fig. 7).
For that purpose, each hyperedge needs a single attribute of type String: the
terminal hyperedges contain the text of their corresponding diagram components
whereas the nonterminal hyperedges contain the program text of their
sub-diagram. The attribute evaluation rules are straightforward.
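For illustration, rules of the following kind could compute these String attributes. The production shapes assumed in the comments are only inferred from Fig. 7 and are not guaranteed to match the actual grammar; the class is an illustrative sketch, not part of DiaGen.

```java
// Sketch of String-valued attribute rules for a Pascal-like translation of flowcharts.
class FlowchartRules {
    // Block ::= statement            =>  text(Block) = text(statement) + ";"
    static String blockFromStatement(String stmtText) {
        return stmtText + ";";
    }

    // BlockSeq ::= Block BlockSeq    =>  a sequence concatenates the texts
    static String blockSeq(String blockText, String restText) {
        return blockText + "\n" + restText;
    }

    // Block ::= condition Block Block (assumed if-then-else shape)
    static String ifThenElse(String condText, String thenText, String elseText) {
        return "if " + condText + " then begin\n" + thenText
             + "\nend else begin\n" + elseText + "\nend;";
    }

    // Flowchart ::= begin BlockSeq   =>  wrap the whole program
    static String flowchart(String bodyText) {
        return "begin\n" + bodyText + "\nend.";
    }
}
```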
Attribute evaluation is the last step of diagram analysis when editing diagrams
by free-hand editing. The following section shows that syntax-directed editing
is seamlessly integrated into DiaGen which means that editors make use of the
diagram analysis as it has been described above even when editing diagrams
in a syntax-directed way.
4 Syntax-directed Editing
As discussed in the introduction, syntax-directed editing has several important
benefits. Other approaches for free-hand editing which do not make use of abstract
internal models (e.g., the Penguins system being based on constraint
multiset grammars [3,20]) cannot extend free-hand editing by syntax-directed
editing, which requires such an abstract model. But since the DiaGen approach
is based on such a model (the hypergraph model), it is quite obvious
to offer syntax-directed editing, too. However, free-hand editing using a
parser requires that the hypergraph grammar remains the only syntax description
of the reduced hypergraph model and thus the diagram language.
Syntax-directed editing operations must not change the syntax of the diagram
language; they can only offer some additional support to the user. This
requirement has two immediate consequences:
• It is possible to specify editing rules that deliberately transform a correct
diagram into an incorrect one with respect to the hypergraph grammar. This
might appear to be an undesired feature, but consider the process of creating
a complex diagram: the intermediate "drawings" need not, and generally do
not, make up a correct diagram; only the final "drawing" has to. In order to support
those intermediate incorrect results, syntax-directed editing operations have
to allow for such "disimprovements", too.
• Editing operations are quite similar to macros in off-the-shelf text and
graphics editors; they combine several actions, which can also be performed
by free-hand editing, into one complex editing operation. However, syntax-directed
editing rules are actually much more powerful than such macros,
which offer only the recording of editing operations and their playback as a
complex operation: syntax-directed editing operations also take care of providing
a valid diagram layout where this is possible (incorrect diagrams in
general have no valid layout). Furthermore, editing operations can take into
account context information, and they may have rather complex application
conditions.
This makes graph transformation an obvious choice for adding syntax-directed
editing to the free-hand editing mode: editing operations are specified
by hypergraph transformations on the hypergraph model, as shown in Fig. 2.
In the following it is explained why hypergraph transformations may have
to use information from the reduced hypergraph model and the derivation
structure, too. Whenever the hypergraph model has been changed by some
transformation, it has to be parsed again. The results of the parser are then
used to indicate correct subdiagrams and to create a valid layout for them
(cf. Section 5). Please note that the hypergraph model is directly modified by
the transformation rules; the modeling step, which is necessary for free-hand
editing, does not take place.
In the following, two examples of editing operations for a flowchart editor
are used for describing specification and realization of syntax-directed editing
operations. The first example demonstrates the use of simple hypergraph
transformation rules whereas the second one shows why additional information
from the reduced hypergraph model as well as the derivation DAG may
be necessary.
4.1 Example 1: Simple hypergraph transformation rules
Fig. 9 shows an example of a syntax-directed editing operation which adds
a new statement below an existing one in a flowchart editor. The situation
just before applying the editing operation is depicted in Fig. 9a. The topmost
statement has been selected which is indicated by a thick border and gray
handles; the editing operation whose hypergraph transformation rule is shown
in Fig. 9b adds a new statement just below this selected one. The result is
shown in Fig. 9c.
The hypergraph transformation rule in Fig. 9b is depicted as before: LHS and
RHS are separated by "→", and corresponding edges and nodes of LHS and RHS
carry the same labels. Host nodes and edges which match the LHS without
an identically labeled counterpart in the RHS are removed when applying
the rule.
Fig. 9. A syntax-directed editing operation which inserts a new statement below a
selected one.
The marked box hyperedge of the LHS indicates that this edge has
to match the hypergraph model edge of the diagram component which has
been selected by the editor user. When applied, this rule removes the flowOut
relation edge which connects the selected statement box with an outgoing
line or arrow (which is not specified here); a new vertical arrow and a new
statement box together with some relation edges are added. After applying
the rule, the resulting hypergraph is reduced and parsed (cf. Fig. 2). The
layouter can then properly lay out the resulting diagram, which now contains
a new statement box (this box carries the default text "Action" in Fig. 9c).
Fig. 10 shows the concrete specification of this simple editing operation together
with its transformation rule. In DiaGen, syntax-directed editing operations
are specified in terms of simple rules and complex operations quite
similar to rules and transformation units in GRACE [21] as shown in the
following.
A rule (add_rule in Fig. 10) is specified by its LHS (as a list of edges) and by how
its RHS "differs" from its LHS, i.e., which edges are removed (indicated by -)
and which ones are added (indicated by +) by the rule. Each hyperedge is again
written as its edge type together with its visited nodes in parentheses. The
node hyperedges are special: they are actually pseudo edges which allow one to
refer to nodes with the same notation as edges. The LHS in Fig. 10 consists of a
box edge, a flowOut edge, some nodes and a node pseudo edge which is used to
rule add_rule:
box(_,a) f:flowOut(a,b) n:node(a)
do -f
{ OperationSupport.createVArrow(n) }
{ OperationSupport.createBox(n) }
operation add_stmt_after_stmt "Add statement" :
specify box b "select statement"
do add_rule(b);
Fig. 10. DiaGen specification of adding a statement below another statement.
refer to node a. Applying the rule removes the flowOut edge (indicated by -f
where f is the edge reference introduced in the LHS). Furthermore, a vArrow
instance etc. are added to the hypergraph model. The Java methods in curly
braces are responsible for creating the corresponding diagram components,
i.e., a vertical arrow and a statement box.
Each syntax-directed editing operation is specified by a complex operation
defined in terms of rules; a control program describes how the operation is
composed of a sequence of rules or more complex control structures. Control
programs in DiaGen have been inspired by [21] and [22], but their semantics
is much simpler because backtracking is not performed [10]. Fig. 10 shows
the operation add_stmt_after_stmt which uses the trivial control program
that simply calls a single rule. The operation in Fig. 10 requires a statement
box as parameter b (indicated by specify box b ...) and simply calls
the add_rule rule that has been described above. The parameter b that is
passed to this rule simply defines a partial match when applying this rule.
The corresponding formal parameters are the first edges which are specified
in the LHS of the "invoked" rule.
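Expressed directly on the hypergraph model of the earlier sketches, applying add_rule roughly amounts to the following. The OperationSupport calls and the box(_, a) pattern are those of Fig. 10; everything else (class names, helper methods) is illustrative and not DiaGen's implementation.

```java
import java.util.List;

// Rough programmatic equivalent of the add_rule transformation on the hypergraph model.
class AddStatementOperation {
    static void apply(Hypergraph model, Hyperedge selectedBox) {
        int a = selectedBox.nodes.get(1);   // box(_, a): second visited node, as in Fig. 10

        // corresponds to "-f": remove the flowOut edge leaving node a
        model.edges.removeIf(e -> e.label.equals("flowOut") && e.nodes.get(0).equals(a));

        // add a fresh vertical arrow and a fresh statement box below the selected one
        int b = freshNode(model), c = freshNode(model);
        model.edges.add(new Hyperedge("vArrow", List.of(a, b)));
        model.edges.add(new Hyperedge("box", List.of(b, c)));
        // ... plus the remaining flowIn/flowOut relation edges; creating the visual
        // components is what Fig. 10 delegates to OperationSupport.createVArrow / createBox
    }

    static int freshNode(Hypergraph model) {
        int id = model.nodes.isEmpty() ? 0 : java.util.Collections.max(model.nodes) + 1;
        model.nodes.add(id);
        return id;
    }
}
```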
An important issue in syntax-directed editing is the question of how to select
those parts of the diagram that are affected by the application of an editing
operation. In DiaGen, this has been solved by adding parameters to complex
operations (indicated by specify box b ... in Fig. 10). When the user selects
an editing operation for application, the editor requests the user to specify a
single diagram component for each of the parameters of the operation. The
hyperedges that internally represent these components specify a partial match
which is then used to select where the operation and its rules have to be
applied. DiaGen simplifies this user interaction process: when a diagram component
is selected, the editor offers those editing operations to the user which
require a diagram component of the selected type as a first parameter. When
the user selects one of those operations, the editor asks for the missing param-
eters. However, many operations, e.g., the add_stmt_after_stmt operation,
require just a single parameter, i.e., no further user interaction is necessary
after selecting the operation.
4.2 Example 2: Utilizing additional information
The former example has been rather simple in the sense that its operation
can be described with just a single transformation rule. Furthermore, it uses
only information which is readily available in the hypergraph model. This
subsection outlines that editing operations are in general more complicated
and have to use additional information beyond the plain hypergraph model.
Fig. 11 shows such an operation in action with screenshots just before and after
applying it. 6
Fig. 11. A syntax-directed editing operation which removes a conditional block.
Its task is removing a conditional block which the user has chosen
by selecting its condition diamond. Unlike the former example, the number
of edges which have to be removed is unknown when the operation is being
specified. It is, moreover, difficult to decide whether a diagram component
and its hyperedge belong to the conditional block when solely considering the
hypergraph model. However, since this is a problem of diagram syntax, it is
quite an easy task when also using syntactic information from the last parsing
step: The operation has to remove all leaves of the Block(d, h)-subtree of the
derivation tree in Fig. 8.
The crucial task of the editing operation is thus to find the Block(d, h)-node
of the derivation tree and - from there - all terminal hyperedges which can
be reached by paths from this tree node. Finally, their corresponding component
edges as well as diagram components have to be identified. Apparently,
editing operations have to take into account information which has been collected
during diagram analysis, i.e., information from the reduced hypergraph
model and from the derivation structure (cf. Fig. 2). DiaGen editors make
this information available by so-called cross-model links which connect corresponding
nodes and edges of hypergraph model, reduced hypergraph model,
and derivation DAG. Path expressions allow one to specify how to navigate within and
between models using these cross-model links. For our sample operation, this
is shown in Fig. 12, which, for lack of space, shows neither these
path expressions nor the hypergraph model, but only the diagram, its reduced
hypergraph model, and its (simplified) derivation tree (cf. Figures 11a, 6,
and 8). Thick arrows indicate how models are used to find, starting from the
selected condition diamond, those terminal statement hyperedges which belong
to the conditional block. Dashed edges show how they correspond to the
diagram components (resp. their component hyperedges which are omitted
here) which have to be removed from the diagram.
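Using the derivation-structure sketch from Section 3, the traversal just described could look roughly as follows. The CrossModelLinks interface is a hypothetical stand-in for DiaGen's cross-model links, and Object stands in for the editor's component type; none of these names come from DiaGen itself.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: from the derivation node of the selected conditional block, collect all
// terminal leaf edges and follow cross-model links back to the diagram components.
class RemoveConditionalBlock {
    interface CrossModelLinks {
        // maps a terminal edge of the reduced hypergraph model to the diagram
        // components it was reduced from (assumed lookup structure)
        List<Object> componentsOf(Hyperedge reducedEdge);
    }

    static List<Object> componentsToRemove(DerivationNode blockNode, CrossModelLinks links) {
        List<Object> result = new ArrayList<>();
        collectLeaves(blockNode, result, links);
        return result;
    }

    private static void collectLeaves(DerivationNode n, List<Object> out, CrossModelLinks links) {
        if (n.isLeaf()) {
            out.addAll(links.componentsOf(n.edge));
        } else {
            for (DerivationNode child : n.children) collectLeaves(child, out, links);
        }
    }
}
```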
Please note that not only statement boxes and condition diamonds have to be
removed by this operation, but also lines and arrows. In order to also match
them by path expressions, these components must have been represented in
the reduced hypergraph model as well as in the derivation tree. This was the
reason for using the conn and Conn edges which, for clarity, have been omitted
in Section 3 and also in the derivation tree of Fig. 12.
6 Actually, Fig. 11a shows the same diagram as Fig. 3, but with a condition selected.
Fig. 12. Using cross-model information for editing operations.
5 Automatic Layout
As it has become clear in the previous section, transformations on the hypergraph
model modify the structure of the internal model, but they do not
describe their effects on the position or the size of the diagram components; an
automatic layout mechanism which considers the diagram syntax is needed.
DiaGen offers two kinds of automatic layout support:
Tailored layout modules can be programmed by hand. Such a layout is connected
to diagram analysis by a generic Java interface to attribute evaluation
(cf. Fig. 2). Information about the syntactic structure of the diagram has to be
prepared by syntax-directed attribute evaluation first. The layout module then
uses this information to compute a diagram layout. Some generic layout modules
have been realized already, e.g., a force-directed layout algorithm (cf. [23])
which is used in a Statechart as well as a UML class diagram editor [18,6].
Programming such a layout module by hand is quite complicated. For reducing
this effort, DiaGen offers constraint-based specification of diagram layout,
with the layout computed by a constraint solver as in earlier work of ours [24]:
The main idea is to describe a diagram layout in terms of values which are
assigned to the attributes of the diagram components (e.g., their position). A
valid diagram layout is specified by a set of constraints on these attributes;
the constraint set is determined by the syntactic structure of the diagram
similar to the syntax-directed translation by attribute evaluation: Hyperedges
of the hypergraph model and terminal as well as nonterminal hyperedges of the
reduced hypergraph model carry additional layout attributes, and reduction
step rules as well as grammar productions are augmented by constraints on
their accessible attributes. These constraints are added to the set of constraints
which specify a diagram layout whenever the corresponding rule or production
is instantiated during the reduction step or parsing process.
It is important to define layout constraints not only in the hypergraph grammar
which is used during the parsing step, but also in the rule set which
specifies the reduction step (cf. Fig. 2). This is so because the reduction step
may "reduce away" the explicit representation of some specific diagram components
(e.g., lines in our flowchart example). If we had restricted specification
of layout constraints to the hypergraph grammar, we would not be able to describe
the layout of those diagram components. For flowcharts, e.g., constraints
have to require a minimum length of lines and arrows.
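As an illustration of the kind of constraints meant here, a reduction rule for a vertical arrow between two statement boxes might contribute constraints like the following. The textual constraint encoding and all names are purely illustrative; this is not the QOCA interface actually used by DiaGen.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative layout constraints attached when a vertical arrow between two boxes is reduced.
class LayoutConstraints {
    static List<String> verticalArrow(String arrow, String upperBox, String lowerBox) {
        List<String> cs = new ArrayList<>();
        cs.add(arrow + ".x1 = " + upperBox + ".centerX");  // arrow leaves the center of the upper box
        cs.add(arrow + ".x2 = " + lowerBox + ".centerX");  // and enters the center of the lower box
        cs.add(arrow + ".y1 = " + upperBox + ".bottom");
        cs.add(arrow + ".y2 = " + lowerBox + ".top");
        cs.add(arrow + ".y2 - " + arrow + ".y1 >= 20");    // minimum arrow length, as required in the text
        return cs;
    }
}
```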
Automatic layout is not restricted to syntax-directed editing. The same information
is also available during free-hand editing. Editors being specified
and generated by DiaGen therefore offer an intelligent diagram mode where
diagram components may be modified arbitrarily, but the other components,
especially their position, may be affected by these modifications, too. The
layouter takes care of modifying the overall appearance of the diagram such
that its syntax is preserved and the layout beautified. This work on intelligent
diagrams is similar to the approach by Chok, Marriott, and Paton [20].
6 Conclusions
This paper has presented DiaGen, a rapid-prototyping tool based on hypergraph
transformation for creating diagram editors that support free-hand
editing as well as syntax-directed editing. By supporting both editing modes
in one editor, it combines the positive aspects of both modes, i.e., unrestricted
editing capabilities and convenient syntax-directed editing. The approach has
proven to be powerful and general in the sense that it supports quick prototyping
of diagram editors and does not restrict the class of diagram languages
which it can be applied to. This has been demonstrated by several
diagram languages for which diagram editors have already been generated,
e.g., flowcharts, Nassi-Shneiderman diagrams [19], syntax diagrams [16], a visual
#-calculus [25], ladder diagrams [26], MSC [17], UML class diagrams,
signal interpreted Petri nets and SFC diagrams [27].
The approach which has been presented in this paper appears to be quite similar
to the approach of Rekers and Schurr [5] which has already been outlined
in Section 1. Both approaches make use of two hypergraphs resp. graphs. The
spatial relationship graph (SRG) in Rekers and Schurr's approach is quite
similar to the hypergraph model of DiaGen. But their abstract syntax graph
(ASG), which represents the abstract meaning of the diagram, has been introduced
for a different reason than the reduced hypergraph model of DiaGen:
hypergraph models (and also SRGs) are generally so complicated
that there is no hypergraph parser which can analyze a hypergraph model.
Therefore, DiaGen reduces the hypergraph model and parses the much simpler
reduced hypergraph model instead of the hypergraph model. As we have
demonstrated, parsing of the reduced hypergraph model can be performed
efficiently [10]. However, in Rekers and Schurr's approach, SRG and ASG are
always strongly coupled since they use triple graph grammars for defining the
syntax of the SRG and the ASG with one formalism; the ASG has not been
introduced for reducing complexity. Instead, a graph grammar parser has to
analyze the SRG directly; the ASG (i.e., the abstract meaning of the diagram)
is not parsed, it is created as a "side-effect" of the parsing of the SRG during
free-hand editing. The requirement for a graph parser for the SRG imposes a
strong restriction on this approach.
The concepts of this paper have been implemented with constraint-based automatic
layout based on the constraint solver QOCA by Chok and Marriott [20].
Their Penguins system also allows generating free-hand editors; however,
they do not generate an internal model, but use constraint multiset grammars
(CMGs) [3]. The hypergraph grammar approach of DiaGen appears to be
better suited to the problem since they report a performance that is about two
orders of magnitude worse than the performance of DiaGen editors on comparable
computers. Furthermore, their system cannot support syntax-directed
editing since they do not use an intermediate internal model.
As the examples of syntax-directed editing operations suggest, it appears to
be unsatisfactory to some extent to specify syntax-directed editing operations
on the less abstract hypergraph model instead of the reduced one which appears
to be better suited for syntax-directed editing (cf. Rekers and Schurr's
approach [5]). However, since the mapping from the hypergraph model to the
reduced one is non-injective, the approach which has been presented in this
paper does not leave much choice if expressiveness should not be sacrificed.
Nevertheless, future work will investigate where specifying syntax-directed editing
operations on the more abstract reduced hypergraph model is sufficient.
--R
GenGEd: A generic graphical
Graph grammars and diagram editing
Automatic construction of user interfaces from constraint multiset grammars
VisPro: A visual language generation toolset
A graph based framework for the implementation of visual environments
DiaGen web site http://www2.
Algebraic approaches to graph transformation - part I: Basic concepts and double pushout approach
Computing by graph rewriting
Specifying and generating diagram
Graph based modeling and implementation with EER/GRAL
Grammars and Languages
Hyperedge replacement graph grammars
Flowchart techniques for structured programming
Pascal User Manual and Report
Application of graph transformation to visual languages
Diagram editing with hypergraph parser support
Programmed graph replacement systems
An experimental comparison of force-directed and randomized graph drawing algorithms
Specification of diagram
Automatically generating environments for dynamic diagram languages
Creating semantic representations of diagrams
International Standard 61131 A: Programmable Logic Controllers
Handbook of Graph Grammars and Computing by Graph Transformation
--TR
Compilers: principles, techniques, and tools
Handbook of graph grammars and computing by graph transformation
Hyperedge replacement graph grammars
Algebraic approaches to graph transformation. Part I
Algebraic approaches to graph transformation. Part II
Programmed graph replacement systems
Graph transformation for specification and programming
Application of graph transformation to visual languages
Hyperedge Replacement
Pascal-User Manual and Report
Creating Semantic Representations of Diagrams
Graph Based Modeling and Implementation with EER / GRAL
An Experimental Comparison of Force-Directed and Randomized Graph Drawing Algorithms
Graph grammars and diagram editing
Automatic construction of user interfaces from constraint multiset grammars
A graph based framework for the implementation of visual environments
Diagram Editing with Hypergraph Parser Support
VisPro
Automatically Generating Environments for Dynamic Diagram Languages
GenGEd - A Generic Graphical Editor for Visual Languages based on Algebraic Graph Grammars
Constraint-Based Diagram Beautification
Flowchart techniques for structured programming
--CTR
Ewa Grabska , Andrzej achwa , Grazyna Slusarczyk , Katarzyna Grzesiak-Kopec , Jacek Lembas, Hierarchical layout hypergraph operations and diagrammatic reasoning, Machine Graphics & Vision International Journal, v.16 n.1, p.23-38, January 2007
O. G. Sharov , A. N. Afanas'Ev, Syntax-Directed Implementation of Visual Languages Based on Automaton Graphical Grammars, Programming and Computing Software, v.31 n.6, p.332-339, November 2005
Mark Minas, Syntax analysis for diagram editors: a constraint satisfaction problem, Proceedings of the working conference on Advanced visual interfaces, May 23-26, 2006, Venezia, Italy
Hans Vangheluwe , Juan de Lara, Foundations of multi-paradigm modeling and simulation: computer automated multi-paradigm modelling: meta-modelling and graph transformation, Proceedings of the 35th conference on Winter simulation: driving innovation, December 07-10, 2003, New Orleans, Louisiana
Gennaro Costagliola , Vincenzo Deufemia , Giuseppe Polese, Visual language implementation through standard compiler-compiler techniques, Journal of Visual Languages and Computing, v.18 n.2, p.165-226, April, 2007
Frank Drewes , Berthold Hoffmann , Mark Minas, Context-exploiting shapes for diagram transformation, Machine Graphics & Vision International Journal, v.12 n.1, p.117-132, January
Frank Drewes , Berthold Hoffmann , Detlef Plump, Hierarchical graph transformation, Journal of Computer and System Sciences, v.64 n.2, p.249-283, March 2002
Berthold Hoffmann, Abstraction and control for shapely nested graph transformation, Fundamenta Informaticae, v.58 n.1, p.39-65, November
Berthold Hoffmann, Abstraction and Control for Shapely Nested Graph Transformation, Fundamenta Informaticae, v.58 n.1, p.39-65, January
Jun Kong , Kang Zhang , Xiaoqin Zeng, Spatial graph grammars for graphical user interfaces, ACM Transactions on Computer-Human Interaction (TOCHI), v.13 n.2, p.268-307, June 2006
Gennaro Costagliola , Vincenzo Deufemia , Giuseppe Polese, A framework for modeling and implementing visual notations with applications to software engineering, ACM Transactions on Software Engineering and Methodology (TOSEM), v.13 n.4, p.431-487, October 2004 | diagram editors;hypergraph grammar;hypergraph transformation;rapid prototyping |
607196 | A Hybrid Index Technique for Power Efficient Data Broadcast. | The intention of power conservative indexing techniques for wireless data broadcast is to reduce mobile client tune-in time while maintaining an acceptable data access time. In this paper, we investigate indexing techniques based on index trees and signatures for data disseminated on a broadcast channel. Moreover, a hybrid indexing method combining strengths of the signature and the index tree techniques is proposed. Different from previous studies, our research takes into consideration of two important data organization factors, namely, clustering and scheduling. Cost models for the three indexing methods are derived for various data organization accommodating these two factors. Based on our analytical comparisons, the signature and the hybrid indexing techniques are the best choices for power conservative indexing of various data organization on wireless broadcast channels. | Introduction
Due to resource limitations in a mobile environment, it is important to efficiently utilize wireless bandwidth and
battery power in mobile applications. Wireless broadcasting is an attractive approach for data dissemination in a
mobile environment since it tackles both bandwidth efficiency and power conservation problems [BI94, IVB96,
SRB97, HLL98c]. On one hand, data disseminated through broadcast channels allows simultaneous access by
an arbitrary number of mobile users and thus allows efficient usage of scarce bandwidth. On the other hand,
the mobile computers consume less battery power when passively monitoring broadcast channels than actively
interacting with the server by point-to-point communication.
Three criteria are used in this paper to evaluate the data access efficiency of broadcast channels:
• Access Time: the average time elapsed from the moment a client 1 issues a query to the moment when all the
requested data frames are received by the client,
The author is now with Department of Computer Science, University of Waterloo, Waterloo, Ontario, Canada
1 In this paper, we use 'client' or 'mobile client' to refer to a user with a mobile computer.
• Tune-in Time: the period of time spent by a mobile computer staying active in order to obtain the requested
data.
• Indexing Efficiency: the tune-in time saved per unit of access time overhead for indexing 2 .
While access time measures the efficiency of access methods and data organization for broadcast channels, tune-in
time is frequently used to estimate the power consumption by a mobile computer. Indexing efficiency, which
correlates the access time and tune-in time, is used to evaluate the efficiency of indexing techniques in terms
of minimizing the tune-in time while maintaining an acceptable access time overhead. In other words, a power
conservative indexing technique has to balance out the index overhead (in terms of access time increased) and the
time saved in order to maximize the indexing efficiency.
To facilitate efficient data delivery on broadcast channels, scheduling and clustering are frequently used to
select and organize data for broadcast. Broadcast scheduling policies determine the content and organization of
data broadcasting based on aggregate user data access patterns. Broadcast disk [AAFZ95] is one of the well-known
broadcast scheduling methods. In contrast, flat broadcast refers to the broadcast scheduling where each
data frame is broadcast once in every cycle [HLL98c]. When all frames with the same attribute value are broadcast
consecutively, the data broadcast is called clustered on that attribute 3 . In contrast, the data broadcast is non-clustered
on an attribute when all frames with the same value of that attribute are not broadcast consecutively.
Clustering allows continuous reception of data with a specific attribute value.
Several indexing techniques for broadcast channels have been discussed in the literature[IVB96, LL96b, CYW97,
SV96]. The basic idea behind these techniques is that, by including information about the arriving schedule of
data frames in the broadcast channels, mobile computers are able to predict the arrival time of the requested data
frames and thus, selective tuning in can be realized. Signature and index tree techniques [IVB96, LL96b] represent
two different classes of indexing methods for broadcast channels. According to [IVB96, LL96b], the index
tree method is based on clustered data organization, while the signature methods don't presume a clustered data
organization 4 . Moreover, indexing techniques used in these papers only took flat broadcast into consideration.
In [IVB96] index frames were treated in the same way as a data frame, although a better approach is to separate
index frames from data frames. [LL96b] did not consider clustered data. Although the authors demonstrated
that the signature size played an important role in terms of data filtering efficiency and access latency, the optimal
signature size was not given.
In this paper, we extend the existing works further. For the index tree method, since the size of an index frame
is normally not equal to that of a data frame, to accurately estimate the access time and the tune-in time, we
separate the index frames from data frames. For the signature method, we derive formulae to estimate the optimal
signature size. In addition, scheduling and clustering are considered together with the index methods. The tune-in
time and the access time cost formulae are developed to cover the cases: i) flat scheduling and clustered; ii) flat
scheduling and non-clustered; and iii) broadcast disks scheduling and clustered. To our knowledge, there is no
systematic study and comparison of power conservative indexing techniques in the literature that takes both
clustering and scheduling issues into account. In addition, we propose a new indexing method, called hybrid index,
which combines the strengths of both the index tree and the signature methods. These three index methods are evaluated based
on our criteria, namely, access time, tune-in time and indexing efficiency. Our results show that clustering and
scheduling have major impacts on data organization of wireless data broadcast. We also conclude that the hybrid
and signature methods give superior performance to the index tree method for various broadcast data organization
accommodating the clustering and scheduling factors.
Every indexing technique usually introduces non-zero access time overhead.
3 In this paper, we only consider the case of single attribute indexing and clustering. Issues involving multiple attribute indexing and
clustering are addressed in [HLL98b].
4 Even so, the integrated signature and multi-level signature schemes can benefit from a clustered data organization for broadcast
channels.
The rest of the paper is organized as follows. Section 2 gives an informal introduction of the broadcast channels,
indexing techniques and system parameters used in performance evaluation and comparisons. In Section 3, indexing
techniques based on index tree and signature methods are re-examined by taking the clustering and scheduling
factors into consideration. In Section 4, a hybrid index scheme and corresponding cost models for access time and
tune-in time are developed. Section 5 evaluates the indexing techniques in terms of tune-in time, access time, and
indexing efficiency. Section 6 is a review of related work. Finally, Section 7 concludes the paper.
2 The Data Organization for Wireless Broadcast
In this section, we briefly introduce the concept of broadcast channels and some of the terminologies used. We
assume that a base station is serving the role of an information server which maintains various kinds of multimedia
data, including texts, images, audio/video and other system data. The server periodically broadcasts, on a specific
channel, data of popular demands as a series of data frames to a large client population. These data frames vary in
size and each frame consists of packets which are the physical units of broadcast. The header of a frame contains
signals for synchronization and meta-information such as the type and the length of the frame. Logically the data
frames are classified into two types: record frames and index frames, where record frames contain data items and
index frames contain indexing information such as index tree nodes or signatures for a set of data items. Those
two types of frames are interleaved together in a cycle. The clients retrieve the frames of their interest off the air
by monitoring the broadcast channel. Since a set of data frames is periodically broadcast, a complete broadcast of
the set of data frames is called a broadcast cycle. The organization of data frames in a broadcast cycle is called a
broadcast schedule.
Data organization on the broadcast channels has a great impact on data access efficiency and power consumption.
Data frames can be clustered based on attributes. Based on the results in [IVB96], index tree techniques result
in more efficient access for clustered information than for non-clustered information. In this paper, we take both clustered and
non-clustered data organization into consideration. Generally speaking, a non-clustered data organization can be
divided into a number of segments, called meta segments [IVB96], with non-decreasing or non-increasing values
of a specific attribute. These meta segments can be considered as clustered and thus the indexing techniques for
clustered data can be applied to them. To facilitate our study, we use the scattering factor M , the number of meta
segments in the data organization, to model the non-clusterness of a data organization 5 .
Since access latency is directly proportional to the volume of data being broadcast, the volume of data in a
broadcast cycle should be limited in such a way that only the most frequently accessed data frames are broadcast
on the channel while the remaining data frames can be requested on demand through point-to-point connections
[HLL98c, AFZ97, SRB97]. The server must determine the set of data frames to be broadcast by collecting statistics
about data access patterns.
Due to some timely events, the client access pattern sometimes shows skewed distributions, which may be
captured by Zipf or Gaussian distribution functions. In this case, scheduling data frames in broadcast disks (refer
to Appendix and [AAFZ95] for detail) can achieve a better performance in terms of the access time and is a very
important technique. As indicated in [AAFZ95], in addition to performance benefits, constructing a broadcast
schedule on multi-disks can give clients the estimated time when a particular data frame is to appear on the
broadcast channel. This is particularly important for selected tune-in, data prefetching [AFZ96b], hybrid pull
and push technique [SRB97, AFZ97], and updates [AFZ96a]. Therefore, the application of the index methods to
broadcast disks is studied in this paper.
The broadcast disks method has better access time when the data frames with the same attribute values are
clustered in one of the minor cycles. By receiving the cluster of data frames together, the mobile computer can
answer the query without continuing to monitor the rest of the broadcast cycle. This can be achieved by placing
all of the data frames with the same indexed attribute value as a cluster on the same broadcast disk. The whole
5 To simplify our discussion, we neglect the variance of the meta segment size.
cluster of data frames is brought to the broadcast channel as a unit. Depending on the speed of the broadcast disk where
this cluster is located, these data frames may appear several times in minor cycles. Thus, the resulting broadcast
cycle is different from the completely clustered broadcast cycle. For broadcast scheduling adopting broadcast
disks without using clustering, we simply consider the resulting broadcast cycle as non-clustered. In that case,
broadcast disks lose their advantages over flat broadcast. Thus, when we consider index techniques for broadcast
disks in the later sections of this paper, we only consider the clustered case. We assume only one broadcast channel
since a channel with large bandwidth is logically the same as multiple channels with combined bandwidth of the
same capacity. Moreover, it incurs smaller administration overheads than multiple channels. By the same
token, we assume that index information is disseminated on the same broadcast channel. Finally, we assume that
updates are only reflected between cycles. In other words, a broadcast schedule is fixed before a cycle begins.
D  Number of information frames (excluding index frames) in the broadcast cycle
F  Number of distinct information frames in the broadcast cycle
P  Average number of packets in an information frame
S  Selectivity of query: average number of frames containing the same attribute value
M  Scattering factor of an attribute, which is the number of meta segments of the attribute
Table 1. System Parameter Setting
Table
1 gives the parameters which describe the characteristics of a broadcast cycle. The cost models for the
various index methods discussed later in this paper are derived based on these parameters. Although the sizes of
data frames may vary, we assume frames to be a multiple of the packet size. Both access time and tune-in time
are measured in terms of number of packets. Before we develop the cost models for various index methods in
broadcast disks, we derived a theorem 6 for the optimal broadcast scheduling based on multiple broadcast disks
(please refer to the Appendix for the proof). The broadcast schedules derived from the theorem are used in our analysis
later.
Theorem 1 Given the number of data frames to be broadcast, D, the number of disks, N, the size of disk i, D_i,
and the broadcast frequency 7 f_i of disk i, broadcasting a data frame d on disk i, ∀i, with a fixed inter-arrival time
can achieve the optimal access time. In that case, to retrieve data frame d from disk i, a client needs to scan, on
average, D/(2 f_i) frames.
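The construction of such an equal-spacing schedule follows the broadcast disk organization of [AAFZ95]. The sketch below is an illustrative Java rendering of that interleaving (class and method names are ours, not taken from the cited work): disk i is split into LCM(frequencies)/f_i chunks and one chunk per disk is emitted in every minor cycle, so each frame of disk i recurs with an approximately fixed inter-arrival time.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of building a broadcast-disk schedule with equally spaced repetitions per disk.
class BroadcastDiskScheduler {
    // disks.get(i) holds the frames of disk i; freq[i] is its relative broadcast frequency
    static List<String> schedule(List<List<String>> disks, int[] freq) {
        int minorCycles = lcm(freq);                       // minor cycles per major (broadcast) cycle
        List<String> broadcast = new ArrayList<>();
        for (int cycle = 0; cycle < minorCycles; cycle++) {
            for (int i = 0; i < disks.size(); i++) {
                int chunks = minorCycles / freq[i];        // disk i is split into this many chunks
                int chunkSize = (int) Math.ceil(disks.get(i).size() / (double) chunks);
                int start = (cycle % chunks) * chunkSize;  // one chunk of disk i per minor cycle
                for (int j = start; j < Math.min(start + chunkSize, disks.get(i).size()); j++) {
                    broadcast.add(disks.get(i).get(j));
                }
            }
        }
        return broadcast;
    }

    static int lcm(int[] xs) {
        int l = 1;
        for (int x : xs) l = l / gcd(l, x) * x;
        return l;
    }
    static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }
}
```

Each frame of disk i then appears freq[i] times per major cycle, once in every (minorCycles/freq[i])-th minor cycle.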
3 Basic Indexing Techniques
In this section we discuss the basic ideas behind the index tree and the signature methods. We describe the
distributed indexing and integrated signature techniques because they are the best methods of their class for single
attribute indexing. The analytical cost models for the access time and the tune-in time for clustered and non-clustered
data broadcast are presented. Moreover, the application of these index techniques to broadcast disks is
also considered. Due to space limitations, we don't give all the derivations of these cost models. Interested readers
can refer to [IVB96] and [LL96b] for more details.
3.1 The Index Tree Technique
In the index tree method, an index tree is built on the values of the indexed attribute, and pointers to the arrival
times of the corresponding data frames are stored in the leaves of the index tree. The access method for retrieving data frames with an index tree technique involves the
following steps:
• Initial probe: The client tunes into the broadcast channel and determines when the next index tree is broadcast.
• Search: The client follows a list of pointers to find out the arrival time of the desired data frames. The
number of pointers retrieved is equal to the height of the index tree.
• Retrieve: The client tunes into the channel and downloads all the required data frames (a client-side sketch of
these steps is given below).
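The following Java sketch makes the probe, search, and retrieve steps concrete from the client's point of view. The Channel and Frame interfaces are hypothetical abstractions of the broadcast medium and of the frame headers; they are not an API defined in this paper or in [IVB96].

```java
// Client-side sketch of selective tuning with an index tree on the broadcast channel.
class IndexTreeClient {
    interface Channel {
        Frame current();              // frame currently on the air (the receiver is active)
        Frame dozeUntil(int offset);  // switch off, wake up for the frame 'offset' frames ahead
    }
    interface Frame {
        boolean isIndexNode();
        int offsetToNextIndex();          // every frame carries an offset to the next index tree
        int childOffsetFor(Object key);   // index node: offset of the relevant child or data block
        boolean matches(Object key);      // data frame: does it carry the requested attribute value?
    }

    static java.util.List<Frame> query(Channel ch, Object key, int expectedFrames) {
        // 1. initial probe: find the next index tree
        Frame f = ch.current();
        f = ch.dozeUntil(f.offsetToNextIndex());

        // 2. search: follow pointers down the (distributed) index tree
        while (f.isIndexNode()) {
            f = ch.dozeUntil(f.childOffsetFor(key));
        }

        // 3. retrieve: download the requested data frames (consecutive in the clustered case)
        java.util.List<Frame> result = new java.util.ArrayList<>();
        while (result.size() < expectedFrames) {
            if (f.matches(key)) result.add(f);
            f = ch.dozeUntil(1);   // next frame on the channel
        }
        return result;
    }
}
```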
Figure
1 depicts an example of an index tree for a broadcast cycle which consists of 81 data frames [IVB96].
The lowest level consists of square boxes which represent a collection of three data frames. The index tree is
shown above the data frames. Each index node has three pointers 8 .
Figure
1. A Full Index Tree
h Height of the whole index tree
t Number of upper levels in the index tree that are replicated
T Number of packets in an index tree node
n Number of search keys plus pointers that a node can hold
Table
2. Parameter Setting for Index Tree Schemes
Table
2 gives the parameter setting for the index tree cost model. To reduce access time while maintaining a
similar tune-in time for the client, the index tree can be replicated and interleaved with the information. Distributed
indexing is actually one index replication and interleaving method. The index tree is broadcast every 1
d of the file
during a broadcast cycle. However instead of interleaving the entire index tree d times, only the part of the index
tree which indexes the data block immediately following it is broadcast. The whole index tree is divided into two
parts: replicated and non-replicated parts. The replicated part constitutes the upper t levels of the index tree and
each node in that part is replicated a number of times equal to the number of children it has, while the non-replicated
part consists of the lower (h − t) levels, and each node in this part appears only once in a given broadcast cycle.
Since the lower levels of an index tree take up much more space than the upper part (i.e., the replicated part of the
index tree), the index overheads can be greatly reduced if the lower levels of the index tree are not replicated. In
this way, access time can be improved significantly without much deterioration in tune-in time.
To support distributed indexing, every frame has an offset to the root of the next index tree. The first node of
each distributed index tree contains a tuple, with the first field as the primary key of the record that was broadcast
last and the second field as the offset to the beginning of the next broadcast cycle. This is to guide the clients that
8 For simplicity, the three pointers of each index node in the lower most index tree level is represented by just one arrow.
have missed the required record in the current cycle to tune to the next broadcast cycle. There is a control index
at the beginning of every replicated index to direct the client to a proper branch in the index tree. This additional
index information for navigation together with the sparse index tree provides the same function as the complete
index tree.
We assume that each node of the index tree takes up T packets and X[h] and X[t], respectively, are the total
number of nodes of the full index tree and the replicated part of the index tree. The number of nodes in the i-th
level of the index tree is denoted as L[i].
In the distributed index tree, each node, p, in the replicated part is repeated as many times as the number of
children that p has. Thus, the root is broadcast L[2] times and nodes at level 2 are broadcast L[3] times etc. For
the index tree in Figure 1, since each node has three children, the root and the nodes at level 2, i.e., a1, a2, and
a3, are broadcast 3 and 9 times, respectively. Therefore, in a broadcast cycle, the total number of nodes in the
replicated part is X[t + 1] − 1. Additionally, the number of index nodes that are located below
the t-th level of the index (i.e., the non-replicated part) is X[h] − X[t]. Hence, the total number of index nodes in a
cycle is (X[t + 1] − 1) + (X[h] − X[t]), which equals X[h] + X[t + 1] − X[t] − 1. As a result, the index overhead is
T · (X[h] + X[t + 1] − X[t] − 1) packets.
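As a concrete check of these counts, take the tree of Figure 1 with F = 81 and n = 3, so h = 4 and the levels contain 1, 3, 9, and 27 nodes. With t = 2 replicated levels, the replicated part contributes 3 + 9 = 12 node broadcasts per cycle (the root three times and each of the three level-2 nodes three times) and the non-replicated part contributes 9 + 27 = 36 nodes, giving 48 index nodes in total, which matches X[h] + X[t + 1] − X[t] − 1 = 40 + 13 − 4 − 1 = 48, i.e., an overhead of 48 T packets.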
In the above discussion, we assumed that the file is clustered. For a non-clustered broadcast cycle, we can still
apply index tree techniques to each meta segment. Instead of using one index tree for the entire broadcast cycle, an
index tree is created for each meta segment. However, each index tree indexes all the values of the non-clustered
attribute rather than indexing just the attribute values that appear in the current meta segment. For attribute values
that do not appear in the current meta segment, a pointer in the index tree points to the next occurrence of the data
frame with the desired attribute values. Thus, there are M distinct index trees for a broadcast cycle consisting of M
meta segments. The total overhead for putting index trees in a broadcast cycle is M · T · (X[h] + X[t + 1] − X[t] − 1)
packets.
To simplify the cost models, we average the index tree overhead to each data frame so that the size of a frame
is considered to consist of a data part and an index overhead part. Of course, the actual index tree overhead for
each data frame is different, but from the statistics point of view we can assume that all data frames have the same
average index tree overhead. The average overhead for each data frame is thus the total index overhead divided by D.
The replicated index tree part is broadcast every 1/L[t + 1] fraction of each meta segment. Therefore, a broadcast
cycle is divided into M · L[t + 1] data blocks with replicated index nodes at the beginning of each block. Let P
be the average number of packets for a data frame; the length of each block is then the total cycle length (data plus index overhead) divided by M · L[t + 1].
Flat Broadcast: Let us derive the access time and the tune-in time estimates for flat broadcast first. Since each
frame is broadcast once in a cycle, the number of data frames in the broadcast, D, is equal to the number of distinct
frames F . The initial probe period is the time to reach the index frame at the beginning of the next data block and
can be estimated as:
For a clustered broadcast, the scattering factor M = 1, and the expected number of data frames before the
arrival of the desired frames is C. Hence, the access time is:
initial probe time +waiting time before first desired frame arrives
waiting time for retrieving all desired frames in the broadcast
For a non-clustered broadcast cycle (M > 1), the access time is:
waiting time for retrieving all desired frames in the broadcast
The tune-in time for both clustered and non-clustered broadcast cycle depends on the initial probe, the scanning of
index tree, the extra scanning of index tree in subsequent meta segments, and the retrieval of S data frames. Thus,
the tune-in time of the index technique is upper bounded by:
For a fully balanced index tree, the height of the tree, the number of nodes at the i-th level, and the number of
nodes in the upper t levels of the index tree are, respectively, h = ⌈log_n F⌉, L[i] = n^(i−1), and
X[t] = (n^t − 1)/(n − 1).
According to [IVB96], the optimal height of the replicated part of the index tree for a broadcast, denoted as t̂, can
be estimated as:
(log
F
while for a non-clustered broadcast cycle, the optimal number of replicated levels t̂ within a meta segment is:
(log
Broadcast Disks: For broadcast disks, as discussed in Section 2, data frames with the same attribute values are
clustered in one minor cycle. In this case, we can treat each minor cycle of the broadcast disks as a meta segment 9 .
An index tree can be built for each minor cycle. Similar to flat broadcast, the initial probe period, the time to reach
the index frame at the beginning of the next data block, can be estimated as:
where the number of data frames in the broadcast is D = Σ_{i=1..N} f_i · D_i, and the scattering factor, M, is equal to the
number of minor cycles in the broadcast (i.e., the LCM of the relative frequency of the disks). Hence, the access
time for a clustered broadcast is:
initial probe time +waiting time before first desired frame arrives
waiting time for retrieving all desired frames in the broadcast
Note that in the above equation, based on Theorem 1, the expected number of data frames before the arrival of the
desired frames is D/(2 f_i),
and the optimal number of replicated levels within a minor cycle can be derived from
Equation 2.
9 Note that it is different from the meta segments for a non-clustered flat broadcast, where frames with the same attribute value may be
scattered in several meta segments.
Since all the desired data frames are clustered in one minor cycle, the tune-in time is the same as in flat broadcast
for a clustered broadcast cycle, i.e.,
3.2 The Signature Technique
Signature methods have been widely used for information retrieval. A signature of a data frame is basically
a bit vector generated by first hashing the values in the data frame into bit strings and then superimposing them
together. The signature technique interleaves signatures with their associated data frames in data broadcasting
[LL96b].
To answer a query, a query signature is generated in a similar way as a data frame signature based on the query
specified by the user. The client simply retrieves information signatures from the broadcast channel and then
matches the signatures with the query signature by performing a bitwise AND operation. When the result is not
the same as the query signature, the corresponding data frame can be ignored. Otherwise, there are two possible
cases. First for every bit set in the query signature, the corresponding bit in the data frame signature is also set.
This case is called true match. Second the data frame in fact does not match the search criteria. This case is called
false drop. Obviously the data frames still need to be checked against the query to distinguish a true match from a
false drop.
The primary issue with different signature methods is the size and the number of levels of the signatures. The
access method for a signature scheme involves the following steps:
• Initial probe: The client tunes into the broadcast channel for the first received signature.
• Filtering: The client accesses the successive signatures and data frames to find the required data. On
average, it takes half of a broadcast cycle for the client to get the first frame with the required attribute.
• Retrieve: The client tunes in to get the successive desired data frames from the channel.
k number of information frames indexed by an integrated signature
p number of bits in a packet
R the size (number of packets) of an integrated signature
Table
3. Parameter Setting for Signature Scheme
The number and the size of the signatures and the average false drop probability of the signatures 10 affect
tune-in time and access time. The average false drop probability may be controlled by the size of the signatures.
The initial probe time is related to the number of signatures interleaved with the data frames. Table 3 defines the
parameters for signature cost models. Estimation of the average false drop probability is given in the following
Lemma [LL96b]:
Lemma 1 Given the size of a signature, R, and the number of bit strings superimposed
into the signature, s, the average false drop probability for the signature is,
Each data frame may have different false drop probabilities. To simplify the cost model, we use average false drop probability to
estimate the access time and the tune-in time when a large number of queries are sampled (i.e., many data frames are retrieved).
In [LL96b], three signature algorithms, namely simple signature, integrated signature, and multi-level signa-
ture, were proposed and their cost models for access time and tune-in time were given. For simple signatures,
the signature frame is broadcast before the corresponding data frame. Therefore, the number of signatures is
equal to the number of data frames in a cycle. An integrated signature is constructed for a group of consecutive
frames, called a frame group. The multi-level signature is a combination of the simple signature and the
integrated signature methods, in which the upper level signatures are integrated signatures and the lowest level
signatures are simple signatures. Since the three signature algorithms have been extensively compared in the literature
[LL96b, HLL98a], we don't repeat the comparisons here. In the context of this study, simple signature is not
very efficient since it will be generated from only one attribute. Thus, we select the integrated signature method to
compare with the index tree method and the new index methods proposed later in this paper.
Figure
2. An Example of the Integrated Signature Technique
Figure
2 illustrates an integrated signature scheme. An integrated signature indexes all of the data frames between
itself and the next integrated signature. The integrated signature method is general enough to accommodate
both clustered and non-clustered data broadcast. For clustered data broadcast, a lot of data frames can be indexed
by one integrated signature. According to Lemma 1, the smaller the number of bit strings s superimposed into
an integrated signature, the lower the false drop probability. The integrated signature generated for a clustered
broadcast cycle has the effect of reducing the number of bit strings superimposed. To maintain a similar false drop
probability for a non-clustered broadcast cycle, the number of data frames indexed by an integrated signature may
be reduced. Determining the number of data frames for signature generation requires further study.
To simplify our discussion, we assume that frames with the same attribute value for an attribute a are evenly distributed in each meta segment. Consequently, the number of frames with the same attribute value in each meta segment is ⌈S/M⌉, where the attribute a has a selectivity S and a scattering factor M. Let k be the number of data frames indexed by an integrated signature. The number of distinct attribute values used for signature generation, s, can be estimated as s = ⌈k/⌈S/M⌉⌉. For frames in a meta segment, the average number of qualified frames corresponding to a matched integrated signature, called the locality of true matches l (1 ≤ l ≤ k), can be estimated as l = k/⌈k/⌈S/M⌉⌉; for frames which are randomly distributed over the file, l is equal to 1.
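The two estimates above can be evaluated directly; the short Python sketch below uses the paper's symbols (k frames per integrated signature, selectivity S, scattering factor M), with the sample numbers chosen only for illustration.

    from math import ceil

    def signature_group_stats(k, S, M):
        frames_per_value = ceil(S / M)      # frames sharing one attribute value in a meta segment
        s = ceil(k / frames_per_value)      # distinct values superimposed into one integrated signature
        l = k / s                           # average qualified frames per matched signature
        return s, l

    print(signature_group_stats(k=16, S=8, M=2))    # (4, 4.0)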
Flat Broadcast: Next, we derive the access time and the tune-in time for clustered and non-clustered broadcast cycles. Let SIG be the average signature overhead for each data frame; since an integrated signature of R packets indexes a group of k frames, SIG = R/k. Once again, we assume that the expected number of data frames before the arrival of the desired frames is C.
For clustered data broadcast, the access time is the sum of the initial probe time, the waiting time before the first desired frame arrives, and the waiting time for retrieving all the desired frames in the broadcast (Equation (3)); the tune-in time is the sum of the true match frames in the broadcast, the integrated signatures before the first desired frame, and the false drop frames before the first desired frame (Equation (4)). For a non-clustered broadcast cycle, the access time is the sum of the waiting time for the first desired frame to arrive and the waiting time for retrieving all the desired frames in the broadcast, while the tune-in time is the sum of the true match frames in the broadcast, the integrated signatures for the retrieval of all the desired frames, and the false drop frames for the retrieval of all the desired frames (Equation (5)). According to Lemma 1, the false drop probability P_f can be written as a function of the signature size R. We differentiate Equation (4) or (5) with respect to R and let ∂TUNE/∂R equal zero; the optimal signature size (in number of packets) can then be computed.
Broadcast Disks: For broadcast disks, the access time and the tune-in time can also be obtained by Equations (3) and (4), respectively. Compared with flat broadcast, the difference is in the parameter C: for flat broadcast C is half the broadcast length (D/2), while for broadcast disks C depends on the relative frequency f_i of the disk holding the frame and is given by Theorem 1.
4 The Hybrid Index Approach
Both the signature and the index tree techniques have advantages and disadvantages in one aspect or the other.
For example, the index tree method is good for random data access, while the signature method is good for
sequentially structured media such as broadcast channels. The index tree technique is very efficient for a clustered
broadcast cycle, but the signature method is not affected much by clustering factor. While the signature method
is particularly good for multi-attribute retrieval, the index tree provides a more accurate and complete global view
of the data frames based on its indexed value. Since the clients can quickly search in the index tree to find out the
arrival time of the desired data, the tune-in time is normally very short. Since a signature does not contain global
information about the data frames, it can only help the clients to make a quick decision on whether the current
frame (or a group of frames) is relevant to the query or not. The filtering efficiency heavily depends on the false
drop probability of the signature. As a result, the tune-in time is normally high and is proportional to the length of
the broadcast cycle.
Figure 3. The Hybrid of Index Tree and Signature
In this section, we develop a new index method, called the hybrid index, which builds on top of the signatures a sparse index tree to provide a global view of the data frames and their corresponding signatures. A key-search pointer node in the sparse index tree points to a data block of consecutive frames and their corresponding signatures (refer to Figure 3).
The index tree is called a sparse tree because only the upper t levels of the whole index tree are constructed. Obviously, the sparse index tree overhead depends on t: the larger t is, the more precise the location information the sparse tree provides, and the higher the access time overhead. One extreme case is t = h, the number of levels of the whole index tree, in which the hybrid index evolves into the index tree method. On the other hand, if t equals zero, the hybrid index method becomes the signature method.
To retrieve information, the client can search the sparse index tree to obtain approximate location information about the desired data frames. Since the size of the upper t levels of an index tree is usually small, the overhead for this additional index is very small.
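Since the comparisons later assume a balanced tree in which every node has the same number of children, the size of the upper t levels is simply a geometric sum; the fanout value in the sketch below is an assumed example, not a parameter from the text.

    def sparse_tree_nodes(fanout, t):
        # number of index nodes in the upper t levels of a balanced tree: 1 + fanout + ... + fanout**(t-1)
        return sum(fanout ** i for i in range(t))

    print(sparse_tree_nodes(64, 2))    # 65 nodes, a tiny fraction of the full index tree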
Since the hybrid index technique is built on top of the signature method, it retains all of the advantages that a signature method has. However, the global information provided by the sparse index tree improves the tune-in time considerably. The general access method for retrieving data with this technique, sketched in code after the list below, now becomes:
• Initial probe: The client tunes into the broadcast channel and determines when the next index tree arrives.
• Index search: Upon receipt of the index tree, the client accesses a list of pointers in the index tree to find out when to tune into the broadcast channel to get to the nearest location where the required data frames can be found.
• Filtering: At the nearest location, successive signature filtering is carried out until the desired data frames are found.
• Retrieval: The client tunes into the channel and downloads all the required data frames.
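The sketch below walks through those four steps from the client's point of view; the broadcast is modelled as a list of typed items, and the lookup interface of the sparse tree and the two matching callables are assumptions made only for illustration.

    def hybrid_retrieve(broadcast, key, sig_matches, frame_matches):
        # broadcast: sequence of ('index', tree), ('sig', s) and ('frame', f) items in arrival order
        results, i = [], 0
        while broadcast[i][0] != 'index':          # initial probe: doze until an index item arrives
            i += 1
        i = broadcast[i][1].lookup(key)            # index search: jump near the relevant data block
        skipping = False
        while i < len(broadcast):
            kind, item = broadcast[i]
            if kind == 'sig':
                skipping = not sig_matches(item, key)   # filtering: skip groups whose signature fails
            elif kind == 'frame' and not skipping:
                if frame_matches(item, key):            # retrieval: keep true matches, drop false drops
                    results.append(item)
            i += 1
        return results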
4.1 Cost Model Analysis
Based on the above definition of the hybrid indexing method, we derive an estimates of the access time and the
tune-in time. The sparse index tree is the same as the replicated part of the index tree method. The average waiting
time for retrieving one data frame from the broadcast cycle with M meta segments can be expressed as:
where TREE and SIG are the index overheads of the index tree parts and the signature parts of a frame. The
average number of data frames in one data block D[B] can be calculated in a similar way as in the index tree
method, which is D=(M 1]). Thus, the total index tree and signature overheads in a data block are
respectively. Hence, the average initial probe time for the index tree is half of the
data block:
Flat Broadcast: For the clustered broadcast cycle with flat broadcast scheduling, the expected access time for the hybrid indexing method follows from the overheads above, where the expected number of data frames before the arrival of the desired frames C is F/2. If the
broadcast cycle is non-clustered, then there is one sparse index tree for each meta segment. Index tree technique
is applied in each meta segment. Hence, the expected access time is:
For both clustered and non-clustered broadcast cycles with flat broadcast scheduling, the tune-in time primarily depends on the initial probe of the client to determine the next occurrence of the control index, the access time for the index tree part, which equals the number of levels t of the sparse index tree, the tune-in time for the data block B, the selectivity of a query S, and the successive access to M meta segments; an upper bound is given in Equation (8), where TUNE_B is defined as the tune-in time for filtering data block B with the signature technique. TUNE_B can be estimated from the signatures in half the length of the data block plus the (false drop) data frames in half the length of the data block B.
Broadcast Disks: For broadcast disks, the access time for the hybrid indexing method can be obtained from Equation (7), with C set to the value for broadcast disks given by Theorem 1. The tune-in time of broadcast disks is the same as that of flat broadcast for a clustered cycle (i.e., Equation (8)).
Note: according to Equation (8), the tune-in time is proportional to M . Hence, the hybrid method is efficient
only for the broadcast cycle with small M . Actually, the sparse index tree introduces overhead for the non-clustered
broadcast cycle with large M . In this case, retrieval based on signatures can result in better tune-in
time. However, the hybrid method supports multi-attribute indexing very well [HLL98b]. For an attribute with
small scattering factor, a sparse index tree can be built to reduce the tune-in time. For an attribute with high
scattering factor, there is no need to build the sparse index tree and the client simply filters out the requested
data frames sequentially and ignores the sparse index tree. We extend the hybrid index with control information,
which includes the size of the sparse index tree and the size of the data block. When a query is specified on a
non-clustered attribute, this control information is used to direct the client to the beginning of the next data block.
Starting there, the client matches the signatures one by one for each data frame in that data block. Hence, the
access time for non-clustered information is the same as Equation (7). In order to skip each of these index trees,
we assume that the client needs to retrieve an index node to get information such as the size of the sparse index
tree and the size of the data block. Therefore, the tune-in time adds this per-index-tree overhead to TUNE_Sig, where TUNE_Sig is defined to be the tune-in time for the corresponding signature scheme (i.e., the integrated signature in this paper).
5 Evaluation of Index Methods
In this section, we compare the access time, the tune-in time, and the indexing efficiency of the index tree,
the integrated signature, and the hybrid techniques. We also include the case where no index is used (denoted as
non-index) as a baseline for comparisons. Our comparisons are based on the cost models developed previously.
Orthogonal to the index method, frames can be broadcast based on broadcast disks or flat broadcast. Thus, there
are various combinations to be considered.
For flat broadcast, each data frame appears once in a given broadcast cycle. Therefore, the number of data frames in the broadcast D equals the number of distinct frames F. For a clustered broadcast cycle (i.e., M = 1), on average half of a broadcast cycle needs to be scanned before the desired frames arrive (i.e., C = D/2).
For broadcast disks (M=number of minor cycles), D is greater than F due to frame duplication in the broadcast
cycle. The access time and the tune-in time on different disks i may be different. We denote the average access
probability, the access time, and the tune-in time for frames on disks i as P i , Access i , and Tune i , respectively.
For disk i with frequency f_i, the expected number of frames scanned before the arrival of the desired frames, C, is given by Theorem 1. Therefore, the average access time and tune-in time are estimated as the probability-weighted sums over the disks, Σ_i P_i · Access_i and Σ_i P_i · Tune_i.
The study for a non-clustered broadcast cycle is especially important in multi-attribute indexing where cycle
can be clustered on at most one attribute, while query requests on other attributes get a reply via indexes built on
the non-clustered broadcast cycle. For a non-clustered broadcast cycle, M is greater than 1 and the client needs to
scan the entire broadcast to retrieve all the desired frames.
Table 4. Parameters of the cost models
Table 4 lists the parameter values used in the comparisons. Both access time and tune-in time are measured in
number of packets and are compared with respect to the number of distinct frames in a broadcast cycle which is
varied from 10^3 to 10^6. We made the following assumptions in the comparisons: a frame has a fixed capacity in packets, a tree node takes up a fixed number of packets which can contain search keys and pointers, a fixed number of frames are grouped together in an integrated signature, the index tree is balanced (all leaves are on the same level), and each node has the same number of children. In order to make a fair comparison, the number of sparse tree levels t of the hybrid method is set to the same as the number of replicated tree levels in the index tree method, which can be obtained via Equation (1).
A broadcast cycle with selectivity S > 1 is logically equal to a broadcast cycle with selectivity 1 and a data frame size S times that of the original broadcast cycle. Thus, in this paper, we only explore the case where the query selectivity S is 1.
For broadcast disks, we assume that three disks are adopted. The sizes of the fast, medium, and slow disks are, respectively, 1/10, 1/2.5, and 1/2 of the total number of frames, and the relative spin speeds are 3, 2, and 1. The aggregate client access probability for each disk is the same (i.e., 1/3 per disk). Within each disk, all data frames have equal average access probability. Therefore, the average access probability for each
data frame is inversely proportional to the size of the disk where the data frame is located. For a non-clustered
broadcast cycle, we vary the scattering factor M (i.e., from 1 to 200) to examine its impact on the performance of
the index methods.
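As a concrete reading of this configuration, the Python sketch below computes the per-frame access probability on each disk and the expected number of frames scanned before a desired frame arrives, using C = D/(2 f_i) from the derivation in the appendix; the total number of distinct frames is an arbitrary example value.

    F = 100_000                                   # distinct frames (example value)
    disks = [(1/10, 3), (1/2.5, 2), (1/2, 1)]     # (fraction of F on the disk, relative spin speed f_i)
    D = sum(round(frac * F) * f for frac, f in disks)   # broadcast length including duplication

    for frac, f in disks:
        frames_on_disk = round(frac * F)
        p_frame = (1 / 3) / frames_on_disk        # equal aggregate probability (1/3) per disk
        C = D / (2 * f)                           # expected frames scanned (Theorem 1, equal spacing)
        print(f"f_i={f}: frames={frames_on_disk}, P(frame)={p_frame:.2e}, C={C:.0f}")

    avg_C = sum((1 / 3) * D / (2 * f) for _, f in disks)   # probability-weighted average over the disks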
In what follows, we will first evaluate the access time, the tune-in time, and the indexing efficiency of index
methods for clustered broadcast cycle and then for non-clustered broadcast cycle. For the clustered broadcast
cycle, we consider both of the broadcast disks and flat broadcast as broadcast scheduling policies while for the
non-clustered broadcast cycle, we only consider the flat broadcast scheduling.
5.1 The Clustered Broadcast Cycle
In this section we study the access time and the tune-in time of the index methods for a clustered broadcast
cycle.
Figures
4 and 5 depict the access time and the tune-in time comparisons, where the y coordinate is in
logarithmic scale and the access time is the overhead with respect to non-index for broadcast disks scheduling.
First we consider the access time in Figure 4. The curves representing the access time overhead of the hybrid,
the signature, and the non-index methods (denoted as hybrid, sig, and non, respectively) overlap each other for
flat broadcast.
Figure 4. Access Time Overhead Comparisons for Clustered Cycle
Generally, amongst all broadcast scheduling and indexing methods, the non-index method with
broadcast disks gives the shortest access time which is proportional to the size of a broadcast cycle. For any
particular indexing methods, the access time for broadcast disks (denoted with BD in the figures) is always better
than that for flat broadcast because of the skewed client access pattern.
When we consider flat broadcast only, the access time for the signature and the hybrid methods is similar to
the non-index method as indicated by the overlapping curves in Figure 4 while the access time for the index tree
method gives an obviously worse access time. Compared with the non-index method, the index overhead for the
index methods (especially the signature and the hybrid methods) does not deteriorate the access time much for a
clustered broadcast cycle.
In the broadcast disks method, the broadcast cycle is longer than that scheduled in flat broadcast. Since the
longer the broadcast cycle, the higher the index overhead, all three index methods give a much worse access time
than the non-index BD. The signature method performs better than the hybrid and the tree methods. Since the
index tree is replicated in every minor cycle, its index overhead for broadcast disks is the highest. Thus, the
difference between the index tree method and the other two index methods for broadcast disks is much larger than
that for flat broadcast.
Next, we consider the tune-in time of the index methods. Figure 5 shows that the curves representing the index
tree method (denoted as tree) and the hybrid method are overlapping for both broadcast disks and flat broadcast.
The non-index methods give much worse results than the index methods. This suggests that indexing can improve
client tune-in time considerably. If we focus on the index methods only, the index tree method gives the best tune-in
time and the signature method has the worst tune-in time. Broadcast disks can also improve the tune-in time of
the index methods. As shown in Figure 5, the broadcast disks improve the tune-in time of the index methods and
such improvement for the non-index and the signature methods is more than for others.
In order to investigate the relationship between the tune-in time and the access time, we demonstrate in Figure
6 the indexing efficiency of indexing methods for various sizes of a broadcast cycle. The tune-in time saved and
the access time overhead is calculated with respect to the non-index method for broadcast disks and flat broadcast.
Intuitively, the larger the amount of tune-in time saved per unit access time overhead, the better the index methods.
We can observe that the amount of tune-in time saved per unit of access time overhead increases as the number of
frames in a cycle increases. The figure tells us that the signature method can give the largest amount of tune-in
time saved per unit of access time overhead and the index tree method gives the least amount of saving which is
much less than the other two methods. Indexing broadcast disks results in a smaller amount of tune-in time saved than indexing flat broadcast.
Figure 5. Tune-in Time Comparisons for Clustered Cycle
Figure 6. Indexing Efficiency for Clustered Cycle
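Indexing efficiency as used in these comparisons is the tune-in time saved per unit of access time overhead, both measured against the non-index case; the helper below states that directly (the sample packet counts are placeholders, not values read off the figures).

    def indexing_efficiency(access, tune, access_nonindex, tune_nonindex):
        saved = tune_nonindex - tune              # tune-in time saved by the index
        overhead = access - access_nonindex       # access time overhead introduced by the index
        return saved / overhead if overhead > 0 else float('inf')

    print(indexing_efficiency(access=1.05e6, tune=2.0e3,
                              access_nonindex=1.00e6, tune_nonindex=5.0e5))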
In conclusion, when a broadcast cycle is clustered by attributes, the hybrid scheme is the best when the access
time, the tune-in time, and indexing efficiency are considered. If only the tune-in time is considered, then the
index tree scheme shows the best performance. If we consider the indexing efficiency, then the signature is the
most efficient index method. Broadcast disks approach can improve both the access time and the tune-in time
when the client access patterns are skewed, although the improvement in the tune-in time is not as significant as
that in the access time.
5.2 The Non-Clustered Broadcast Cycle
In this section, we investigate the index methods for a non-clustered broadcast cycle (i.e., M ? 1). To examine
the influence of M on the system performance, we fix the number of frames in a cycle to 10 5 and vary M from
1 to 200. The access time overhead is obtained with respect to the non-index method. Figures 7 and 8 illustrate
the results. As expected, the scattering factor has great impact on the access time of the index tree method. Since
there is an index tree corresponding to every meta segment, as M is increased, the index tree overhead increases
rapidly.
Figure 7. Access Time Overhead vs Scattering Factor
Figure 8. Tune-in Time vs Scattering Factor
For the hybrid method, although there is a sparse index tree for each meta segment, the sparse index tree
overhead is very small and as M increases, the initial probe time for index tree node decreases. Therefore, M has
little influence on the access time in the hybrid method. As shown in Figure 8, the tune-in time of the index tree
and the hybrid methods goes up quickly as M is increased, while the tune-in time of the signature index method
remains the same. Since both the index tree and the hybrid methods need to probe each meta segment for the
possible arrival of the desired frames, the major advantage of the index tree and the hybrid methods, namely, short
tune-in time, disappears when M is greater than 33. However, there is no impact on the signature method for
both the access time and tune-in time when the scattering factor changes. This suggests that the index tree and the
hybrid methods are not applicable to a broadcast cycle with a large scattering factor.
Similar to the previous section, Figure 9 depicts the indexing efficiency with respect to different scattering
factors in a broadcast cycle. The tune-in time saved for the index tree is very low while the tune-in time saved for
the signature method is the highest.
Finally, we use the same parameter settings as in the clustered broadcast cycle case, but we assume that the
broadcast cycle is non-clustered with a scattering factor set to 100 (refer to Figures 10 and 11). The access time
overhead is obtained with respect to that of the non-index method.
Figure 9. Indexing Efficiency vs Scattering Factor
Figure 10. Access Time Comparisons for Non-Clustered Cycle
Similar to the clustered broadcast cycle, the access time
of the index tree method is much worse than that of the other two index methods. The signature method has the
closest access time to the non-index method. Since we assume that M is fixed at 100 for any broadcast length and
there is an index overhead for each meta segment, unlike the clustered cycle, the tune-in time of the index tree and
hybrid methods is not always better than that of the signature method. That is, for a small broadcast cycle (i.e., below a certain cycle length), the signature method has the best tune-in time among the three methods. When the length of a
cycle increases, the tune-in time of the signature method increases quickly due to false drops and becomes worse
than the other methods again. As in the case of clustered cycle, the tune-in time of the hybrid method is always a
little bit worse than that of the index tree method. In Figure 12, we illustrate the indexing efficiency for various
cycle lengths. All index methods display similar interrelation to that in the case of clustered cycle.
For both clustered and non-clustered broadcast cycle, we observe that the tune-in time of the signature schemes
is proportional to the length of the broadcast cycle, while the other two methods have the tune-in time independent
of the length of the broadcast cycle. The reason is that the size of the index tree can be adjusted automatically
according to the length of the broadcast cycle F, the height of the index tree h increases very slowly (n^h ≥ F), and only h affects the tune-in time of the clients.
Figure 11. Tune-in Time Comparisons for Non-Clustered Cycle
Figure 12. Indexing Efficiency for Non-Clustered Cycle
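The slow growth of h can be seen directly: for a balanced tree with fanout n over F entries, h is the smallest integer with n^h ≥ F. The fanout in the sketch below is an assumed example.

    from math import ceil, log

    def tree_height(F, fanout):
        # smallest h with fanout**h >= F
        return max(1, ceil(log(F, fanout)))

    for F in (10**3, 10**4, 10**5, 10**6):
        print(F, tree_height(F, 64))    # the height grows by one only when F grows ~64-fold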
6 Related Works
The basic idea of constructing index on broadcast data was investigated by a number of projects [IVB94b,
IVB96, LL96b]. To reduce the power consumption of clients, [IVB94a, IVB96] proposed two methods, (1, d) indexing and distributed indexing. In the (1, d) indexing method, the index tree is broadcast d times during one broadcast cycle: the full index tree is broadcast following every 1/d fraction of the file. All frames have an offset to the beginning of the next index segment. The first frame of each index segment has a tuple, with the first field being the attribute value of the record that was broadcast last and the second field being the offset to the beginning of the
next cycle. This is to guide the clients that have missed the required frames in the current cycle and have to tune
to the next cycle. Noticing that there is no need to replicate the entire index between successive data segments, the distributed indexing technique was developed; it interleaves and replicates the index tree with the data, in the sense that the most frequently accessed part of the index (the upper levels of the index tree) is replicated a number of times equal to the number of its children.
The project in [IVB94b] discussed the hashing schemes and a flexible indexing method for organizing broadcast
cycle. In the hashing schemes, instead of broadcasting a separate directory with the information frames, the
hashing parameters are included in the frames. Each frame has two parts: the data part and the control part. The
control part is the "investment" which helps guide searches to minimize the access time and the tune-in time. It
consists of a hash function and a shift function. The shift function is necessary since most often the hash function
is not perfect. In such a case there can be collisions and the colliding frames are stored immediately following the
frame assigned to them by the hashing function. The flexible indexing method first sorts the data in ascending (or
descending) order and then divides the cycle into p segments numbered 1 through p. The first frame in each of the
data segments contains a control part consisting of the control index. The control index is a binary index which, for a given key K, helps to locate the frame which contains that key. In this way, we can reduce the tune-in time.
The parameter p makes the indexing method flexible since depending on its value we can either get a very good
tune-in time or a very good access time.
[LL96b] investigated the signature techniques for flat data broadcasting. Three signature methods, simple
signature, integrated signature, and multi-level signature, were proposed and their cost models for the access time
and the tune-in time were given. Based on the models, they made comparisons for the performance of different
signature methods. Work in [LL96a] explored the influence of caching signatures at the client side on the system performance. Four caching strategies were developed and the tune-in time and the access time were compared.
With reasonable access time delay, all the caching strategies help in reducing the tune-in time for the two-level
signature scheme.
All those index methods can reduce the power consumption to some extent with a certain amount of access
overhead. However, the index techniques developed previously did not consider the characteristics of skewed access
patterns.
Recent work in [CYW97] developed an imbalanced index tree on broadcast data. The index tree is constructed
in accordance with data access frequencies in such a way that the expected cost of index probes for data access is
minimized. In contrast to [IVB96], variant fanouts for index nodes were also exploited. Since the cost of index probes takes up only a small part of the overall cost, such an imbalanced index tree gives only limited improvement.
As mentioned in the introduction, broadcast disks [ZFA94, AAFZ95] are an efficient technique for improving the overall access time under skewed data access patterns. In their later work, [AFZ96b] studied opportunistic prefetching from broadcast disks by the client, [AFZ96a] considered the case when updates are present in broadcast disks, and [AFZ97] studied the performance of hybrid data delivery in broadcast disk environments, where clients can retrieve their desired data items either by monitoring the broadcast channel (push-based) or by issuing explicit pull requests to the server (pull-based). These studies indicate that data prefetching and hybrid data delivery with caching can significantly improve performance over pure pull-based caching and pure push-based caching, while updates have no great influence on the system performance. [HV97] further developed an O(log(n)) time-complexity scheduling algorithm which determines the broadcast frequency of each data item according to data access patterns for both single and multiple broadcast channels. In their models, the lengths of data items are not necessarily the same. However, none of these studies explores indexing on broadcast disks.
7 Conclusion and Future Work
In a mobile environment, power conservation of the mobile clients is a critical issue to be addressed. An efficient
power conservative indexing method should introduce low access time overhead, consume low tune-in time, and
produce high indexing efficiency. Moreover, an ideal index method should perform well under both clustered and
non-clustered broadcast cycle, with different broadcast scheduling policies, such as flat broadcast and broadcast
disks.
In this paper, we evaluate the performance of power conservative indexing methods based on index tree and
signature techniques. Combining strengths of the signature and the index tree techniques, a hybrid indexing
method is developed in this paper. This method has the advantages of both the index tree method and the signature
method and has a better performance than the index tree method. A variant of the hybrid indexing method has been
demonstrated to be the best choice for multiple attributes indexing organization in wireless broadcast environments
[HLL98b].
Our evaluation of the indexing methods takes into consideration the clustering and scheduling factors which may be employed in wireless data broadcast. Access time, tune-in time, and indexing efficiency are the evaluation criteria for our comparisons. We develop cost models for the access time and tune-in time of the three indexing methods and produce numerical comparisons under various broadcast organizations based on the formulae.
Through our comparisons for both clustered and non-clustered data broadcast cycles, we find that the index
tree method has low tune-in time only for the clustered broadcast cycles or the non-clustered broadcast cycles
with low scattering factor. The index tree always produces high access time overhead. For a broadcast cycle with
high scattering factor, the signature method is the best choice. Since the signature method needs further filtering
to determine whether a data item really satisfies a query, the tune-in time for signature methods may be high.
However, variations of the data organization for the broadcast channels have very limited impact on the performance of the signature method. Moreover, its access time overhead is low. The hybrid method has the advantages of both the index tree method and the signature method. It performs well for clustered broadcast cycles or non-clustered cycles with a low scattering factor (i.e., low tune-in time similar to the index tree method and low access time overhead similar to the signature method). If we only consider the indexing efficiency, the signature method has the best performance across the various broadcast organizations.
Finally, through our comparisons for flat broadcast and broadcast disks, we observe that broadcast disks can
reduce the access time for any index methods and the tune-in time for the signature and non-index methods.
In a related study [HLL98b], we have studied the performance of multi-attribute index methods for wireless broadcast channels. Since the access time and the tune-in time of the index methods may be different for queries based on different attributes, we have estimated the average access time and tune-in time of the client according to the query arrival rate for each attribute.
In the future, we plan to incorporate the index schemes with data caching algorithms to achieve an improved
system performance and obtain a better understanding of the wireless broadcast systems.
Appendix
Broadcast disks were proposed to improve data access efficiency [AAFZ95]. The idea is to divide data frames
to be broadcast into broadcast disks based on their access frequency and then interleave data frames on these disks
into an information stream for broadcast. This imitates multiple disks each spinning at a different speed. The
relative speeds of disks are differentiated by the number of broadcast units on the disks (a broadcast unit may consist of one or many data frames, e.g., a cluster of data frames with the same attribute value). Data located on a disk with fewer broadcast units is scheduled for broadcast more frequently than data on a disk with many broadcast units. The relative speeds and broadcast frequencies of broadcast disks are inversely proportional to the number of broadcast units on those disks. Thus, data frames with higher demands are usually placed on a higher speed broadcast disk.
The broadcast units on broadcast disks, called chunks, have equal size (in a real implementation, chunks can be replaced by variable-sized data frames or a group of data frames). The broadcast schedule is generated by broadcasting a chunk from each disk and cycling through all the chunks sequentially over all the disks. A minor cycle is defined as a sub-cycle consisting of one chunk from each disk. Consequently, chunks in a minor cycle are repeated only once and the number of minor cycles in the broadcast equals the least common multiple (LCM) of the relative frequencies.
Unlike traditional disks, where the number and the capacity of the disks are fixed by hardware, broadcast disks have flexibility in deciding the number, the size, the relative spinning speed, and the data frame placement of each disk. Broadcast schedules can be programmed once the data frames, the relative speed of each disk, the number of data frames placed on the disks, and the size of each disk are determined.
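The schedule construction described above can be written down directly; the sketch below splits each disk into max_chunks/f_i chunks and emits one chunk per disk per minor cycle, using the seven pages and relative speeds of the example in Figure 13 as assumed input (the page labels are illustrative).

    from math import lcm

    def broadcast_program(disks, freqs):
        # disks: list of page lists; freqs: relative broadcast frequencies (fastest disk = largest value)
        max_chunks = lcm(*freqs)
        chunks = []
        for pages, f in zip(disks, freqs):
            n = max_chunks // f                     # number of chunks on this disk
            size = -(-len(pages) // n)              # ceiling division: pages per chunk
            chunks.append([pages[i * size:(i + 1) * size] for i in range(n)])
        cycle = []
        for i in range(max_chunks):                 # one minor cycle per iteration
            for c in chunks:
                cycle.extend(c[i % len(c)])         # one chunk from every disk
        return cycle

    # three disks with relative speeds 4, 2 and 1 holding 1, 2 and 4 pages
    print(broadcast_program([['a'], ['b', 'c'], ['d', 'e', 'f', 'g']], [4, 2, 1]))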
Figure 13. An Example of a Seven-page, Three-disk Broadcast Program
Figure 13 illustrates an example where seven chunks are divided into three ranges of similar average access probabilities [AAFZ95], each of which is assigned to a separate disk in the broadcast. In the figure, chunk C_ij refers to the j-th chunk of disk i. Chunks on the first disk are broadcast twice as frequently as chunks on the second one and four times as often as those on the slowest disk.
However, the reason that multi-disk broadcast can achieve better performance than a random broadcast schedule
and the expected access time for retrieving data frames from the broadcast disks were not given in [AAFZ95]. In
the following, we prove Theorem 1 used in the paper.
PROOF: The data frame d is scheduled to be broadcast f_i times in a cycle, and the length of the broadcast (in number of frames) is D. Let the inter-arrival gaps between consecutive copies of d be D_1, ..., D_{f_i}, where D_j is the number of frames between two consecutive copies of d, so that D_1 + ... + D_{f_i} = D. A client starts monitoring the channel at a random position; it falls into gap j with probability D_j/D and then waits, on average, about D_j/2 frames for the next copy of d. Therefore, the expected access time for d can be estimated as E = Σ_j (D_j/D)(D_j/2) = (1/(2D)) Σ_j D_j^2, which, for a fixed sum D, attains its minimum when all the gaps are equal. If all the f_i broadcasts of d are equally spaced, then D_j = D/f_i for every j. As a result, the minimum expected number of data frames retrieved before the desired one arrives for data on a disk with frequency f_i, denoted as C, is C = D/(2 f_i).
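A quick Monte Carlo check illustrates the argument above: equally spaced copies minimise the expected number of frames scanned, and the minimum agrees with D/(2 f_i). The cycle length and frequency below are arbitrary example values.

    import random

    def expected_wait(positions, D, trials=100_000):
        # average number of frames scanned from a random start until the next copy of d
        pos = sorted(positions)
        total = 0
        for _ in range(trials):
            start = random.randrange(D)
            total += min((p - start) % D for p in pos)
        return total / trials

    D, f = 1200, 4
    equal = [i * D // f for i in range(f)]          # equally spaced copies of d
    skewed = [0, 10, 20, 30]                        # the same number of copies, bunched together
    print(expected_wait(equal, D), D / (2 * f))     # both close to 150
    print(expected_wait(skewed, D))                 # considerably larger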
--R
Broadcast disks: Data management for asymmetric communications environments.
Dissemination updates on broadcast disks.
Prefetching from a broadcast disk.
Balancing push and pull for data broadcast.
Sleepers and workaholics: Caching strategies for mobile environments.
Indexed sequential data broadcasting in wireless mobile comput- ing
A comparison of indexing methods for data broadcast on the air.
Optimal channel allocation for data dissemination in mobile computing environments.
Efficient algorithms for a scheduling single and multiple channel data broadcast.
Energy efficiency indexing on air.
Power efficiency filtering of data on air.
Data on the air - organization and access
On signature caching of wireless broadcast and filtering services.
Using signature techniques for information filtering in wireless and mobile environments.
Adaptive data broadcast in hybrid networks.
Scheduling data broadcast in asymmetric communication environ- ments
Are disks in the air' just pie in the sky?
--TR
Power efficient filtering of data on air
Sleepers and workaholics
Energy efficient indexing on air
Broadcast disks
Balancing push and pull for data broadcast
Efficient indexing for broadcast based wireless systems
Signature caching techniques for information filtering in mobile enviroments
A study on channel allocation for data dissemination in mobile computing environments
Data on Air
Prefetching from Broadcast Disks
Disseminating Updates on Broadcast Disks
Adaptive Data Broadcast in Hybrid Networks
A Comparision of Indexing Methods for Data Broadcast on the Air
Indexed Sequential Data Broadcasting in Wireless Mobile Computing
Optimal Channel Allocation for Data Dissemination in Mobile Computing Environments
Efficient Algorithms for Scheduling Single and Multiple Channel Data Broadcast
Scheduling Data Broadcast in Asymmetric Communication Environments
--CTR
Quinglong Hu , Wang-Chien Lee , Dik Lun Lee, Indexing techniques for wireless data broadcast under data clustering and scheduling, Proceedings of the eighth international conference on Information and knowledge management, p.351-358, November 02-06, 1999, Kansas City, Missouri, United States
Qingzhao Tan , Wang-Chien Lee , Baihua Zheng , Peng Liu , Dik Lun Lee, Balancing performance and confidentiality in air index, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Jianting Zhang , Le Gruenwald, Prioritized sequencing for efficient query on broadcast geographical information in mobile-computing, Proceedings of the 10th ACM international symposium on Advances in geographic information systems, November 08-09, 2002, McLean, Virginia, USA
Qinglong Hu , Dik Lun Lee , Wang-Chien Lee, Performance evaluation of a wireless hierarchical data dissemination system, Proceedings of the 5th annual ACM/IEEE international conference on Mobile computing and networking, p.163-173, August 15-19, 1999, Seattle, Washington, United States
Jianliang Xu , Wang-Chien Lee , Xueyan Tang, Exponential index: a parameterized distributed indexing scheme for data on air, Proceedings of the 2nd international conference on Mobile systems, applications, and services, June 06-09, 2004, Boston, MA, USA
KwangJin Park , MoonBae Song , Chong-Sun Hwang, Adaptive data dissemination schemes for location-aware mobile services, Journal of Systems and Software, v.79 n.5, p.674-688, May 2006
Jianting Zhang , Le Gruenwald, Efficient placement of geographical data over broadcast channel for spatial range query under quadratic cost model, Proceedings of the 3rd ACM international workshop on Data engineering for wireless and mobile access, September 19-19, 2003, San Diego, CA, USA
Kwang-Jin Park , Moon-Bae Song , Chong-Sun Hwang, Broadcast-based spatial queries, Journal of Computer Science and Technology, v.20 n.6, p.811-821, November 2005
Wen-Chih Peng , Ming-Syan Chen, Efficient channel allocation tree generation for data broadcasting in a mobile computing environment, Wireless Networks, v.9 n.2, p.117-129, March
Jianliang Xu , Dik-Lun Lee , Qinglong Hu , Wang-Chien Lee, Data broadcast, Handbook of wireless networks and mobile computing, John Wiley & Sons, Inc., New York, NY, 2002
Sunil Prabhakar , Yuni Xia , Dmitri V. Kalashnikov , Walid G. Aref , Susanne E. Hambrusch, Query Indexing and Velocity Constrained Indexing: Scalable Techniques for Continuous Queries on Moving Objects, IEEE Transactions on Computers, v.51 n.10, p.1124-1140, October 2002
Chi-Yin Chow , Hong Leong , Alvin T. S. Chan, Distributed group-based cooperative caching in a mobile broadcast environment, Proceedings of the 6th international conference on Mobile data management, May 09-13, 2005, Ayia Napa, Cyprus | index technique;wireless data broadcast;power conservation;signature method |
607200 | Parallel Mining of Outliers in Large Database. | Data mining is a new, important and fast growing database application. Outlier (exception) detection is one kind of data mining, which can be applied in a variety of areas like monitoring of credit card fraud and criminal activities in electronic commerce. With the ever-increasing size and attributes (dimensions) of database, previously proposed detection methods for two dimensions are no longer applicable. The time complexity of the Nested-Loop (NL) algorithm (Knorr and Ng, in Proc. 24th VLDB, 1998) is linear to the dimensionality but quadratic to the dataset size, inducing an unacceptable cost for large dataset.A more efficient version (ENL) and its parallel version (PENL) are introduced. In theory, the improvement of performance in PENL is linear to the number of processors, as shown in a performance comparison between ENL and PENL using Bulk Synchronization Parallel (BSP) model. The great improvement is further verified by experiments on a parallel computer system IBM 9076 SP2. The results show that it is a very good choice to mine outliers in a cluster of workstations with a low-cost interconnected by a commodity communication network. | Introduction
Data mining or knowledge discovery tasks can be classified into four general categories: (a)
dependency detection (e.g. association rules [1]) (b) class identification (e.g. classification, data
clustering [6, 14, 17]) (c) class description (e.g. concept generalization [7, 11]), and (d) excep-
tion/outlier detection [12, 13]. Most research has concentrated on the first three categories while most of the existing work on outlier detection has lain in the field of statistics [2, 8]. (The author is currently in the Department of Computer Science at the University of Maryland, College Park, but the work in this paper was done when he was at the University of Hong Kong.) Although
outliers have also been considered in some existing algorithms, they are not the main target and
the algorithms only try to remove or tolerate them [6, 14, 17].
In fact, the identification of outliers can be applied in the areas of electronic commerce,
credit card fraud detection, analysis of performance statistic of professional athletes [10] and
even exploration of satellite or medical images [12]. For example, in a database of transactions
containing sales information, most transactions would involve a small amount of money and
items. Thus a typical fraud detection task can discover exceptions in the amount of money spent, the type of items purchased, the time and the location. As a second example, satellites nowadays can be used to take images of the earth using visible light as well as electromagnetic waves to detect targets
such as potential oil fields or suspicious military bases. Detection of exceptional high energy or
temperature or reflection of certain electromagnetic waves can be used to locate possible targets.
A simple algorithm called the Nested-Loop algorithm (NL) has been proposed in [13], which, however, has a complexity of O(kN^2) (k is the number of dimensions and N is the number of data objects), and the number of passes over the dataset is linear to N. By real implementation
and performance studies, we find that the major cost is from the calculation of distances between
objects. Though NL is a good choice when the dataset has high dimensionality, the large number
of calculations makes it unfavorable. A cell-based algorithm has been proposed in [13]. It needs
only at most three dataset passes. However, it is not suitable to high dimensions because its time
complexity is exponential to the number of dimensions. NL always outperforms the cell-based
algorithm when there are more than four dimensions [13]. In this paper, we will improve NL for
high dimensional dataset, which is very common in data warehouse.
One approach to improve the NL algorithm is to parallelize it. In this paper, the definition
of outliers and the original NL is described in the next few subsections in this introduction.
NL is improved to reduce the number of calculations. The resulted algorithm, ENL, is given in
Section 2. In Section 3, ENL is parallelized to further reduce the execution time in a shared-nothing
system. In Section 4, the performance and improvement are analyzed by using Bulk
Synchronization Parallel (BSP) model. After that, performance studies are given in Section
5. Finally, we give a discussion and related works in Section 6. This paper mainly focuses on
identification of distance-based outliers, although our parallel algorithm can also be modified
to perform the most expensive step of finding density-based outliers [4]. Related works on
density-based outliers are described in Section 6.
1.1 Distance-Based Outliers
As given in [13], the definition of an outlier is as the following:
Given parameters p and D, an object O in a dataset T is a DB(p, D)-outlier if at least a fraction p of the objects in T lie at a distance greater than D from O.
From the definition, the maximum number of objects within distance D of an outlier O is M = N(1 - p), where N is the number of objects. Let F be the underlying distance function that gives the distance between any pair of objects in T. Then, for an object O, the D-neighbourhood of O contains the set of objects Q ∈ T that are within distance D of O (i.e. F(O, Q) ≤ D).
This notion of outliers is suitable for any situation in which the observed distribution does not
fit any standard distribution. Readers are referred to [12] for the generalization of the notions
of distance-based outliers supported by statistical tests for standard distributions. Works on
density-based outliers are described in Section 6.
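The definition translates directly into a brute-force test, which is essentially what the NL family of algorithms computes without materialising the full distance matrix; the Euclidean metric and the toy data below are assumptions made only for illustration.

    from math import dist            # Euclidean distance; any metric F would do

    def db_outliers(T, p, D):
        # return the DB(p, D)-outliers of T by direct counting (O(k N^2) distance computations)
        N = len(T)
        M = N * (1 - p)              # maximum number of D-neighbours an outlier may have
        outliers = []
        for o in T:
            neighbours = sum(1 for q in T if q is not o and dist(o, q) <= D)
            if neighbours <= M:
                outliers.append(o)
        return outliers

    data = [(0, 0), (1, 0), (0, 1), (1, 1), (10, 10)]
    print(db_outliers(data, p=0.7, D=2.0))    # [(10, 10)]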
1.2 Assumption and Notation in NL Algorithm
Algorithm NL [13] is a block-oriented, nested-loop design. Here a block can involve one or more
disk I/O, i.e. it may be necessary to take one or more disk I/O to read in a block. In this paper,
a page needs one disk I/O access only. Thus, the total number of pages in a particular dataset
is constant, but the total number of blocks in that dataset will change with the size of a block.
NL algorithm is designed for a uniprocessor system with one local memory and one local
disk. The effect of cache is insignificant here. We make some assumptions here that are generally
acceptable in real systems: there is no additional disk buffer from the operating system beside
the buffer we will use in the algorithm; the disk access is sequential.
Let n be the number of blocks in the dataset T, k be the dimensionality, N be the number of objects in the dataset, and P be the number of pages contained in a block, so that the number of objects in a page is N_P = N/(nP). Let the time of accessing a page from the disk be t_I/O, and the time of computing the distance between 2 objects be t_comp, which is linear to the dimensionality.
1.3 Original NL
The original NL algorithm from [13] with some clarification is described here.
Assume that the buffer size (for storing dataset) is B% of the dataset size. Then the buffer
is first divided into 2 equal halves, called the first and second arrays. The dataset is read into
the first or second array in a predefined order. For each object t in the first array, distance
between t and all the other objects in the arrays are directly computed. A count of objects in
its D-neighbourhood is maintained.
Algorithm NL
1. fill the first array (of size B/2 % of the dataset) with a block of objects from T
2. for each object t in the first array, do:
(a) count the number of objects in the first array which are close to t (distance ≤ D); if the count > M, mark t as a non-outlier, where M = N(1 - p)
3. repeat until all blocks are compared to first array, do:
(a) fill the second array with another block (but save a block which has never served as the first
array, for last)
(b) for each unmarked object t in the first array, do:
i. increase its count by the number of objects in the second array close to t (distance ≤ D); if the count > M, mark t as a non-outlier
4. report unmarked objects in the first array as outliers
5. if second array has served as the first array before, stop; otherwise, swap the names of first and
second arrays and repeat the above from step 2.
The time complexity is stated as O(kN^2) in [13], where k is the dimensionality and N is
the number of objects in the dataset. The disk I/O time is considered briefly only in [13].
In fact, both the CPU time and I/O time should be considered. The detailed analysis of the
computation time and disk I/O time is given below.
For computation time, the total time for calculation of distance between pairs of objects
has an upper bound of N^2 t_comp, i.e. the number of calculations of distance is quadratic to the
number of objects in the dataset. The actual number of calculations depends on the distribution
of data, the location of data in the blocks, the distance D, and the number M , which in turn
depends on the fraction p. Since in usual case, the number of outliers should be small, so M is
small. No more calculation for a particular object t will be done if its count exceeds M . As a
result, the actual number of calculations is much less than N 2 , usually, within half, as shown in
Section 5.
For disk I/O time, since the dataset is divided into n blocks (each of size B/2 % of the dataset), the total number of block reads is n + (n - 2)(n - 1), and the number of passes over the dataset is (n + (n - 2)(n - 1))/n. The total number of page reads is nP + (n - 2)(n - 1)P. It is noted that P is directly proportional to the buffer size. With a fixed buffer size, n is directly proportional to N. So the disk I/O time is (nP + (n - 2)(n - 1)P) t_I/O, which has a complexity of O(N^2).
Example 1
The following is an example.
Consider 50% buffering and 4 blocks of the dataset denoted as A, B, C, D, i.e. each block contains 1/4 of the dataset. The order of filling the arrays and comparing is as the following:
1. A with A, then with B, C, D, for a total of 4 blocks reads;
2. D with D (no read required), then with A (no read), B, C, for a total of 2 blocks reads;
3. C with C, then with D, A, B, for a total of 2 blocks reads;
4. B with B, then with C, A, D, for a total of 2 blocks reads.
The table below shows the order of the blocks loaded, and the blocks staying in the two arrays of the buffer. Each row shows a snapshot after step 1 or step 3 (a).

disk I/O order | block read | Array 1 | Array 2
1  | A | A | -
2  | B | A | B
3  | C | A | C
4  | D | A | D
5  | B | D | B
6  | C | D | C
7  | A | C | A
8  | B | C | B
9  | A | B | A
10 | D | B | D

The total number of disk I/O is n + (n - 2)(n - 1) = 4 + 2 × 3 = 10 blocks. The total number of dataset passes is 10/4 = 2.5.
Enhanced NL
In NL, there are redundant block reading and comparison. In this section, a new order is
proposed which results in reduced computation time and disk I/O time. The arrangement is
that: in each turn, the blocks are read into the second array in a predefined order until the end
of the series of ready blocks is reached, then the block in the first array is marked as done, the
names of the two arrays are swapped and the order is reversed. The above is repeated until all
blocks are done. The resultant is Enhanced NL (ENL) Algorithm, which is described below.
For each object in the dataset, there is a count.
Algorithm ENL
1. label all blocks as "ready" (the block is either in a "ready" or a "done" state)
2. fill the first array (of size B/2 % of the dataset) with a block of objects from T
3. for each object t in the first array, do:
(a) increase the count by the number of objects in the first array which are close to t (distance ≤ D); if the count > M, mark t as a non-outlier
4. set the block-reading order as "forward"
5. repeat until all "ready" blocks (without marked as done) are compared to first array in the
specified block-reading order, do:
(a) fill the second array with next block
(b) for each object t i in the first array, do:
i. for each object t_j in the second array, if object t_i or t_j is unmarked, then if dist(t_i, t_j) ≤ D:
A. increase count_i and count_j by 1; if count_i > M, mark t_i as a non-outlier and proceed to the next t_i; if count_j > M, mark t_j as a non-outlier
6. report unmarked objects in the first array as outliers
7. if second array is marked as done, stop; otherwise, mark the block in the first array as
done, reverse the block-reading order, swap the names of first and second arrays and repeat
the above from step 3.
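The core saving of ENL is that every distance computation updates the counts of both objects involved, so each pair of blocks has to be compared only once; the in-memory Python sketch below shows that pairwise pass (block I/O and the early-termination marking are left out for brevity, and the toy data is an assumption).

    from math import dist
    from itertools import combinations

    def enl_counts(blocks, D, M):
        # count D-neighbours with symmetric updates; objects left with count <= M are outliers
        counts = {}
        pairs = [(i, i) for i in range(len(blocks))] + list(combinations(range(len(blocks)), 2))
        for bi, bj in pairs:                          # every pair of blocks is visited exactly once
            for ii, o in enumerate(blocks[bi]):
                for jj, q in enumerate(blocks[bj]):
                    if bi == bj and jj <= ii:
                        continue                      # within a block, each object pair only once
                    if dist(o, q) <= D:
                        counts[(bi, ii)] = counts.get((bi, ii), 0) + 1
                        counts[(bj, jj)] = counts.get((bj, jj), 0) + 1
        return [blocks[bi][ii] for bi in range(len(blocks))
                for ii in range(len(blocks[bi])) if counts.get((bi, ii), 0) <= M]

    print(enl_counts([[(0, 0), (1, 1)], [(0, 1), (9, 9)]], D=2.0, M=1))   # [(9, 9)]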
Although the time complexity of ENL is still O(kN^2), where k is the dimensionality and N
is the number of objects in the dataset, the cost of computation and disk I/O are both reduced
compared with those of NL.
For computation time, the upper bound of the total time for calculation of distance between pairs of objects is about (N^2/2) t_comp, which is still linear to the dimensionality. In the original NL algorithm, only the count of the object in the first array is updated. However, in ENL, the counts of both objects are updated for each comparison. Thus, the upper
However, in ENL, the counts of both objects are updated for each comparison. Thus, the upper
bound of the number of calculations of distances is reduced to almost half (compared with NL).
For the actual reduction of the number of calculations, once again, it depends on the distribution
of data, the locations of data in the blocks, the distance D, and the number M , which in turn
depends on the fraction p. The comparison of the performance of NL and ENL is shown in Section
5.
For simplicity, the upper bound of the number of distance calculations between the objects in the same block was said to be (N_P P)^2, but in fact only (N_P P)((N_P P) - 1)/2 calculations are needed.
For disk I/O time, if the dataset is divided into n blocks, then the total number of block reads is n + (n - 1)(n - 2)/2, and the number of passes over the dataset is (n + (n - 1)(n - 2)/2)/n. The total number of page reads is nP + (n - 1)(n - 2)P/2. With a fixed buffer size, n is directly proportional to N. So the disk I/O time is (nP + (n - 1)(n - 2)P/2) t_I/O, which still has a complexity of O(N^2), but with a constant factor nearly half of that of NL.
Example 2
Example 1 is extended here to illustrate ENL.
Consider 50% buffering and 4 blocks of the dataset denoted as A, B, C, D, i.e. each block contains 1/4 of the dataset.
The order of filling the arrays and comparing is as the following:
1. A with A, then with B, C, D, for a total of 4 blocks reads;
2. D with D (no read required), then with C, B for a total of 2 blocks reads;
3. B with B, then with C, for a total of 1 blocks reads;
4. C with C, for a total of 0 blocks reads.
The table below shows the order of the blocks loaded, and the blocks staying in the two arrays of the buffer. Each row shows a snapshot after step 2 or step 5 (a).

disk I/O order | block read  | Array 1 | Array 2
1 | A           | A | -
2 | B           | A | B
3 | C           | A | C
4 | D           | A | D
5 | C           | D | C
6 | B           | D | B
7 | C           | B | C
- | (no read)   | C | B

The total number of disk I/O is 4 + 2 + 1 + 0 = 7 blocks. The total number of dataset passes is 7/4 = 1.75.
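The two read-count formulas derived above can be checked side by side; for n = 4 they give the 10 and 7 block reads of Examples 1 and 2.

    def nl_block_reads(n):
        return n + (n - 2) * (n - 1)

    def enl_block_reads(n):
        return n + (n - 1) * (n - 2) // 2

    for n in (4, 6, 10):
        print(n, nl_block_reads(n), enl_block_reads(n),
              nl_block_reads(n) / n, enl_block_reads(n) / n)   # block reads and dataset passes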
3 Parallel ENL
PENL (Parallel ENL) is a parallel version of ENL, running in a shared-nothing system. Actually,
when running on a single processor, PENL almost reduces to ENL. Most of the block reading in ENL is
replaced by transfer of blocks among processors through the communication network. Its major
advantage is distributing the costly computations nearly evenly among all processors.
3.1 Assumptions and Notation
To extend the ENL to PENL, the assumptions and notations are extended. For simplicity, it
is assumed that in the shared-nothing system, each node has only one processor. Each node
has its own memory and local disk. The dataset is distributed equally in size to the local disk
of each node without overlapping. Communication is done by message passing. The network
architecture is designed such that each node can send message and receive message at the
same time. (For simplicity of analysis in later section, we make the previous assumption. This
requirement is not strict. At least it is required that internode communication is possible among
nodes.) Besides, the nodes are arranged in a logical ring so that each node has two neighbour
nodes. Logical arrangement means that physically the network can be in other architecture, e.g.
bus, which does not affect the effectiveness of our algorithm, but only the performance.
Let n be the number of blocks in the dataset, nP be the total number of pages of the dataset, k be the dimensionality, N be the number of objects in the dataset, and P be the number of pages contained in a block, so that the number of objects in a page is N_P = N/(nP) and the number of objects in a block is N/n = N_P P. Let the time of computing the distance between 2 objects
. Let the time of computing the distance between 2 objects
be t comp , the time of internode communication between 2 nodes to transfer a page of data be
t comm , and the time of accessing a page from the local disk be t I=O . Let p be the number of
processors or nodes, m be the size of local memory used for disk buffer.
To simplify the algorithm and analysis, it is assumed that the local memory buffers are all
the same size for all nodes, and so are the data in local disk; and the number of data blocks in a
local disk is an integer, i.e. number of pages in dataset (nP ) is a multiple of the product of the
number of processors and the number of pages contained in a block (pP ).
3.2 Algorithm
Each node has part of the dataset in its local disk. The number of pages of local data is nP/p, and the number of objects in local data is N/p. Each node has a local memory of size m, which is divided into 3 arrays. Each array can contain a block with the size of P pages. The first and second arrays function similarly to those in ENL, while the third array is used as a temporary buffer to store data received from a neighboring node. Besides, a count of the objects in the D-neighborhood of each object is maintained.
PENL is modified from ENL. The basic principle is: in a node, each time after a block is read from the node's local disk and all distance calculations on it are done, the block is transferred to one neighbor node, and distance calculations are done using the block received from the node's other neighbor node. This is repeated until the block has passed all neighbors; then the node reads another block from the local disk and repeats the above. Most of the disk I/O operations are replaced by relatively fast internode communication. The huge number of calculations is now distributed over all nodes, which greatly reduces the execution time, i.e. the response time.
Algorithm PENL(node id x)
1. label all blocks as "ready" (the block is either in a "ready" or a "done" state)
2. fill the first array with a block of objects
3. set the block-reading order as "forward"
4. set counter b to 0, set counter s to 0
5. repeat until all "ready" blocks are compared to first array in the specified block-reading order,
do:
(a) set counter c to 0
(b) for each object t in the first array, do:
i. increase the count by the number of objects in the first array which are close to t (distance <= D); if the count > M, mark t as a non-outlier
(c) if set b to 1 and go to step (f)
(d) if b != 0, fill the second array with the next block
(e) for each object t_i in the first array, do:
i. for each object t_j in the second array, if object t_i or t_j is unmarked, then if dist(t_i, t_j) <= D:
A. if t_j is already marked, increase count_i by 1; otherwise, increase count_i and count_j by 1; if a count exceeds M, mark the corresponding object as a non-outlier
reverse the order of execution of steps (g) and (h)
send the data in the first array to the neighbor node
send the data in the second array to the neighbor node
receive the data from the other neighbor node and store it in the temp buffer (third array)
(i) increment counter c by 1, swap the names of the second array and third array; if all p - 1 foreign blocks have been processed, continue the iteration in step 5, otherwise go to step 5(e)
6. if second array is marked as done, report unmarked objects in the first array as outliers; otherwise,
mark the block in the first array as done, reverse the block-reading order, swap the names of first
and second arrays and repeat the above from step 4.
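The ring-passing principle of PENL can be illustrated with a small, self-contained Python sketch. This is not the authors' implementation: it simulates the p nodes sequentially in one process, uses plain lists of points for blocks, and omits the two-array pairing and early marking, but it shows how every node's data visits every other node via the logical ring so that the D-neighbor counts of the locally owned objects become complete, which is the effect PENL achieves block by block.

    import random

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    def penl_like(points, p, D, M):
        """Simplified sequential simulation of PENL's ring passing.

        points : list of k-dimensional tuples, split evenly into p 'nodes'.
        Each node counts, for its own objects, the D-neighbors found in its
        local data and in every block that visits it via the logical ring.
        Objects with at most M neighbors are reported as outliers.
        """
        n_local = len(points) // p
        local = [points[i * n_local:(i + 1) * n_local] for i in range(p)]
        counts = [[0] * len(local[x]) for x in range(p)]

        for r in range(p):                      # p ring rounds (round 0 = local data)
            for x in range(p):                  # every node works in parallel in PENL
                visiting = local[(x - r) % p]   # block currently held in the temp buffer
                for i, t in enumerate(local[x]):
                    for s in visiting:
                        if s is not t and dist(t, s) <= D:
                            counts[x][i] += 1
            # in PENL, a barrier synchronization and a block transfer happen here

        return [t for x in range(p) for i, t in enumerate(local[x])
                if counts[x][i] <= M]

    random.seed(0)
    data = [tuple(random.gauss(0, 1) for _ in range(3)) for _ in range(198)]
    data += [(8.0, 8.0, 8.0), (-9.0, 7.0, 5.0)]   # two obvious outliers
    print(penl_like(data, p=4, D=2.0, M=10))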
The time analysis of PENL is given below briefly. Later a detailed analysis will be given
using the Bulk Synchronous Parallel (BSP) model.
For each node, the operations are similar to ENL, except that each block is transferred to other nodes after the computations on it are done. Thus, in brief, the upper bound of the computation time in each node is O(N^2 t_comp / p), i.e. linear in the reciprocal of the number of processors.
Each node has n/p blocks of local data. The local disk I/O time is the same as that of ENL executing with n/p blocks of data, so the disk I/O time for each node is P[n/p + (n/p - 1)(n/p - 2)/2] t_I/O. The internode communication time in a node is (p - 1)(n/p)((n/p) + 1)/2 * P t_comm.
It is obvious that the upper bound of the computation time is linear to the reciprocal of the
number of processors, the internode communication time decreases with increasing number of
processors, the disk I/O time is quadratic to the reciprocal of the number of processors. Please
note that P changes with the size of buffer. If the total size of all local memory is fixed, i.e. pP
is a constant, then the internode communication time in each node only varies with the dataset
size, but not the number of processors, while the computation time and local disk I/O time
in each node is linear to the reciprocal of the number of processors. Although only the upper
bound of the computation time is shown, actually the calculations are distributed quite evenly
among all the nodes, and so it is nearly linear to the reciprocal of the number of processors, as
shown in the performance studies in Section 5.
3.3 Example
The following gives an example of execution of PENL on a dataset of 16 blocks using four nodes. Each
node has four local blocks because the 16 blocks are evenly distributed in the four nodes. The four local
blocks of node x are denoted as Ax ; Bx ; Cx ; Dx .
The order of filling the arrays by disk I/O or internode communication and comparing in node 0 is
as the following:
1. A0 with A0 (I/O), A1, A2, A3 (3 communications), B0 (I/O), B1, B2, B3 (3 communications), C0 (I/O), C1, C2, C3 (3 communications), D0 (I/O), D1, D2, D3 (3 communications), for a total of 4 block reads and 12 communications;
2. D0 with D0 (no read required), D1, D2, D3 (3 communications), C0 (I/O), C1, C2, C3 (3 communications), B0 (I/O), B1, B2, B3 (3 communications), for a total of 2 block reads and 9 communications;
3. B0 with B0 (no read required), B1, B2, B3 (3 communications), C0 (I/O), C1, C2, C3 (3 communications), for a total of 1 block read and 6 communications;
4. C0 with C0 (no read required), C1, C2, C3 (3 communications), for a total of 0 block reads and 3 communications.
For each node, the number of disk I/O is 7 blocks. The number of dataset passes is 7/4 = 1.75; the internode communication is 3(4 + 3 + 2 + 1) = 30 times.
The table in Appendix A shows the details: the order of the blocks loaded in node 0 , the blocks
transferred to and from node 0, and the blocks staying in the two arrays of the buffer of node 0.
If we run ENL using a single node with the same amount of memory as that in a node in
PENL, then the total number of disk I/O will be 16 + (15)(14)/2 = 121 blocks. The total number of dataset passes is 121/16 = 7.5625. The ratio of disk I/O in ENL to that in PENL using 4 nodes is 7.5625/1.75 = 4.32. The improvement is very significant. However, if we give the
ENL the amount of memory same as the total sum of memory of all nodes, then we do not gain
any benefit in the disk I/O because the size of a block is larger now, and so the total number
of blocks will be much less than 16. Nevertheless, we still have a significant improvement in
performance because the computation time, which is the major cost, is nearly evenly distributed
by other nodes in PENL, as shown in Section 5.
3.4 Optimization
We can do the following optimization on PENL. For the outer iterations in step 5 (5(a) to 5(i)) in which every node circulates its own first-array block, we can do only approximately the first half of all inner iterations (5(e) to 5(i)) to reduce redundancy of block transmission and computation. For the example in Section 3.3, the inner iterations for the computations of blocks A0 and A3, D0 and D3, B0 and B3, and C0 and C3 are skipped. Then the upper bound of the cost of computation of such an outer iteration can be reduced from p(N_P P)^2 distance calculations to roughly half that number, and the upper bound of the total computation cost is reduced accordingly.
Therefore the computation-cost-reduction ratio (the ratio of the reduction of the upper bound of the computation cost achieved by the optimization of PENL to the original upper bound) decreases as the number of blocks in each node, n/p, grows. For the example in Section 3.3, the reduction is 0.1. In practice, we have a large number of local blocks, so the reduction will not be significant.
In the rest of this paper, unless otherwise specified, we refer to the original (unoptimized) PENL algorithm, for simplicity of the implementation and analysis of the parallel algorithm.
4 Theoretical Analysis using the BSP Model
Before studying the performance of real execution of the algorithms, the theoretical analysis
is given here. The BSP (Bulk Synchronous Parallel) model [3] is used to analyze the PENL
algorithm because the hardware and software characteristics of the model match with PENL's
platform requirement and working principle.
A BSP computer consists of a set of processor-memory pairs, a communication network that
delivers messages in a point-to-point manner, and a mechanism for the efficient barrier synchronization
of the processors. The BSP computer is a two-level memory model, i.e. each processor
has its own physically local memory module and all other memory is non-local and accessible in
a uniformly efficient way. PENL requires each node to have a local memory buffer. The accesses
of other blocks in the buffers of other nodes are done by synchronous communication. Block
transfers are done in a node-to-node manner.
The BSP computer operates in the following way. A computation consists of a sequence of
parallel supersteps, where each superstep is a sequence of steps, followed by a barrier synchronization
at which point any non-local memory accesses take effect. PENL requires a barrier
synchronization for block transfers.
During a superstep, each processor has to carry out a set of programs or threads, and it
can do the following: (i) perform a number of computation steps, from its set of threads, on
values held locally at the start of the superstep; (ii) send and receive a number of messages
corresponding to non-local read and write requests. During each superstep, PENL performs
computations of items in one or two blocks, accesses disk for loading a new block, and executes
block transmissions.
As a simple model that bridges hardware and software, BSP model provides portability across
diverse platforms with predictable efficiency. It can be seen that the model is very suitable for
PENL because PENL has coarse granularity and each superstep consists of a lot of distance
calculations followed by message passing.
4.1 Cost Analysis
Define the following variables:
L: barrier, synchronization cost
d: ratio of the time (cost) of local disk I/O accessing an object
i:e: time of local disk I=O accessing a page
to the time of computation on a distance between
two objects
g: ratio of the time of internode communication of transferring an object
i:e:
time of internode communication of transferring a page
to the time of computation
on a distance between two objects
Costs and barriers are added in the PENL algorithm, as shown below:
Algorithm PENL(node id x)
1. label all blocks as "ready" (the block is either in a "ready" or a "done" state)
2. fill the first array with a block of objects (cost dN_P P: there are P pages in a block and N_P objects in a page)
3. set the block-reading order as "forward"
4. set counter b to 0
5. repeat until all "ready" blocks are compared to first array in the specified block-reading order,
do:
(a) set counter c to 0, set counter s to 0
(b) for each object t in the first array, do:
i. increase the count by the number of objects in the first array which are close to t (distance <= D); if the count > M, mark t as a non-outlier
(c) if set b to 1 and go to step (f)
(d) if b != 0, fill the second array with the next block (cost dN_P P)
(e) for each object t_i in the first array, do:
i. for each object t_j in the second array, if object t_i or t_j is unmarked, then if dist(t_i, t_j) <= D:
A. if t_j is already marked, increase count_i by 1; otherwise, increase count_i and count_j by 1; if a count exceeds M, mark the corresponding object as a non-outlier
(f) set a barrier, if reverse the order of execution of steps (g) and (h)
send the data in the first array to the neighbor node
send the data in the second array to the neighbor node
receive the data from the other neighbor node and store it in the temp buffer (third array)
(i) set a barrier, increment counter c by 1, swap the names of the second array and third array; if all p - 1 foreign blocks have been processed, continue the iteration in step 5, otherwise go to step 5(e)
6. if second array is marked as done, report unmarked objects in the first array as outliers; otherwise,
mark the block in the first array as done, reverse the block-reading order, swap the names of first
and second arrays and repeat the above from step 4.
Therefore the total cost of the algorithm is the sum of four terms; the derivation can be found in Appendix B. The first term is computation, the second one is disk I/O, the third one is communication, and the last one is synchronization. Please notice that the time of computation is only an upper bound. Therefore this theoretical analysis does not give a reliable value of the actual execution time, but it still acts as a good reference for the comparison with ENL later.
When the block size, the page size and the object size are constant, then it can be found that, if the dataset size is large (N >> pN_P P, i.e. the local block number is large),
- the computation cost is quadratic to the dataset size and linear to the reciprocal of the number of processors;
- the disk I/O cost is quadratic to the dataset size and quadratic to the reciprocal of the number of processors;
- the communication cost is quadratic to the dataset size and linear to the reciprocal of the number of processors;
- the synchronization cost is quadratic to the dataset size and linear to the reciprocal of the number of processors.
Please note that P changes with the size of the buffer. If the total size of all local memories is fixed, i.e. pP is a constant, then all costs are still quadratic to the dataset size if the dataset size is large (N >> pN_P P, i.e. the local block number is large); besides,
- the computation cost is still linear to the reciprocal of the number of processors;
- the disk I/O cost is now linear to the reciprocal of the number of processors;
- the synchronization cost is now linear to the number of processors.
The above analysis tells us that when the total memory is fixed, then it is still beneficial to
increase the number of processors as it is shown that the major cost, the computation, is linear
to the reciprocal of the number of processors.
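A small cost-model sketch in Python (not part of the paper; the constant factors and the exact formulas are assumptions based on the per-node counts derived in Section 3.2) can be used to see how the four cost components scale with p when the total buffer is fixed.

    def penl_costs(n, P, N_P, p, t_comp=1.0, t_io=20.0, t_comm=5.0, L=2.0):
        """Rough per-node cost estimates for PENL (illustrative assumptions only).

        n  : number of blocks in the dataset     P  : pages per block
        N_P: objects per page                    p  : number of processors/nodes
        The formulas assume roughly (n/p)(n/p + 1)/2 outer iterations per node,
        p blocks compared per outer iteration, ENL-style disk reads on n/p local
        blocks, and (p - 1) block transfers per outer iteration, each followed
        by a barrier.
        """
        n_local = n // p
        outer = n_local * (n_local + 1) // 2
        comp = outer * p * (N_P * P) ** 2 * t_comp          # upper bound on distance calcs
        io = P * (n_local + (n_local - 1) * (n_local - 2) // 2) * t_io
        comm = (p - 1) * outer * P * t_comm
        sync = (p - 1) * outer * L
        return comp, io, comm, sync

    # fixed total buffer: pP constant, so P shrinks as p grows
    for p in (2, 4, 8, 16):
        P = 64 // p
        n = 4000 // P            # 100000 objects, 25 objects per page => 4000 pages
        print(p, [round(c, 1) for c in penl_costs(n, P, 25, p)])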
On the other hand, if the number of processors is kept unchanged but the buffer size in each node (i.e. the block size P) varies, then the computation cost is linearly proportional to the number of pages in a block (P). Thus, with a smaller block size, fewer computations are necessary. Besides, the local block number n/p increases, which makes the computation-cost-reduction ratio of the optimization of the algorithm (in Section 3.4) become smaller. However, it is not recommended to use a buffer that is too small, because when P is too small, N >> pN_P P and the effect of the reduction of the computation cost by a small P will be very small. Besides, a smaller block size also increases the cost of disk I/O, communication and synchronization.
4.2 Comparison of PENL with ENL
What about comparing PENL with ENL when both are given the same amount of memory?
For ENL, the corresponding cost has the same form, where P_1 is the number of pages in a block in ENL. In ENL, the buffer is divided into two arrays, so the total amount of memory is 2P_1. In PENL, each local buffer is divided into three arrays, so the total amount of memory is 3pP. Giving the same amount of memory to PENL and ENL, 2P_1 = 3pP, i.e. P_1 = 3pP/2. A better implementation could divide each local buffer into only two arrays. However, here and in later sections, we will still choose three arrays in order to give an advantage to the sequential algorithms in the comparisons, but we can still show that our parallel algorithm outperforms them.
Assume that, in the worst case, the ratio of the number of calculations actually done in ENL to the total number of calculations actually done in all nodes in PENL is 1/2. Let f be the fraction of calculations actually done in ENL, so the corresponding fraction for PENL is 2f. In ENL the computation cost is proportional to f, while in PENL the computation cost per node is proportional to 2f/p, so the ratio C_ENL,comp / C_PENL,comp is linear to the number of processors. Thus it is always a better choice to use PENL even if the total buffer size is fixed.
5 Performance Studies
5.1 Experimental Setup and Implementation
In our experiments, our base dataset is a 248-object dataset consisting of trade index numbers of HKSAR from 1992 to March 1999 [5]. Each object is from one of the four categories: imports, domestic exports, re-exports, and total exports. The four categories have equal numbers of objects. Each object has six attributes: index value, year-on-year percentage change of index value, index unit value, year-on-year percentage change of index unit value, index quantum, and year-on-year percentage change of index quantum. Since this real-life dataset is quite small, and we want to test our algorithms on a large, disk-resident dataset, we generated a large number of objects simulating the distribution of the original dataset. In our testing, the distance D is defined so that the number of outliers is restricted to within a few percent of all objects, to simulate the real situation.
The programs were run in an IBM 9076 SP2 system installed in the University of Hong
Kong. The system consists of three frames. Each frame consists of 16 160 MHz IBM P2SC
RISC processors. Individual node has its own local RAM (128 MB or 256 MB) and local disk
storage (2 Gb, for system files and local scratch spaces). Each node in a frame is interconnected
by high performance switches and the three frames are also linked up by an inter-frame high
performance switch. The theoretical peak performance for each processor is 640 MFLOPS.
In our tests, the sequential programs and parallel programs were run in dedicated mode using
loadleveler (the batch job scheduler). In Section 5.2, NL and ENL were run in another system
because the SP2 system has a time limit of 10 hours on running. It is a Sun Enterprise Ultra 450
with 4 UltraSPARC-II CPU running at 250MHz, with 1GB RAM and four 4.1GB hard disks.
In SP2, in order to make the comparison fair, we will fix the total amount of memory of all
nodes for PENL to be the same as that for NL and ENL, which is able to hold 75000 objects.
The number of objects is chosen so that the number of blocks in our test can be more reasonable.
As a result, the number of objects in a block (N_P P) for NL, and for PENL with 2, 4, 8, 16 processors, is 37500, 12500, 6250, 3125, 1563 respectively. The number of objects is 50000, 100000, 200000, 400000, 800000, which implies that the number of blocks is 2, 4, 8, 16, 32 for PENL and 2, 3, 6, 11, 22 for NL.
We have implemented the three algorithms, NL, ENL and PENL using C. MPI (Message-
Passing Interface) library was used in PENL for message passing among multiple processors
[15].
For PENL, better implementation can be made so that it is sufficient to divide each local
buffer into two arrays, rather than three arrays. However, we still chose three arrays for simplicity
in order to give advantage to the sequential algorithms for comparisons but we can still show
that our parallel algorithm PENL outperforms them.
It should be noted that for PENL, in each node, a part of memory is needed to act as counts
for all objects. The decrease in memory to act as counts will decrease P (the number of pages in
a block) and increase the total cost. However, the addition of the cost is small compared with
the improvement from NL. It is because an object in a database usually contains tens or even
hundreds of attributes, which may be integers, floating points or even strings. The size of a
count is very small compared with the size of an object, so P will only decrease a bit. When the
total number of objects is very large, it is undesirable to hold all counts in memory; then
the counts can be stored in local disks. The total size of the counts is very small compared with
the size of datasets. Thus, the extra disk I/O time accessing the counts affects the performance
a bit only. In our implementation of ENL and PENL, we have the counts resident in disk and
we load them only if they are required. This method is good, but it induces extra disk I/O. In our experiments, we decided to define an object to have six dimensions (long integer data type),
in order to make the effect of reading and writing the counts more significant. However, our
results show that the effect is very minor, compared with the reduction of computation cost.
Besides, it needs extra communication to transfer the counts of objects to the node containing
the objects in its disk, so that the outliers can be reported as soon as possible. The better way
is that, in the end, the counts are gathered to a node and then the outliers are reported from
combining the counts. The extra communication cost is little compared with the computation
cost. We chose the second method in our implementation.
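The "second method" (gathering the counts at the end and reporting the outliers from the combined counts) can be sketched with mpi4py; this is an illustrative reconstruction, not the authors' C/MPI code, and the array layout (one integer count per object, indexed by a global object id) is an assumption.

    # Sketch of gathering per-object neighbor counts at the end of PENL (assumed layout).
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    n_objects = 1000                      # hypothetical global number of objects
    D_count_local = np.zeros(n_objects, dtype=np.int64)
    # ... during the ring rounds each node adds the D-neighbors it finds
    # for any object into D_count_local[global_id] ...

    M = 10
    total = np.zeros_like(D_count_local) if rank == 0 else None
    comm.Reduce(D_count_local, total, op=MPI.SUM, root=0)   # combine all partial counts

    if rank == 0:
        outlier_ids = np.nonzero(total <= M)[0]
        print("outliers:", outlier_ids[:20], "...")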
The final point to note is the computer architecture. Each processor has its own cache, so the more processors there are, the larger the total cache capacity. Thus the hit ratio can be higher and the performance is further enhanced. Besides, with existing workstations, a low-cost cluster can be formed to run PENL, rather than installing a new, advanced but costly supercomputer. Although our experiments were conducted on a supercomputer, our results show that the communication cost is very minor. Thus, a low-cost communication network is sufficient.
5.2 NL vs ENL
In this section we will compare the performance of NL and ENL.
Figure 1: Table of comparisons of NL and ENL for 2000 and 8000 objects (rows: execution time, computation time, disk I/O time, number of distance calculations, and ratio of ENL to NL execution time). The numbers of distance calculations are 974467 (NL) and 618697 (ENL) for 2000 objects, and 23095637 (NL) and 17763925 (ENL) for 8000 objects; the ratios of ENL to NL execution time are 0.7459 and 0.8487.
Figure 2: Table of comparisons of NL and ENL (cont.) for 32000 and 128000 objects; the ratios of ENL to NL execution time are 0.8636 and 0.8743.
From Figure 3 we can see that ENL is better than NL, although the improvement is not very great. Moreover, from the tables in Figures 1 and 2, it is found that the major cost comes from the computation of distances, which can be greatly reduced in PENL as we will show later. Besides, we can see that the increase in execution time is approximately quadratic in the number of objects, i.e. the execution time complexity is O(N^2), indicating that it is impractical to use NL or ENL to deal with a large number of objects. However, PENL can help to reduce the time.
5.3 Sequential vs Parallel
Here we will compare the performance of the sequential program NL and the parallel program PENL. Figures 4 and 5 show that PENL with various numbers of processors outperforms NL, whatever the number of objects is. Even when the number of processors is only two, the performance is improved by more than 100 percent. As we have said, the total amount of memory given to PENL is the same as that given to NL, so it is very clear that PENL is always a better choice when a multiprocessor system or a cluster of workstations is available.
The results of NL with 400000 and 800000 objects and of PENL with 2 processors and 800000 objects are not available because the execution time exceeds the time limit of a job in the SP2 system.
Figure 3: Comparison of execution time of NL and ENL (ratio of ENL to NL execution time, roughly between 0.75 and 0.85, plotted against the object number: 2000, 8000, 32000, 128000).
Figure 4: Table of comparisons of execution time of NL and PENL in seconds (rows: processor number, with 1 corresponding to NL; columns: number of objects 50000, 100000, 200000, 400000, 800000).
Figure 5: Execution time (sec) against number of processors (1 for NL, then 2, 4, 8, ...), with one curve for each object count: 50000, 100000, 200000, 400000, 800000.
Figure 6: Table of comparisons of the different costs (execution, CPU, I/O, communication, synchronization) in NL and PENL with 100000 objects, in seconds, for each processor number.
5.4 Variation of processor number
In this section we will see how the performance of PENL is related to the number of processors.
From
Figure
4 and 5, we can see that the nearly straight lines are dropping steadily almost in
parallel, indicating that the scalability is stable. In all cases, the execution time is almost halved
when the number of processors is doubled, which is near to what our theoretical analysis predicts,
i.e. the execution time is approximately linear to the reciprocal of the number of processors.
Again, the increase in execution time is approximately quadratic to the number of objects, i.e.
the execution time complexity is O(N 2 ). But it should be noted that the execution time is linear
to the dimensionality, thus it is still preferable for those database of high dimensionality.
5.5 Comparison of computation, disk I/O, communication time, synchronization
time
In this section we will look more clearly into the contribution of execution time from the com-
putation, disk I/O, communication and synchronization time in PENL.
Figure 6 shows that over 99 percent of the execution cost comes from computation time. Since PENL distributes the computation operations among all processors nearly evenly, the execution time can be reduced greatly. Further improvement can therefore focus on how to reduce the computation operations.
On the other hand, the disk I/O, communication and synchronization times are much smaller. Their trends are not exactly as predicted by our theoretical analysis. The disk I/O time increases very slowly with the number of processors because there is more reading and writing of counts as the block size becomes smaller and the number of blocks loaded and transferred becomes larger (when a new block comes, the counts of the old block are written and the counts of the new block are read). The sum of the number of pages accessed by all processors should be about the same no matter how many processors are being used, but the total number of pages accessed for the counts increases with the number of processors, as the counts are read and written more times. Thus the disk I/O time increases a little. The communication time and synchronization time depend strongly on the state of the system at the moment of execution, e.g. the bandwidth and condition of the communication network.
6 Discussion and Related Works
The NL algorithm is a straightforward method to mine outliers in a database. The proposed ENL reduces both the computation and the disk I/O costs. Furthermore, the algorithm PENL is proposed to parallelize ENL. The analysis shows that if the total buffer size in the system is fixed, then the computation cost is linear to the reciprocal of the number of processors, which is verified by our performance studies. The great improvement is caused by the nearly even distribution of computation operations among all processors. Our performance studies further indicate that over 99 percent of the execution time comes from the computation, so the execution time is also linear to the reciprocal of the number of processors. These results show that PENL is very efficient compared with NL and ENL, and further improvement can focus on how to reduce the computation operations. Since other costs such as the communication time are very minor, a low-cost cluster of workstations with commodity processors, interconnected by a low-cost communication network, can be chosen as the platform for running PENL, rather than a much more expensive supercomputer. A cluster is also much cheaper and easier to build, maintain and upgrade, while achieving performance similar to what NL has on a single high-performance processor system.
Breunig, et al. introduced a definition of a new kind of outliers (density-based outliers) and
investigate its applicability [4]. Their heuristic can identify meaningful local outliers that the
notion of distance-based outliers cannot find. The first step of computation of LOF (local outlier
factor) is the materialization of the MinPtsUB-nearest neighborhoods (pages 102-103 of [4]).
Modification can be made in our parallel algorithm to perform that step, which is also
the most expensive step in computation of LOF. Instead of updating the count of objects in
D-neighborhood of each object, now each node stores the temporary MinPtsUB-nearest neighborhood
of each object. The final MinPtsUB-nearest neighborhood of each object is obtained by
combining all the temporary MinPtsUB-nearest neighborhoods of that object calculated by all
nodes. We can choose to parallelize NL algorithm instead of using PENL algorithm for simplifying
the implementation and reducing the disk storage space for temporary MinPtsUB-nearest
neighborhoods. In that case, each node stores the MinPtsUB-nearest neighborhoods of objects
in the block that stays in the first array only. The only difference is the reading order of blocks
and the increase in the number of block I/Os and computations (by almost doubling).
Similarity search in high-dimensional vector space using the VA-File method outperforms
other methods known [16]. Detection of outliers based on the VA-File is an approach different
from the approaches of nested-loop or cell-structure. We will take that (using VA-File) into
consideration in our future work.
--R
"Mining association rules between sets of items in large databases,"
Outliers in Statistical Data
"Scientific Computing on Bulk Synchronous Parallel Architetures,"
"LOF: Identifying Density-Based Local Outliers,"
"Trade Index Numbers"
"A density-based algorithm for discovering clusters in large spatial databases with noise,"
"Knowledge discovery in databases: An attribute-oriented approach,"
of Outliers
"Parallel Algorithm for Mining Outliers in Large Database,"
On digital money and card technologies
"Finding aggregate proximity relationships and commonalities in spatial data mining,"
"A unified notion of outliers: Properties and computation,"
"Algorithms for Mining Distance-Based Outliers in Large Datasets,"
"Efficient and effective clustering methods for spatial data mining,"
"A Quantitative Analysis and Performance Study for Similarity-Search Methods in Hifh-Dimensional Spaces,"
"BIRCH: An efficient data clustering method for very large databases,"
--TR
Mining association rules between sets of items in large databases
Finding Aggregate Proximity Relationships and Commonalities in Spatial Data Mining
A Quantitative Analysis and Performance Study for Similarity-Search Methods in High-Dimensional Spaces
Algorithms for Mining Distance-Based Outliers in Large Datasets
Knowledge Discovery in Databases
Efficient and Effective Clustering Methods for Spatial Data Mining
A unified approach for mining outliers
On Digital Money and Card Technologies | outlier detection;data mining;parallel algorithm |
607278 | Simple 8-Designs with Small Parameters. | We show the existence of simple 8-(31,10,93) and 8-(31,10,100) designs. For each value of we show 3 designs in full detail. The designs are constructed with a prescribed group of automorphisms using the method of Kramer and Mesner KramerMesner76. They are the first 8-designs with small parameters which are known explicitly. We do not yet know if PSL(3,5) is the full group of automorphisms of the given designs. There are altogether 138 designs with designs with PSL(3,5) as a group of automorphisms. We prove that they are all pairwise non-isomorphic. For this purpose, a brief account on the intersection numbers of these designs is given. The proof is done in two different ways. At first, a quite general group theoretic observation shows that there are no isomorphisms. In a second approach we use the block intersection types as invariants, they classify the designs completely. | Introduction
In this paper, t-designs with prescribed automorphism group are constructed. The
method was introduced by Kramer and Mesner in [8]. We choose as group PSL(3; 5)
and construct 8-(31; 10; ) designs with two different values of . We get 1658
designs with designs with questions immediately
arise:
1. Are the designs all distinct, i.e. pairwise non-isomorphic, or, if not, which of
them form a transversal of the isomorphism classes?
2. What is the full group of automorphisms of each of the designs?
3. Are there more designs for other values of λ?
4. Are there more designs with a possibly smaller group of automorphisms?
In the following sections, we will answer question 1 twice and question 3 partly.
Problem 2 would be easily solved if it were known that PSL(3; 5) is a maximal
subgroup of S 31 . Note that this fact would imply that 1 is true. Indeed, we will
show in Section 7 that designs with the same automorphism group are isomorphic
if and only if they are isomorphic under the normalizer of this group.
The plan of this paper is the following: In Sections 2 and 3, the method of Kramer
and Mesner is briefly sketched. We will give a list of all orbits of the group on 10-
subsets which is needed to describe the designs. In Section 4, we recall basic facts
about parameters of designs and about intersection numbers. We introduce the
equations of Mendelsohn and Kohler. Moreover, we define intersection numbers of
higher order and list the relevant generalizations of the parameter equations due to
Tran van Trung, Qiu-rong Wu and Dale M. Mesner. We also define global intersection
numbers and use the generalized equations to provide means for checking
them.
The following two Sections 5 and 6 are devoted to the 8-(31; 10; 100) and 8-
respectively. For each of these cases, the parameter equations
are shown. As the numbers involved tend to become quite large in some cases, this
can be of great help avoiding tedious hand-calculations. In fact, all these calculations
were done by a computer using long-integer arithmetic. For each value of λ, 3 designs are listed in full detail. They should serve as examples. The interested reader may reconstruct the full set of designs using our program, which is freely available on the internet. The numbering of designs is imposed by the order in which the solutions are computed by the equation solver (this program is deterministic, so that the order is always the same). See [17] for a
more detailed treatment on solving large equation systems with integral coefficients.
Finally, in Section 7 the two announced proofs of Problem 1 are given. The first
applies group theoretic methods together with some (small) computer calculations.
The second is a more combinatorial one. It uses intersection numbers as invariants
to show that no two designs are isomorphic.
Problem 4 is beyond the scope of this paper.
2. The Group and its Orbits
We denote the elements of the field GF(5) by 0, 1, 2, 3, 4. The elements of the projective geometry PG_2(5) can be identified with the one-dimensional subspaces of GF(5)^3. We number them 1, ..., 31 using representatives (a, b, c)^t for the one-dimensional subspace <(a, b, c)^t> generated by scalar multiples; the explicit numbering table is not reproduced here.
The group A = PSL(3, 5), represented as a permutation group on PG_2(5), is generated by a small set of permutations (omitted here). The group is of order 372000.
We are now going to construct t-(v, k, λ) designs on the set V = PG_2(5) of 31 points and with A contained in their automorphism groups. Thus the parameter v is 31 and we want to construct 8-designs, i.e. t = 8. Moreover we take k = 10 and leave λ open; in fact our method of construction shows that λ = 93 and λ = 100 are fine.
Consider a putative design which has A as a subgroup of its automorphism
group. In this case, the set B of blocks decomposes into a collection of
full orbits of A on k-sets. In order to describe the design, we only need to know
which orbits (among the total set of orbits of A on the set of all k-subsets of the set V of points) belong to the design. Therefore, we label the A-orbits and refer to these numbers later on. In our case, we compute the orbits of PSL(3, 5) on i-subsets for all i which are less than or equal to 10. Table 1 shows the number of orbits.
Table 1. Number of orbits of PSL(3, 5) on i-subsets of PG_2(5), for i = 1, ..., 10 (in particular, there are 42 orbits on 8-subsets and 174 orbits on 10-subsets; the remaining entries are not reproduced here).
The following table shows all 10-orbits of A on V . The stabilizer order is indicated
by a subscript. The orbit length is the index of the stabilizer in A. We give the lexicographically
minimal representative within each orbit. This list of representatives
is not lexicographically ordered, due to the fact that we do not generate orbits via orderly
generation. Instead of orderly generation, we use an algorithm Leiterspiel [12]
(snakes and ladders) to provide orbit representatives and further knowledge needed
for the evaluation of Kramer-Mesner matrices (see below). As the representatives all start with a run 1, 2, 3, ... of consecutive numbers, only the last of these numbers is shown; the first part is replaced by the symbol '. . .'. So, a set displayed with a leading '. . .' is completed by the corresponding run of consecutive numbers.
10-orbits: the list of the 174 lexicographically minimal orbit representatives (with stabilizer orders as subscripts) is not reproduced here.
3. Orbit Selection
The designs are constructed using the Kramer-Mesner matrix M^A_{t,k}, which consists in our case of 42 rows and 174 columns (cf. Table 1). The entry m_ij is the number of k-subsets in the j-th orbit of A on k-subsets containing the representative of the i-th orbit on t-subsets. Hence, the {0,1}-solutions of the Diophantine system of equations
M^A_{t,k} x = λ (1, . . . , 1)^T   (2)
are exactly the possible ways of choosing suitable block orbits (the chosen columns) which fulfil all the conditions of a t-(v, k, λ) design admitting the prescribed automorphism group A. Namely, such a solution is a collection of group orbits on k-sets such that each representative T of a t-orbit is contained in exactly λ k-sets from all the chosen k-orbits.
This system was completely solved by the LLL-based algorithm as described
in [17]. There are exactly 138 solutions for λ = 93 and 1658 solutions for λ = 100; no solutions exist for other values of λ <= 126 for this system of equations.
The enumeration of all solutions is a backtracking-algorithm over the integral
linear combinations of the LLL-reduced basis-vectors of the corresponding Kramer-
Mesner system. In order to speed up the search one can parallelize the algorithm
as described in [6]. Nevertheless, the designs presented here were found with the
sequential version of the program within a few hours.
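A toy version of the Kramer-Mesner construction can be written in a few lines of Python. This is only a schematic illustration (it enumerates orbits by brute force and solves the 0/1 system by exhaustive search, which is hopeless at the 42 x 174 size treated in this paper, where the LLL-based techniques of [17] are needed); the group is passed as an explicit list of permutations.

    from itertools import combinations, product

    def orbits(group, subsets):
        """Partition the given subsets into orbits under a permutation group."""
        seen, orbs = set(), []
        for s in subsets:
            if s in seen:
                continue
            orb = {tuple(sorted(g[i] for i in s)) for g in group}
            seen |= orb
            orbs.append(sorted(orb))
        return orbs

    def kramer_mesner_matrix(group, v, t, k):
        t_orbits = orbits(group, [tuple(c) for c in combinations(range(v), t)])
        k_orbits = orbits(group, [tuple(c) for c in combinations(range(v), k)])
        T_reps = [orb[0] for orb in t_orbits]
        M = [[sum(1 for K in k_orb if set(T) <= set(K)) for k_orb in k_orbits]
             for T in T_reps]
        return M, t_orbits, k_orbits

    def solve_01(M, lam):
        """Exhaustive search for 0/1 solutions of M x = lam * 1 (tiny cases only)."""
        rows, cols = len(M), len(M[0])
        for x in product((0, 1), repeat=cols):
            if all(sum(M[i][j] * x[j] for j in range(cols)) == lam for i in range(rows)):
                yield x

    # tiny example: the cyclic group C_7 acting on 7 points, searching 2-(7,3,1) designs
    group = [[(i + s) % 7 for i in range(7)] for s in range(7)]
    M, t_orbs, k_orbs = kramer_mesner_matrix(group, v=7, t=2, k=3)
    for x in solve_01(M, lam=1):
        blocks = [B for j, orb in enumerate(k_orbs) if x[j] for B in orb]
        print("design with", len(blocks), "blocks:", blocks)
        break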
4. Intersection Numbers of Designs
In this section, we recall some basic facts about parameters and intersection numbers
of designs. We will make use of intersection numbers in Section 7 when proving
the fact that the 8-designs with PSL(3; 5) are pairwise non-isomorphic. Intersection
numbers have a long history in design theory, early results were obtained by
Mendelsohn [10] and Stanton and Sprott [14]. They can be generalized to higher t,
we will show them soon. The equations of Kohler [7] support the evaluation of
these formulae. We will also speak about generalized intersection numbers, which
already appeared in [10]. Recent progress was made by Tran van Trung, Qiu-rong
Wu and Dale M. Mesner [16].
be a simple t-(v; k; ) design on the set of points V with
g be the blocks with
Fix disjoint subsets I and J of V with t. Define
r the number of blocks which contain a given point and
Ray-Chaudhuri and Wilson proved in [11] that these numbers i;j are independent
of the choice of the sets I and J (depending only on their cardinalities i and j).
They can be computed by the following formula
The following recursion holds for
This is the same recursion as in the well-known Pascal-triangle of binomial coeffi-
cients. Here, one also speaks of the intersection triangle of the design. For sake of
simplicity, put i := i;0 .
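The λ_{i,j} and the intersection triangle are easy to tabulate; the following short Python function (an illustration, not code from the paper) evaluates the formula above with exact integer arithmetic and reproduces, e.g., λ_0 = b = 17530500 and λ_1 = r = 5655000 for the 8-(31, 10, 100) designs.

    from math import comb

    def lambda_ij(v, k, t, lam, i, j):
        """Number of blocks of a t-(v,k,lam) design containing a fixed i-set
        and avoiding a disjoint fixed j-set (requires i + j <= t)."""
        num = lam * comb(v - i - j, k - i)
        den = comb(v - t, k - t)
        assert num % den == 0, "parameters are not admissible"
        return num // den

    def intersection_triangle(v, k, t, lam):
        """Rows with i + j = s of the intersection triangle; the entries also
        satisfy the Pascal-type recursion lambda_{i,j} = lambda_{i+1,j} + lambda_{i,j+1}."""
        return [[lambda_ij(v, k, t, lam, i, s - i) for i in range(s, -1, -1)]
                for s in range(t + 1)]

    v, k, t, lam = 31, 10, 8, 100
    print([lambda_ij(v, k, t, lam, i, 0) for i in range(t + 1)])
    # [17530500, 5655000, 1696500, 468000, 117000, 26000, 5000, 800, 100]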
For an arbitrary fixed m-subset M of V we define, for 0 <= i <= min(m, k),
α_i(M) := the number of blocks B of D with |M ∩ B| = i,
the i-th intersection number of M with D. The reference to the set M will sometimes be omitted. It should then be clear from the context which set M we are referring to.
Again, let M be an arbitrary m-subset of V. Fix an integer i with 0 <= i <= t. Counting the set of pairs (T, B) with T a subset of M ∩ B and |T| = i in two different ways, one arrives at the equations of Mendelsohn [10]:
sum_{j>=i} C(j, i) α_j(M) = C(m, i) λ_i   (7)
(for all 0 <= i <= t). Writing down the system of equations we obtain a rectangularly shaped matrix with integral coefficients formed by the binomial numbers. In its first min(t+1, m) columns, this matrix is upper triangular with all diagonal entries equal to 1. In the case m > t we have some additional columns corresponding to the intersection numbers α_{t+1}(M), . . . , α_{min(m,k)}(M).
Of particular importance for our applications are the block intersection numbers. Here, M is chosen to be a block B_0 of the design itself (and thus m = k). In this case, α_k(B_0) is always equal to 1 since we allow only simple designs. The equations of Mendelsohn read as
sum_{j=i}^{k} C(j, i) α_j(B_0) = C(k, i) λ_i,   0 <= i <= t.
We remark the following fact for the case m > t. Assume we know the intersection numbers α_{t+1}(M), . . . , α_{min(m,k)}(M) (late intersection numbers). Then, since the coefficient matrix is upper triangular and has ones on the main diagonal, one can easily compute the remaining numbers α_0(M), . . . , α_t(M) (early intersection numbers). Kohler gives explicit equations for the early intersection numbers. In [7], he proves that
α_i(M) = sum_{j=i}^{t} (-1)^{j-i} C(j, i) C(m, j) λ_j + (-1)^{t+1-i} sum_{j=t+1}^{min(m,k)} C(j-i-1, t-i) C(j, i) α_j(M)   (9)
for 0 <= i <= t. The terms early and late intersection numbers should not be mixed up with intersection numbers of higher order which will be introduced in the sequel.
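As a quick illustration (not from the paper), the early intersection numbers can be recovered from the late ones in Python via the formula above; for a block B_0 of an 8-(31, 10, 100) design, only α_9(B_0) and α_10(B_0) = 1 need to be supplied.

    from math import comb

    def early_from_late(v, k, t, lam, m, late):
        """Compute alpha_0(M), ..., alpha_t(M) from the late intersection numbers
        late = [alpha_{t+1}(M), ..., alpha_{min(m,k)}(M)] using the formula of Kohler."""
        lam_i = lambda i: lam * comb(v - i, k - i) // comb(v - t, k - t)
        hi = min(m, k)
        alpha = dict(zip(range(t + 1, hi + 1), late))
        early = []
        for i in range(t + 1):
            s1 = sum((-1) ** (j - i) * comb(j, i) * comb(m, j) * lam_i(j)
                     for j in range(i, t + 1))
            s2 = sum(comb(j - i - 1, t - i) * comb(j, i) * alpha[j]
                     for j in range(t + 1, hi + 1))
            early.append(s1 + (-1) ** (t + 1 - i) * s2)
        return early

    # full intersection type of a block with alpha_9 = 80 in an 8-(31,10,100) design
    print(early_from_late(31, 10, 8, 100, m=10, late=[80, 1]))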
For any B_0 in B, the vector (α_0(B_0), . . . , α_k(B_0)) is called the block intersection type of B_0 (in the design D). The equations of Kohler show that only the essential block intersection numbers are needed, that is (α_{t+1}(B_0), . . . , α_k(B_0)); we call this vector the essential block intersection type.
Clearly, block intersection types are constant on orbits of the automorphism group. So, when computing designs as orbits of some automorphism (sub-)group, we need only specify block intersection types for each of our orbit representatives. We will do so later when we specify the 8-(31, 10, λ) designs as sets of orbits.
Let now B_1, . . . , B_d be the representing sets for the A-orbits of blocks in the design (not to be mixed up with all A-orbits as in Section 2), and let K^A_h be the corresponding orbit under the group A, for 1 <= h <= d. For 0 <= i < k we define the global intersection number α_i(D) as the number of unordered pairs of distinct blocks of D that intersect in exactly i points. By adding up the intersection types of all blocks of the design one gets the following formula - we count all intersections twice, therefore the factor 1/2:
α_i(D) = (1/2) sum_{h=1}^{d} |K^A_h| α_i(B_h).   (11)
The vector (α_0(D), . . . , α_{k-1}(D)) is the global intersection type of pairs of blocks of the design. Clearly,
sum_{i=0}^{k-1} α_i(D) = C(b, 2),   (12)
but we will find more equations for global block intersection types in the following. In order to achieve this, let us introduce intersection numbers of higher order (already introduced by Mendelsohn [10]).
For an arbitrary fixed m-subset M of V and b >= s >= 1 define, for 0 <= i <= min(m, k),
α_i^(s)(M) := the number of s-subsets {B_1, . . . , B_s} of pairwise distinct blocks with |M ∩ B_1 ∩ · · · ∩ B_s| = i,
the i-th intersection number of order s of M with D. In the case when s = 1 this reduces to ordinary intersection numbers. If s is at least two and if m is at least k, α_k^(s)(M) = 0, as we have excluded designs with repeated blocks.
It can be shown (see Tran van Trung, Qiu-rong Wu, Dale M. Mesner [16]) that the equations of Mendelsohn can be generalized in the following way (for an arbitrary m-subset M of V, b >= s >= 1 and 0 <= i <= t):
sum_{j=i}^{min(m,k)} C(j, i) α_j^(s)(M) = C(m, i) C(λ_{i,0}, s).   (14)
The following generalization of Kohler's equations has also been proved in [16]. Again, let M be an (arbitrary) m-subset of V and let 0 <= i <= t. Then, for each s with b >= s >= 1,
α_i^(s)(M) = sum_{j=i}^{t} (-1)^{j-i} C(j, i) C(m, j) C(λ_j, s) + (-1)^{t+1-i} sum_{j=t+1}^{min(m,k)} C(j-i-1, t-i) C(j, i) α_j^(s)(M).   (15)
For s = 1 these reduce to (9), i.e. the equations for ordinary intersection numbers. Again, we see that only the essential block intersection numbers (of higher order) need to be specified (for M = B a block of the design).
Global intersection numbers of order s of the design D can be defined in the following way:
α_i^(s)(D) := the number of s-subsets {B_1, . . . , B_s} of pairwise distinct blocks of D with |B_1 ∩ · · · ∩ B_s| = i.
Clearly, in the case s = 2 we get back the values α_i(D) which we already know. Again, global intersection numbers can be computed by cumulating intersections over all block orbits.
These numbers can be checked in the following way: Choose M = V and apply (14). This gives, for 0 <= i <= t,
sum_{j=i}^{k} C(j, i) α_j^(s)(D) = C(v, i) C(λ_{i,0}, s).
To see this, one verifies that α_j^(s)(V) = α_j^(s)(D) by definition. We stopped the summation on the left after the k-th coefficient since clearly α_j^(s)(V) = 0 for j > k. In the case when s = 2 and i = 0 we get back Equation (12) - recall that α_k^(2)(D) = 0. Applying the generalized Kohler equations (15) with M = V, we are able to compute α_0^(s)(D), . . . , α_t^(s)(D) from (α_{t+1}^(s)(D), . . . , α_{k-1}^(s)(D)). The latter vector is called the essential global block intersection type of order s of the design. For s > 1, α_k^(s)(D) vanishes.
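The check just described is easy to automate. The sketch below (illustrative only; the orbit sizes and representative intersection types are hypothetical input data, and only the s = 2 case of (11) and of the check equation is implemented) verifies a vector of global pair-intersection numbers against the right-hand sides C(v, i) C(λ_i, 2).

    from math import comb

    def lam(v, k, t, l, i):
        return l * comb(v - i, k - i) // comb(v - t, k - t)

    def global_pair_numbers(orbit_sizes, alpha_types):
        """Equation (11) for s = 2: alpha_i(D) = 1/2 * sum_h |K_h| * alpha_i(B_h).
        alpha_types[h] is the full intersection type (alpha_0(B_h), ..., alpha_k(B_h))."""
        k = len(alpha_types[0]) - 1
        return [sum(sz * at[i] for sz, at in zip(orbit_sizes, alpha_types)) // 2
                for i in range(k)]          # i = k is excluded: blocks are distinct

    def check(v, k, t, l, alpha_D):
        """Verify sum_j C(j,i) alpha_j(D) = C(v,i) * C(lambda_i, 2) for 0 <= i <= t."""
        ok = True
        for i in range(t + 1):
            lhs = sum(comb(j, i) * alpha_D[j] for j in range(i, k))
            rhs = comb(v, i) * comb(lam(v, k, t, l, i), 2)
            ok &= (lhs == rhs)
        return ok

For the designs of this paper one would feed in the 174 orbit sizes of the chosen block orbits and the intersection types of their representatives.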
5. 8-(31; 10; 100) Designs
5.1. Parameters and Intersection Equations
The intersection triangle of the λ_{i,j} for these parameters is determined by its first column λ_i = λ_{i,0}: λ_0 = b = 17530500, λ_1 = r = 5655000, λ_2 = 1696500, λ_3 = 468000, λ_4 = 117000, λ_5 = 26000, λ_6 = 5000, λ_7 = 800, λ_8 = λ = 100, together with the recursion λ_{i,j} = λ_{i+1,j} + λ_{i,j+1}.
The following values are helpful for the verification of some of the intersection numbers: the binomial coefficients C(λ_i, 2) and C(λ_i, 3).
The System (7) of Mendelsohn for an arbitrary M ⊆ V of size 10, and the Equations (9) of Kohler, are obtained by specializing m = 10, t = 8 and the λ_i above. These equations are important in particular if M is a block B_0 of the design. In this case α_10(B_0) = 1 and the essential block intersection type consists of just one number, namely α_9(B_0). If we consider the generalized Mendelsohn Systems (14), only the right hand side differs from the case s = 1. For the 8-(31,10,100) designs, we get the following vectors for s = 2 and s = 3:
30; 140215; 072989; 385000
266; 928655; 539000
Next, we evaluate the generalized Kohler Equations (15). Choosing
using the equalities ff
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
For
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
5.2. The Designs
We display 3 out of the 1658 designs for 100. The designs are collections of full
orbits from the list of 10-orbits of the group. Here, we list only the orbit numbers
using the labelling of orbits of Section 2.
66, 67, 68, 70, 72, 76, 77, 78, 79, 84, 87, 88, 91, 92, 94, 96, 100, 102, 105, 106, 108, 113,
114, 117, 118, 120, 121, 124, 126, 136, 139, 141, 143, 145, 147, 148, 149, 151, 153, 156,
157, 160, 164, 165, 171, 173.
Block intersection types:
ff 9 (B h
ff 9 (B h
The following table shows the global intersection numbers of all 2-sets of blocks
(all 3-sets of blocks). The column sums are
respectively. The values
of these tables contain a lot of redundancy. According to (15), only ff (2)
9 (D) and
ff (3)
9 (D) really matters. The other values follow. In fact, all the numbers in these
tables have been computed from the orbit data. So, verification of (15) via (24)
and (25) really is a good test for the correctness of our algorithms to compute
intersection numbers.
6 4,353419,605500 13800,712024,071000
9 699,336750 1699,063500
The number ff (2)
9 (D) can be computed according to (11) as the following sum.
The pairs of numbers of the form 'a \Theta b' give the multiplicity (a) together with the
intersection number ff 9 (b). The greatest common divisor of all the multiplicities is
taken out of the sum.
\Theta 73+30 \Theta 75+78 \Theta 76+72 \Theta 77+128 \Theta 78+
112, 113, 120, 126, 130, 131, 132, 134, 141, 142, 144, 145, 146, 148, 149, 151, 153, 156,
157, 160, 164, 170, 171, 173.
Block intersection types:
ff 9 (B h
ff 9 (B h
ff 9 (B h
Global intersections:
6 4,353200,869500 13800,710711,655000
9 701,940750 1714,687500
74+24 \Theta 75+36 \Theta 76+132 \Theta 77+116 \Theta 78+84 \Theta 79+
216 \Theta 80+112 \Theta 81+138 \Theta 82+123 \Theta 83+68 \Theta 84+12 \Theta 85+4 \Theta 87+12 \Theta 88+12 \Theta 89)
117, 119, 120, 125, 128, 129, 130, 132, 133, 134, 140, 141, 143, 144, 146, 148, 149, 153,
157, 160, 163, 164, 170, 171, 173.
Block intersection types:
ff 9 (B h
68, 99, 106, 111, 144, 148, 163g, ff 9 (B h
for
f164g.
Global intersections:
6 4,353521,161500 13800,712695,903000
9 698,127750 1691,065500
73+12 \Theta 74+60 \Theta 75+72 \Theta 76+114 \Theta 77+106 \Theta 78+
129 \Theta 79+252 \Theta 80+164 \Theta 81+48 \Theta 82+24 \Theta 83+48 \Theta 84+42 \Theta 85+42 \Theta 86+6 \Theta 87)
6. 8-(31, 10, 93) Designs
6.1. Parameters and Intersection Equations
Again, we list 3 of the designs, now with λ = 93. We have λ_0 = b = 16303365, λ_1 = r = 5259150, λ_2 = 1577745, λ_3 = 435240, λ_4 = 108810, λ_5 = 24180, λ_6 = 4650, λ_7 = 744, λ_8 = λ = 93. Some useful values are again the binomial coefficients C(λ_i, 2) and C(λ_i, 3). The system of Mendelsohn and the equations of Kohler are obtained exactly as in Section 5.1, with these λ_i. The generalized Mendelsohn Systems (14) have the following right hand side (for s = 2 and s = 3):
94716; 711180
The generalized Kohler equations applied to
are (the ff
(D)-terms with are left out):
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (2)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
ff (3)
9 (D)
6.2. The Designs
57, 60, 64, 65, 72, 75, 76, 81, 83, 84, 85, 91, 92, 94, 96, 98, 103, 105, 107, 109, 113, 114,
116, 120, 124, 125, 126, 128, 131, 132, 136, 138, 139, 141, 147, 148, 149, 150, 152, 159,
Block intersection types:
ff 9 (B h
for
for
f2g.
Global intersections:
6 3,765061,999125 11100,433974,021000
9 603,084075 1347,012000
1200 \Theta 70+11400 \Theta 71+22800 \Theta 72+4800 \Theta 73+17400 \Theta 74+ 15800 \Theta 75+ 7200 \Theta
57, 60, 63, 69, 70, 72, 75, 78, 80, 84, 85, 90, 94, 95, 98, 100, 101, 103, 104, 105, 110, 116,
117, 121, 122, 125, 128, 130, 134, 135, 137, 138, 139, 143, 147, 148, 149, 152, 156, 159,
163, 167, 169, 170, 172.
Block intersection types:
ff 9 (B h
for
ff 9 (B h
78, 101, 105, 110g, ff 9 (B h
57, 69, 134, 137g, ff 9 (B h
ff 9 (B h f13g.
Global intersections:
6 3,764718,271125 11100,431120,037000
9 607,176075 1380,988000
3000 \Theta 71+14400 \Theta 72+13200 \Theta 73+22800 \Theta 74+12000 \Theta 75+ 7200 \Theta 76+ 4800 \Theta
55, 58, 60, 62, 64, 66, 71, 75, 76, 77, 78, 81, 85, 88, 90, 94, 97, 98, 105, 109, 111, 113, 116,
118, 119, 120, 125, 126, 127, 131, 132, 133, 136, 138, 140, 145, 146, 149, 151, 152, 159,
167, 169, 172.
Block intersection types: ff 9 (B h
ff 9 (B h
ff 9 (B h
ff 9 (B h
105, 113, 169g, ff 9 (B h
for h 2 f2g.
Global intersections:
6 3,765140,119125 11100,433336,041000
9 602,154075 1354,607000
(3 \Theta 10+100 \Theta 48+400 \Theta 60+200 \Theta 66+600 \Theta 67+2400 \Theta
21000 \Theta
7. Isomorphism Problems
This section addresses Problem 1 of Section 1. We answer the question posed there
by showing that all designs are non-isomorphic. This claim is proved in two different
ways.
7.1. First Proof
General group theoretic tools quite often suffice to solve the isomorphism problem
for designs constructed by the Kramer-Mesner method. This approach was already
partly used in [12], [13], and we first briefly report the basic idea from those papers. Let S_V be the full symmetric group on the underlying point set V. The following lemma is useful when constructing objects with a prescribed automorphism group.
Lemma 1 Let D_1 and D_2 be designs with a group A as their (full) group of automorphisms. Assume that g in S_V maps D_1 onto D_2. Then g belongs to the normalizer of A in S_V.
If the prescribed group of automorphisms A is a maximal subgroup of S V different
from the alternating group then all designs found are pairwise non-isomorphic.
If A is not a maximal subgroup, one can apply a Moebius inversion on the subgroup lattice to single out those designs having A as their full automorphism group and then form the N_{S_V}(A)-orbits on the set of these designs. These orbits, all of length |N_{S_V}(A)/A|, are just the different isomorphism types.
A severe drawback of this approach is that it relies on the knowledge of the set of
groups containing A in SV : Often the information on overgroups can be obtained in
some way from the classification of the finite simple groups. We want to show here
that in important cases we can avoid this laborious task by a localization technique.
We regard A as a guess for the automorphism group of the designs constructed.
A good guess might at least find a correct Sylow subgroup of the automorphism
group. Then the following holds.
Lemma 2 Let a finite group G act on a set Ω, let ω_1, ω_2 ∈ Ω be fixed by a p-subgroup P of G, and let g ∈ G be such that ω_1^g = ω_2. Let P_2 be a Sylow p-subgroup of N_G(ω_2) containing P. Since P^g fixes ω_2, there is some x ∈ N_G(ω_2) such that P^{gx} <= P_2, by the Sylow Theorem. Then gx still maps ω_1 onto ω_2.
If the prescribed subgroup A of the automorphism group of the objects that are
searched for contains a Sylow subgroup P of all the automorphism groups of the
objects then only elements from NG (P ) have to be applied to the objects as possible
isomorphisms. In our applications to t-designs it is often possible to show that no
design exists if the proposed subgroup P is extended to a larger p-group. Then the
assumptions of the lemma are of course fulfilled.
There is a problem if A is not normalized by N_G(P): then, usually, the set of fixed points of A is not closed under N_G(P), so one cannot just form the orbits of N_G(P) in order to solve the isomorphism problem.
Lemma 3 Let G be a finite group acting on a
set\Omega and A G: Let A contain
a Sylow subgroup P of all designs admitting A as a group of automorphisms. If
acts on the set of fixed points
Fix\Omega (A) of A
in\Omega . If
Fix\Omega (A) and g 2 H with NH (A)g
Figure 1. Subgroups of Lemma 3 (a subgroup lattice diagram involving A, G, N_H(A) and N_G(P)).
Suppose that D_1 and D_2 are two designs admitting A as an automorphism group. If A is a large subgroup of the full symmetric group S_n then it happens very often that ⟨A, A^g⟩ contains the alternating group A_n. If this situation appears for each relevant g, then the orbits of N_H(A) on Fix_Ω(A) are the different isomorphism types appearing in Fix_Ω(A). If in addition these orbits are trivial, all designs admitting A as an automorphism group are pairwise non-isomorphic. Algorithmically, only representatives from the cosets N_H(A)g in H have to be considered in forming ⟨A, A^g⟩.
A remarkable feature of this approach is that the individual designs are not
touched upon. So, the isomorphism problem may be solved without knowing details
like orbit representatives etc. of the designs.
To solve the isomorphism problem for the 8-designs of this paper we use the fact that A = PSL(3, 5) contains a Sylow 31-subgroup P of S_31 (cf. Figure 2).
Figure 2. Special situation for PSL(3, 5) (subgroup diagram).
We choose P to be a Sylow 31-subgroup of A, generated by an element of order 31. The normalizer of P in the full symmetric group is the holomorph of P, i.e. the semidirect product of P with its automorphism group, of order 31 * 30 = 930. This normalizer is not contained in A, but |N_{S_31}(P) ∩ A| = 93. This intersection has 10 right cosets in N_{S_31}(P). Representatives of these cosets are given by the powers of a suitable element c of N_{S_31}(P).
For i = 1, . . . , 9 we compute ⟨A, A^{c^i}⟩ and in each case obtain a group containing the alternating group A_31; a design which has A_31 as a group of automorphisms must be the complete design. Thus, by the above theory, all designs obtained as solutions of the Kramer-Mesner system for λ = 93 and λ = 100 are pairwise non-isomorphic.
7.2. Second Proof
The second proof of the fact that all designs are non-isomorphic is done using the
intersection numbers of Section 4.
In a first step, the global intersection number α_9^(2)(D) is used in order to distinguish between the designs. Clearly, two designs which have different intersection numbers are non-isomorphic.
Coming back to Sections 5 and 6, we find
α_9^(2)(D_1) = 699,336750
α_9^(2)(D_2) = 701,940750
α_9^(2)(D_3) = 698,127750
for the designs with λ = 100 and
α_9^(2)(D_1) = 603,084075
α_9^(2)(D_2) = 607,176075
α_9^(2)(D_3) = 602,154075
for those with λ = 93. These numbers are all distinct (which is fine for our pur-
poses!) but for the whole set of designs, there are coincidences.
For the 138 designs with λ = 93 we get 84 different values of α_9^(2)(D), in the range from 591,366075 to 611,268075.
The following table shows the classes of designs with λ = 93, sorted according to the value of α_9^(2)(D). For each value, the indices i of the designs D_i are given.
593; 226075 for f110g
594; 342075 for f95g
595; 830075 for f111g
596; 853075 for f87g
597; 039075 for f102g
597; 225075 for f107g
597; 318075 for f23; 128g
597; 504075 for f5; 35g
597; 597075 for f15; 46g
597; 969075 for f8g
598; 248075 for f126g
598; 341075 for f14g
598; 434075 for f118; 132g
598; 527075 for f96g
598; 806075 for f79g
598; 899075 for f30; 70; 112g
598; 992075 for f97; 100g
599; 085075 for f48g
599; 643075 for f49g
599; 829075 for f44; 119g
599; 922075 for f64g
602; 433075 for f13; 109g
602; 712075 for f34g
608; 199075 for f114g
610; 152075 for f78g
610; 803075 for f58g
Let us make some statistics first. The class sizes are distributed in the following way: 48 classes have size 1 (the counts for the larger class sizes are not reproduced here). The average class size is 138/84 ≈ 1.643; the variance is Var ≈ 0.86 and the standard deviation is about 0.93.
A better choice for an invariant is the multiset of all block intersection types of a design. So, one starts with Equation (11) and collects equal terms of α_9(B_h) (in the case of the 8-designs). This leads to an additive decomposition of α_9^(2)(D) which is a much finer invariant.
For example, a class of designs with equal α_9^(2)(D) has the following different types of block intersections (sorted lexicographically by the coefficients of the terms):
7200 \Theta 76
(3 \Theta 10+400 \Theta 63+1200 \Theta 64+600 \Theta 67+2400 \Theta 68+
3600 \Theta 69+6000 \Theta 70+6000 \Theta 71+12950 \Theta 72+8400 \Theta 73+14400 \Theta 74+11400 \Theta 75+
17100 \Theta 76+12000 \Theta 77+5600 \Theta 78+1200 \Theta 79+600 \Theta 80+700 \Theta 84+600 \Theta 85+30\Theta90)
(3 \Theta 10+200 \Theta 57+1800 \Theta 64+600 \Theta 67+6000 \Theta 68+
400 \Theta 69+4200 \Theta 70+4800 \Theta 71+15000 \Theta 72+10800 \Theta 73+16200 \Theta 74+13200 \Theta 75+
11400 \Theta 77
30+100 \Theta 48+600 \Theta 64+1800 \Theta 65+800 \Theta 66+
900 \Theta 68+3600 \Theta 69+6000 \Theta 70+3600 \Theta 71+13600 \Theta 72+10800 \Theta 73+13200 \Theta 74+
17000 \Theta
15000 \Theta
As a matter of fact, all designs (for λ = 93) can be distinguished using this invariant. The major drawback of using the α_9^(3)(D) for classification purposes is simple: these numbers are quite hard to compute because lots of intersections are involved.
For sake of completeness, we list the orbit indices of the remaining 5 designs (D 1
has already been shown in Section 6):
51, 55, 60, 61, 64, 66, 68, 72, 75, 79, 84, 85, 86, 90, 96, 98, 100, 105, 107, 108, 109,
111, 114, 117, 120, 127, 128, 130, 131, 133, 134, 135, 137, 138, 141, 142, 144, 149,
151, 152, 154, 159, 167, 169, 170, 172.
55, 56, 58, 60, 64, 66, 71, 72, 75, 76, 77, 81, 90, 92, 94, 95, 98, 100, 101, 102, 103,
52, 55, 56, 58, 59, 64, 67, 68, 72, 78, 80, 82, 83, 86, 91, 93, 96, 100, 101, 107, 112,
113, 114, 115, 116, 117, 119, 120, 124, 125, 126, 132, 135, 136, 147, 149, 152, 153,
156, 159, 162, 165, 167, 170, 171, 172.
55, 58, 60, 61, 62, 64, 67, 70, 72, 78, 79, 81, 83, 85, 86, 88, 92, 97, 98, 100, 103, 107,
111, 115, 117, 118, 119, 120, 121, 127, 131, 133, 134, 137, 139, 143, 149, 150, 151,
152, 155, 159, 162, 165, 167, 170, 172.
58, 60, 64, 66, 67, 71, 72, 73, 76, 77, 80, 82, 83, 87, 88, 100, 101, 104, 105, 109, 112,
117, 120, 121, 122, 127, 128, 130, 133, 137, 140, 141, 144, 147, 148, 149, 150, 151,
152, 154, 155, 156, 159, 165, 167, 169, 172.
In the case of the 1658 designs of type 8-(31, 10, 100) we get 219 different values of α_9^(2)(D) in the range from 688,455750 to 716,169750. The distribution of class sizes begins with 38, 20, 15, 15, 14, 13, 6, 8, 11, 7 classes of the smallest sizes. The average class size is 1658/219 ≈ 7.57. The largest class of designs is of size 25: one gets a common value of α_9^(2)(D) for the designs with indices 1127, 1208, 1288, 1299, 1426, 1459, 1507, 1545, and 1585, among others.
As remarked above, in all cases the use of block intersection numbers makes it possible to distinguish between the designs.
8. Acknowledgement
The first author would like to express his thanks to the Deutsche Forschungsgemeinschaft, which supported him under grant Ke 201/17-1.
--R
Design theory.
The discovery of simple 7-designs with automorphism group P \GammaL(2
Simple 6 and 7-designs on 19 to 33 points
Some simple 7-designs
Allgemeine Schnittzahlen in t-designs
Intersection numbers of t-designs
Block intersections in balanced incomplete block designs.
Combinatorial configurations: designs
High order intersection numbers of t-designs
Finding simple t-designs with enumeration techniques
--TR
Design theory
Combinatorial configurations, designs, codes, graphs
The Discovery of Simple 7-Designs with Automorphism Group PTL (2, 32)
{0, 1}-Solutions of Integer Linear Equation Systems
--CTR
Anton Betten , Reinhard Laue , Alfred Wassermann, A Steiner 5-Design on 36 Points, Designs, Codes and Cryptography, v.17 n.1-3, p.181-186, Sept. 1999
Reinhard Laue, Solving isomorphism problems for t-designs, DESIGNS 2002: Further computational and constructive design theory, Kluwer Academic Publishers, Norwell, MA,
Johannes Grabmeier , Erich Kaltofen , Volker Weispfenning, Cited References, Computer algebra handbook, Springer-Verlag New York, Inc., New York, NY,
Johannes Grabmeier , Erich Kaltofen , Volker Weispfenning, Cited References, Computer algebra handbook, Springer-Verlag New York, Inc., New York, NY, | kramer-mesner method;isomorphism problem;group action;t-design;intersection number |
607292 | Jacobi Polynomials, Type II Codes, and Designs. | Jacobi polynomials were introduced by Ozeki in analogy with Jacobi forms of lattices. They are useful to compute coset weight enumerators, and weight enumerators of children. We determine them in most interesting cases in length at most 32, and in some cases in length 72. We use them to construct group divisible designs, packing designs, covering designs, and (t,r)-designs in the sense of Calderbank-Delsarte. A major tool is invariant theory of finite groups, in particular simultaneous invariants in the sense of Schur, polarization, and bivariate Molien series. A combinatorial interpretation of the Aronhold polarization operator is given. New rank parameters for spaces of coset weight distributions and Jacobi polynomials are introduced and studied here. | Introduction
While the use of invariants of finite groups to study weight enumerators of self-dual codes has a long and
distinguished history [22, Chap.19], they were introduced only recently in the study of the weight distribution
of cosets of self-dual codes [2, 19]. The polynomial invariant of the code on which a finite group acts by
linear substitutions of the variables is the Jacobi polynomial. Roughly speaking, Jacobi polynomials are
to binary codes what Jacobi forms [13] are to lattices. If the code is self-dual, they are invariant under
the same group that fixes the weight enumerator of the code and contain more information than the coset
weight distribution or outer distribution matrix in Delsarte sense. They are, however, strongly related to
the t-distribution matrix of [7] or outer distribution matrix in a Johnson scheme. New rank parameters for
spaces of Jacobi polynomials (b t ) and the outer distribution of a code (a t ) are introduced in this paper.
Introduced by Ozeki [19] Jacobi polynomials were studied by Bannai and Ozeki [2] by polarization techniques.
A very simple combinatorial interpretation of polarization is given here for codes whose codewords hold
t\Gammadesigns. Since the motivation of these authors was to build large spaces of modular forms they used a
group of order 96: For combinatorial purposes it seems more natural to use the larger group of order 192:
Our motivation is to construct designs of various kinds and in particular group divisible designs when the
Assmus-Mattson theorem cannot yield classical designs which do not exist anyways.
The paper is organized as follows. Section 2 collects definitions and some basic results. Subsections 2.3 and
2.5 are required reading for understanding the rest of the paper. Section 3 derives ( in a different way than
[19]) a MacWilliams formula for Jacobi polynomials thus solving a covering open problem of long standing.
Section 4 is devoted to bivariate Molien series. Section 5 studies invariants and how polarization produces
them. Section 6 studies examples of Jacobi polynomials in lengths 8,16,24,32, and 72.
2 Notations and definitions
2.1 Codes
All codes here are binary linear of length n. By weight w(x) we mean the number of 1's in x, and by Hamming composition the ordered pair (n − w(x), w(x)). A self-dual code is said to be Type II if its weights are multiples of 4, and Type I otherwise. Let A_i stand for the number of codewords of weight i, and A_i(x) for the weight distribution of the coset x + C. The weight of a coset x + C is the smallest i > 0 such that A_i(x) > 0. The covering radius R is the largest weight of a coset. The weight enumerator W_C(x, y) is the generating series
W_C(x, y) = Σ_{c ∈ C} x^{n − w(c)} y^{w(c)}.
The joint weight enumerator J_{A,B} of two codes A and B is the generating series in four variables whose coefficients are the A_{i,j,k,l}, where A_{i,j,k,l} is the number of pairs (a, b) in A × B containing i patterns 10, j patterns 11, k patterns 00, and l patterns 01.
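These enumerators are straightforward to compute by brute force for short codes. The sketch below does so in Python for the [8, 4] extended Hamming code, a standard Type II example used here only for illustration (it is not one of the codes analysed in this paper), and also extracts the covering radius from the coset weight distributions.

```python
from itertools import product

def span(gen_rows):
    """All codewords of the binary code generated by gen_rows (tuples of 0/1)."""
    n = len(gen_rows[0])
    words = set()
    for coeffs in product((0, 1), repeat=len(gen_rows)):
        words.add(tuple(sum(c * r[i] for c, r in zip(coeffs, gen_rows)) % 2 for i in range(n)))
    return sorted(words)

def weight_enumerator(code):
    """A[i] = number of codewords of weight i."""
    n = len(code[0])
    A = [0] * (n + 1)
    for c in code:
        A[sum(c)] += 1
    return A

def coset_weight_distribution(code, x):
    return weight_enumerator([tuple((a + b) % 2 for a, b in zip(c, x)) for c in code])

def covering_radius(code):
    n = len(code[0])
    R = 0
    for x in product((0, 1), repeat=n):
        dist = coset_weight_distribution(code, x)
        R = max(R, min(i for i, a in enumerate(dist) if a > 0))
    return R

# Extended Hamming code e8: self-dual, Type II; W = x^8 + 14 x^4 y^4 + y^8.
G = [(1,1,1,1,0,0,0,0), (0,0,1,1,1,1,0,0), (0,0,0,0,1,1,1,1), (1,0,1,0,1,0,1,0)]
e8 = span(G)
print(weight_enumerator(e8))   # [1, 0, 0, 0, 14, 0, 0, 0, 1]
print(covering_radius(e8))     # 2
```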
2.2 Designs
A design with parameters t − (v, k, ((λ_1, a_1), …, (λ_N, a_N))) is a collection of k-sets, called blocks, of a v-set (the varieties), together with a partition of the set of all t-subsets into N groups, such that every t-set within a group of a_i such t-sets is contained in exactly λ_i blocks of size k.
A t-design with no further precision will mean a design with N = 1.
A packing (resp. covering) design with parameters t − (v, k, λ) is a design in which every t-set is contained in at most (resp. at least) λ blocks [9]. The minimum (resp. maximum) size of a covering (resp. packing) design is denoted by C_λ(v, k, t) (resp. D_λ(v, k, t)).
A group-divisible block design (GDD for short) [5] is a design with blocks of size k and N groups of size g such that pairs in the same group occur in λ_1 blocks and pairs with varieties belonging to two different groups occur in λ_2 blocks. GDD incidence matrices produce distance regular graphs of diameter 3 [5, §1.10]. Observe that a GDD is a 2-design in the above sense, with a_1 = the number of pairs in the same group and a_2 = the number of pairs in two different groups.
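Whether a given block collection is a t-design, a packing, or a covering can be decided by direct counting when the parameters are small. A sketch of such a checker is given below, applied to the weight-4 words of the extended Hamming code, which form the Steiner system S(3, 4, 8), i.e. a 3-(8, 4, 1) design; the example is illustrative and independent of the designs constructed later in the paper.

```python
from itertools import combinations, product

def t_design_profile(points, blocks, t):
    """Set of values lambda_S = #{blocks containing S}, over all t-subsets S."""
    counts = set()
    for S in combinations(points, t):
        counts.add(sum(set(S) <= set(B) for B in blocks))
    return counts

def classify(points, blocks, t, lam):
    counts = t_design_profile(points, blocks, t)
    return {"t-design": counts == {lam},
            "packing": max(counts) <= lam,
            "covering": min(counts) >= lam}

# Supports of the weight-4 words of the [8,4] extended Hamming code.
G = [(1,1,1,1,0,0,0,0), (0,0,1,1,1,1,0,0), (0,0,0,0,1,1,1,1), (1,0,1,0,1,0,1,0)]
code = set()
for coeffs in product((0, 1), repeat=4):
    code.add(tuple(sum(c * r[i] for c, r in zip(coeffs, G)) % 2 for i in range(8)))
blocks = [frozenset(i for i, bit in enumerate(c) if bit) for c in code if sum(c) == 4]

print(classify(range(8), blocks, 3, 1))   # a 3-(8, 4, 1) design, hence also packing and covering
```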
We shall need the following consequence of the Assmus-Mattson theorem.
Theorem 1 If C is an extremal Type II code of length n congruent to 0 (respectively 8, 16) modulo 24 then the vectors of any given weight in C hold a 5-design (respectively a 3-design, a 1-design).
Define the Assmus–Mattson index am(n) of an integer n divisible by 8 as am(n) = 5 − 2s, where n/8 is congruent to s modulo 3 and 0 ≤ s ≤ 2. The above theorem says that an extremal Type II code C of length n is an am(n)-design.
2.3 Enumerators
By the Jacobi polynomial attached to a set T of coordinate places of a code C of length n over F_2 we shall mean the polynomial in four variables
J_{C,T}(w, z, x, y) = Σ_{c ∈ C} w^{m_0(c)} z^{m_1(c)} x^{n_0(c)} y^{n_1(c)},
where T ⊆ [n], (m_0(c), m_1(c)) is the Hamming composition of c on T, and (n_0(c), n_1(c)) is the Hamming composition of c on [n] \ T.
The basic observation is twofold: if T supports a coset leader x_T, then J_{C,T} determines the weight enumerator of the coset x_T + C; and if J_{C,T} is constant for all T such that |T| = t, then the codewords of any given weight hold a t-design, possibly with several groups (one for each possible J_{C,T}).
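A brute-force sketch of this definition: given the codewords and a set T of coordinate places, the function below tabulates the coefficients of J_{C,T} indexed by the exponent vector (m_0, m_1, n_0, n_1); the extended Hamming code again serves as a stand-in example.

```python
from collections import Counter
from itertools import product

def jacobi_polynomial(code, T):
    """Coefficients of J_{C,T}: {(m0, m1, n0, n1): multiplicity}."""
    T = set(T)
    coeffs = Counter()
    for c in code:
        m1 = sum(c[i] for i in T)            # ones on T
        m0 = len(T) - m1                     # zeros on T
        n1 = sum(c) - m1                     # ones off T
        n0 = (len(c) - len(T)) - n1          # zeros off T
        coeffs[(m0, m1, n0, n1)] += 1
    return dict(coeffs)

# Example: the [8,4] extended Hamming code and a 2-set T.
G = [(1,1,1,1,0,0,0,0), (0,0,1,1,1,1,0,0), (0,0,0,0,1,1,1,1), (1,0,1,0,1,0,1,0)]
code = {tuple(sum(c * r[i] for c, r in zip(coeffs, G)) % 2 for i in range(8))
        for coeffs in product((0, 1), repeat=4)}
print(jacobi_polynomial(code, {0, 1}))
```

Because the words of any fixed weight in this code hold 3-designs, the printed dictionary is the same for every 2-set T (indeed for every set of size at most 3), which is exactly the situation described in the observation above.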
Relations with two other polynomial invariants of a code are to be noted.
2.4 Joint weight enumerators
First, the Jacobi polynomial is, up to variables permutation, the joint weight enumerator of C with the
singleton code reduced to the characteristic vector x T of T: Let J A;B denote the joint weight enumerator of
two linear codes A and B. Denote by ! T ? the monodimensional code spanned by x T : The following is
immediate from [22, p.147].
2.5 (t; r)-designs
Second, consider the set of codewords of weight j of C as a collection B j of j \Gammasubsets of an n\Gammaset. Let
the coefficient of w i z t\Gammai x n\Gammaj y j in J C;T . Then the
matrix with generic element
is the t-distribution matrix D t of [7]. A t\Gammaform is a row vector which is sent to a constant vector by
left multiplication by D t : A (t; r) design in the sense of [7] is a collection B j such that the space of t\Gammaforms
has dimension t is shown in [7] that the rank of D is either r of r + 1). It is shown in [7] that a
t\Gammadesign with design and conversely.
(C) denote the dimension of the real vector space of Jacobi polynomials J C;T with T ranging over
t-sets. Any upper bound on b t (C) yields an upper bound on r uniform in j: For instance that all codewords
of given weight hold a t\Gammadesign entails b (the converse is false as can be seen for e 8 and
that all codewords of given weight hold a (t; 1)\Gammadesign. In the case of C of type II a trivial
upper bound on b t (C) is the coefficient of u n\Gammat v t in the Molien series defined below.
A trivial lower bound is the rank a t
(C) say, of the
bound is not applicable (NA in the tables below) for t ? R: A well-known result due to Delsarte is that
R
a t
where s 0 is the number of nonzero dual weights. Recently there was some work on J C;T with jT
[8, 6, 7, 18] but little on jT 1: In this article we will show how to construct GDD in some of
these two cases, but, as examples of C for will show, there is little hope of a general upper bound
on b am(n)+1 even for the restricted class of extremal codes .
3 A MacWilliams formula for Jacobi Polynomials
We give an independent derivation of the MacWilliams relation for Jacobi polynomials of [19].
Theorem 2 (Ozeki) Let C be a binary linear code. Then J_{C⊥,T}(w, z, x, y) = (1/|C|) · J_{C,T}(w + z, w − z, x + y, x − y).
Proof:Follows by Lemma 1 and the MacWilliams relation for the ordinary and joint [22, p.148] weight
enumerator. 2
Observe that this essentially solves Open Problem 15 of [10, xIX].
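The identity is easy to verify mechanically on a small self-dual example, for which C⊥ = C. The sympy sketch below checks the relation J_{C⊥,T}(w, z, x, y) = |C|⁻¹ · J_{C,T}(w + z, w − z, x + y, x − y), taken here in its standard form, for the extended Hamming code and a 2-set T.

```python
from itertools import product
import sympy as sp

w, z, x, y = sp.symbols("w z x y")

def codewords(gen_rows):
    n = len(gen_rows[0])
    return {tuple(sum(c * r[i] for c, r in zip(coeffs, gen_rows)) % 2 for i in range(n))
            for coeffs in product((0, 1), repeat=len(gen_rows))}

def jacobi(code, T):
    T = set(T)
    n = len(next(iter(code)))
    J = sp.Integer(0)
    for c in code:
        m1 = sum(c[i] for i in T); m0 = len(T) - m1
        n1 = sum(c) - m1;          n0 = n - len(T) - n1
        J += w**m0 * z**m1 * x**n0 * y**n1
    return sp.expand(J)

G = [(1,1,1,1,0,0,0,0), (0,0,1,1,1,1,0,0), (0,0,0,0,1,1,1,1), (1,0,1,0,1,0,1,0)]
C = codewords(G)                        # self-dual, so the dual code equals C
J = jacobi(C, {0, 1})
transformed = sp.expand(J.subs({w: w + z, z: w - z, x: x + y, y: x - y},
                               simultaneous=True) / len(C))
print(sp.expand(J - transformed) == 0)  # True
```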
4 Molien Series
It has been known since Gleason's address to the International Congress of Mathematics of Nice 1970 [14] that the weight enumerator of a Type II binary code is left invariant by a group G_2 of order 192 generated by the two 2 × 2 matrices
(1/√2) · [[1, 1], [1, −1]]   and   [[1, 0], [0, i]],
corresponding respectively to the MacWilliams transform and to the mod 4 congruence condition. It is a simple exercise, using Theorem 1, to show that, for any T, the Jacobi polynomial is invariant under the same group acting in the same way on each pair of variables. This is therefore a simultaneous invariant in the sense of Issai Schur [24] for the diagonal action of G_2. There is a bivariate Molien series that enumerates invariant polynomials by their homogeneous degrees in w, z and in x, y. It can be shown [25, equ. (13)] that
f(u, v) = (1/|G_2|) Σ_{h ∈ G_2} 1 / (det(I − u·h) · det(I − v·h)),
where det(h) stands for the determinant of the matrix h. For instance, in the case of the group G_2, a Magma
computation yields an expression for f(u; v) whose denominator factors as d(u)d(v) with
or, more suggestively
A Taylor decomposition yields after reordering terms of degree 40
6
terms of degree
5
terms of degree 24,
6
5
terms of degree 16,
terms of degree 8
and the constant term
Assume an Hironaka decomposition [24] of the algebra of invariants I of the type
where the j i s are so-called secondary invariants and where P is the algebra of primary invariants (or
homogeneous system of parameters) of the type
Then the bivariate Molien series can be written as
P a
In that expression dm (:) (resp. dn (:)) denote the degree in the first (resp. second) set of two variables. In
the case at hand we have primary invariants and a = 192 secondary invariants. One may take as
h.s.o.p. the system
(1)
(2)
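The group-theoretic facts quoted in this section can be double-checked numerically. The sketch below closes the group generated by the two standard generators, (1/√2)·[[1, 1], [1, −1]] and diag(1, i), confirms that its order is 192, and verifies that the weight enumerator x^8 + 14 x^4 y^4 + y^8 of the extended Hamming code is fixed by both generators; this is an independent sanity check, not a reproduction of the Magma computation mentioned above.

```python
import numpy as np
import sympy as sp

# Generators of G2 (assumed standard form): MacWilliams matrix and diag(1, i).
M1 = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
M2 = np.array([[1, 0], [0, 1j]], dtype=complex)

def close_group(generators, tol=1e-6):
    """Numerically close a finite matrix group under multiplication."""
    def key(M):
        v = np.concatenate([M.real.ravel(), M.imag.ravel()])
        return tuple(np.round(v / tol).astype(np.int64))
    elements = {key(np.eye(2, dtype=complex)): np.eye(2, dtype=complex)}
    frontier = list(elements.values())
    while frontier:
        fresh = []
        for A in frontier:
            for B in generators:
                P = A @ B
                if key(P) not in elements:
                    elements[key(P)] = P
                    fresh.append(P)
        frontier = fresh
    return list(elements.values())

print(len(close_group([M1, M2])))      # 192

# Invariance of the e8 weight enumerator under both generators.
x, y = sp.symbols("x y")
W = x**8 + 14 * x**4 * y**4 + y**8
gens = (sp.Matrix([[1, 1], [1, -1]]) / sp.sqrt(2), sp.Matrix([[1, 0], [0, sp.I]]))
for M in gens:
    xs, ys = M[0, 0] * x + M[0, 1] * y, M[1, 0] * x + M[1, 1] * y
    print(sp.expand(W.subs({x: xs, y: ys}, simultaneous=True) - W) == 0)   # True
```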
5 Invariants
In principle some simultaneous invariants for a group G can be computed from usual invariants of G via the Aronhold polarization operator A [24], which we now describe. Let P'_x (resp. P'_y) denote the partial derivative with respect to the variable x (resp. y). Let P(x, y) denote a polynomial in two variables. Define the polarization operator A as
(A P)(w, z, x, y) = w · P'_x(x, y) + z · P'_y(x, y).
The following basic lemma can be found in a much more general form in [24].
Lemma. If P is an invariant for G_2 then A P is a simultaneous invariant for G_2. The map P ↦ A P is injective, the inverse map being given, for P homogeneous of degree n, by P(x, y) = (1/n) · (A P)(x, y, x, y).
Proof: For every complex scalar η the quantity P(x + η w, y + η z) is a simultaneous invariant of G_2. A Taylor expansion yields
P(x + η w, y + η z) = P(x, y) + η · (A P)(w, z, x, y) + O(η²).
Each coefficient of this expansion in powers of η is a simultaneous invariant. Setting w = x and z = y in the preceding expansion and identifying coefficients of η on both sides yields the inversion formula. 2
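A small sympy sketch of the operator and its inversion, assuming the form A P = w·∂P/∂x + z·∂P/∂y displayed above: for a homogeneous P of degree n, setting w = x and z = y in A P returns n·P by Euler's identity, which recovers P.

```python
import sympy as sp

w, z, x, y = sp.symbols("w z x y")

def polarize(P):
    """Aronhold polarization (assumed form): A P = w * dP/dx + z * dP/dy."""
    return w * sp.diff(P, x) + z * sp.diff(P, y)

def unpolarize(Q, degree):
    """Inverse map for homogeneous P of the given degree: (1/deg) * Q(x, y, x, y)."""
    return sp.expand(Q.subs({w: x, z: y}, simultaneous=True) / degree)

# Example: the weight enumerator of the extended Hamming code, degree 8.
P = x**8 + 14 * x**4 * y**4 + y**8
Q = polarize(P)
print(sp.expand(unpolarize(Q, 8) - P) == 0)   # True: P is recovered from A P
```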
There is a simple combinatorial interpretation of the polarization of the weight enumerator of a code. The code obtained from C by puncturing (resp. shortening) at coordinate place i will be denoted by C − i (resp. C/i). We shall denote by C + i the coset of C/i in C − i. We shall say that a code is homogeneous if the codewords of every given weight hold a 1-design, and more generally t-homogeneous if the codewords of every given weight hold a t-design. In that situation, the Jacobi polynomial J_{C,T} does not depend on T for |T| ≤ t, and for convenience this common value is denoted J_{C,t}, t = |T|.
Theorem 3 For every binary code C and every coordinate place i we get
Σ_{i=1}^{n} J_{C,{i}} = A W_C.
If C contains no word of weight 1 we have
J_{C,{i}} = w · W_{C/i}(x, y) + z · W_{C+i}(x, y).
If C is 1-homogeneous then
J_{C,{i}} = (1/n) · A W_C.
Proof: The first and second assertions are restatements of the definition of the Jacobi polynomial and of the polarization operator. The third assertion follows by noticing that, in the second assertion, for a homogeneous code the polynomials W_{C/i} and W_{C+i} are independent of i. It also follows from [22, Pb. (37) p.233], which is a re-statement of Prange's theorem [21, Th. 80] in terms of generating series. 2
The generalization for all t is a bit more cumbersome to write down but no more difficult. We leave the
proof to the reader.
Theorem 4 If C is t\Gammahomogeneous and contains no word of weight - t then
A useful corollary concerns children C of a self-dual code D obtained by subtraction, i.e. taking all
codewords whose value on two given coordinates is 00 or 11 and puncturing at those places.
Corollary 1 If D is a self-dual t\Gammahomogeneous binary code with t - 2 then the weight enumerator of C is
As an application we recover the weight enumerators of both the extremal [70; 35; 14] [11, 18] extremal
Type I code and the elusive shadow extremal [68; 34; 12] code obtained [11] by subtraction from the putative
extremal [72; 36; 16] Type II code, whose Jacobi polynomials are calculated in x6.5. They are, respectively,
and
5.1 When polarization fails
The question arises: how many invariants in w; z; x; y are polarizations of invariants in x; y? The answer is
given in terms of generating series. Let D denote the bivariate generating series enumerating by bidegree the
invariants in four variables that cannot be obtained by successive polarizations of two-variable invariants.
Proposition denote the Molien series of G
Proof:By the injectivity of the polarization operation every term m j u j in the Taylor series of M (u)
yields geometrically as m j
The Taylor series up to degree 40 can be written in decresing degree order as:
In this case we use the Reynolds operator which is defined as in [22, p.609]. Let M 2 G 2 . Denote its
action on P (w; z; x; y) as M ffi P: With these notations we define the Reynolds operator as
The following result is a special case of [22, Th.4,p.609].
Proposition 2 The polynomial R ffi P is a simultaneous invariant of G
An example is given in x5.1. It was shown by EmmyNoether in 1916 that all invariants can be constructed
in that way [26, Th. 2.1.4].
6 Examples
Let w 8 and g 24 denote the weight enumerator of the extended Hamming and Golay codes of length 8 and
24: These codes hold respectively 3\Gamma and 5\Gamma designs. Let J 8;t and J 24;m denote for t - 3 and m - 5 their
Jacobi we corresponding to coset leaders of weight t and m. Similarly for the code E 16 of [20] let J 16;i be
the Jacobi polynomial of index i for 2: We denote by f [a] the homogeneous part of degree a of the
Molien Series f:
6.1 Length 8
This corresponds to J 8;s with 3: Alternatively this corresponds to A s w 8 with 3: As the
coefficient of w s y 8\Gammas in J 8;s should be 1; and the coefficient of u n\Gammas v s in the Molien series f(u; v) is 1 we
see that for
This also follows from Theorem 1. Specifically we get:
To actually compute a basis of the invariant space, we use the Reynolds operator acting on (wzxy) 2 which
gives, up to scalar multiple
r := (w 4
Let T be of size 4: Then we know that J C;T is a combination of r and AJ 8;3 . Two cases occur:
1. If T supports a codeword, then it can be seen combinatorially (minimum distance 4 and so on) that
2. If T does not support a codeword then J r)=4 as the term in x 4 z 4 must vanish
6.2 Length
This counts
The calculation of J 16;1 is consistent with [1, p.361, Table I].
Taking the linear combination
8;1 )=3 we obtain
which yields after the substitution x the coset weight enumerator
Similarly the linear combination
8;1 )=21 yields the Jacobi polynomial
which corresponds to the coset weight distribution
Combining these two equations (accounting for all the
2-designs
with parameters
This gives packing and covering designs
In fact these are indeed GDD's with parameters
because the 8 pairs in the same coset are disjoint, the minimum distance being 4:
The space of Jacobi polynomials J d16 ;T with jT may be generated by the two polynomials J 1631 =42 A 3 w
which gives the packing and covering designs
The space of Jacobi polynomials J d16 ;T with jT may be generated by the following three polynomials
z
which gives the packing and covering designs
d
d
6.3 Length 24
This counts respectively
Observe that
This is consistent with the Pascal triangle for the Witt design on 24 points [22, p.68].
In this paragraph, we call B 6 the basis of invariants for jT
The space of Jacobi polynomials JG24;T with jT generated by two polynomials. If T is not contained in
an octad, then it can be seen combinatorially (using that J C;T has the following decomposition
relatively to B 6
If we call J 2461 this polynomial, we have
Note that the coefficient of wz 5 x 15 y 3 is 6, which corresponds to the number of possible 5\Gammasets in T .
If T is contained in an octad, then J with the following decomposition
yielding
z 6 y
Note that this polynomial contains the monomial z 6 x which corresponds to an octad of the Golay.
From these polynomials, we obtain the packing and covering designs
Golay
Golay
Golay
Golay
Golay 5 NA 1 5
Golay 6 NA 2 6
6.4 Length
We focus our study on The Reed-Muller code of length 32. Observe that
From
Table
I in [9] it seems that the second order Reed Muller code in length would yield a 4-design
with 2 classes. Furthermore since the second type of coset contains exactly 8 vectors of weight 4 any such
coset define a partition of [32] into disjoint (by Hamming distance quadruples.
Consider the following two polynomials belonging to the invariant space
z 4 x
and
which correspond respectively to the two cosets of weight 4
and
14336 y 14 x
We obtain by this way a 4-design with parameters
and four other designs with parameters
From the packing and covering point of view
Note that the first covering design is the record owner in [15]. Here we know that b 4 is indeed 2. A basis of
the invariant space is given by J
The denotations of the codes and the information on a t are from [9]. CP means an extremal Type II
code of length 32.
6.5 Length 72
It is still an open problem to know if there exist a [72,36,16] binary type II code. However, its weight
enumerator can be computed by using invariant theory [17]. By theorem 1, the vectors of any given weight
in the code hold a 5-design. Theorem 4 then gives the Jacobi polynomials for jT
We have:
4397342400 x 44 y 28
4397342400 x 28 y 44
9223731055 zx 28 y 43
2119532800 wzx 43 y
43719104 w 3 zx 21 y
43719104 wz 3 x
30888000 z 5 x 44 y
z 5 y
28 y
Acknowledgements
P. Sol'e thanks Christine Bachoc and Eiichi Bannai for helpful discussions, and Michio Ozeki for sending him
[19]. A. Bonnecaze and P. Sol'e thank Allan Steel for programming help in Magma [3, 4].
--R
On the covering radius of extremal self-dual codes
Construction of Jacobi forms from certain combinatorial polynomials
The Magma algebra system I: The user language.
Magma: A new computer algebra system.
of Discr.
Extending the t
A strengthening of the Assmus-Mattson theorem
Cosets weight enumerators of the extremal self-dual binary codes of length
New Extremal Self-dual codes of Length 68
Contemporary Design Theory: a collection of surveys Wiley
The theory of Jacobi forms
Actes Congr'es International de Math'ematiques Nice
New Constructions for Covering Designs
A coding theoretic approach to extending designs
An upper bound for self-dual codes
On self-dual doubly even extremal codes
On the notion of Jacobi polynomials for codes
A classification of self-orthogonal codes over GF (2)
Introduction to the theory of error correcting codes
The theory of error-correcting codes
On the classification and enumeration of self-dual codes
Invariants of Finite Groups and their Applications to Combinatorics
Algorithms in Invariant Theory
--TR
--CTR
Christine Bachoc, On Harmonic Weight Enumerators of Binary Codes, Designs, Codes and Cryptography, v.18 n.1-3, p.11-28, December 1999
Y. Choie , P. Sol, A Gleason formula for Ozeki polynomials, Journal of Combinatorial Theory Series A, v.98 n.1, p.60-73, April 2002 | group divisible designs;type II codes;jacobi polynomials;invariant theory;packing and covering designs |
607301 | On Harmonic Weight Enumerators of Binary Codes. | We define some new polynomials associated to a linear binary code and a harmonic function of degree k. The case k=0 is the usual weight enumerator of the code. When divided by (xy)^k, they satisfy a MacWilliams type equality. When applied to certain harmonic functions constructed from Hahn polynomials, they can compute some information on the intersection numbers of the code. As an application, we classify the extremal even formally self-dual codes of length 12. | Introduction
In the theory of lattices, some modular forms play a special role, the so-called
theta series with spherical coe-cients. They are generalizations of the theta
series of the lattice which counts the number of vectors of given norm; they
are a powerful tool for the study of the spherical codes supported by the
vectors of an even unimodular lattice, as shown in [24], and also provide some
knowledge on the values of the scalar product of the vectors of the lattice
with a given vector of the Euclidean space (i.e. on the so-called Jacobi
theta series of the lattice). For example, they have allowed B. Venkov to
settle \a priori" the list of the possible root systems of an even unimodular
24-dimensional lattice [7, Chapter 18]. See [1] for a generalization of these
methods to non-unimodular lattices.
Inspired by the analogy pointed out in [8], [9] between the theory of
combinatorial and euclidean designs and their connection in both cases with
harmonic spaces, we dene here analogues of these for linear binary codes.
More precisely, we associate to a binary code C and a harmonic function f
of degree k in the sense of [8], a polynomial W C;f (x; y), which, when divided
by (xy) k , behaves, up to a sign, like the usual weight enumerator WC (x; y)
under the MacWilliams transform. In particular, when C is a doubly even
self-dual code, we get a whole set of polynomials which are relative invariants
under the usual group G 1 of 2 2-matrices of order 192 generated byp1 1
and ( 1 0
In the case of an even formally self-dual code, the group
to be considered is the subgroup G 2 generated by 1
and 1 0
, and
the polynomials (xy) k (W C;f W C ? ;f ) are relative invariants for G 2 .
In both cases, these results can be used to derive some information on
the way a given t-set T meets the codewords. In particular, we give another
proof of the fact that the words of xed weight in an extremal code (resp.
and its dual in the case formally self-dual) support \t 1
"-designs, as shown
in [6], [16].
More generally, we can derive some \invariant linear forms" in the sense
of [5] on the so-called intersection numbers:
n_{w,i}(T) := #{ c ∈ C : wt(c) = w and |supp(c) ∩ T| = i }   (1)
not only in the case when jT and one has t-designs, but also for all
value of through the explicit description of the space of relative
invariant polynomials in which (xy) k (W C;f W C ? ;f ) falls. Therefore, we
specialize to certain harmonic functions associated to T , which
have the property that H k;T (u) only depends on t, juj, and ju \T j; they are
constructed from Hahn polynomials. As an example and application, we
derive a classication of the extremal even formally self-dual codes of length
12. This classication has been extended in [11], [12], where intersection
numbers play an important role.
Another method is used in [14] to derive analogous results. It involves
some other kinds of polynomials, the so-called overlap and covering polyno-
mials, which are closely connected to Ozeki's Jacobi polynomials([18]). The
author is grateful to one referee for pointing out this reference.
This paper is organized in the following way: Section 1 contains the
needed denitions and properties of harmonic functions and binary linear
codes. Section 2 contains the denition of harmonic weight enumerators
and the proof of the MacWilliams-type formula (Theorem 2.1). The consequences
on the invariance properties of these polynomials in the cases of
doubly even self-dual codes and of even formally self-dual codes are stated
in Corollaries 2.1 and 2.2. Section 3 gathers the needed results of invariant
theory. In Section 4, we reprove Assmus-Mattson theorem and Calderbank-
Delsarte strengthening of it for doubly even self-dual codes. Section 5 explains
the method based on Hahn polynomials used to compute the intersection
numbers, and Section 6 contains the classication of the extremal
even formally self-dual codes of length 12 (Theorem 6.1).
We now recall some denitions and properties of discrete harmonic func-
tions, which are developed in [8].
Let
ng be a nite set (which will be the set of coordinates
of the code C) and let X be the set of its subsets, while, for all
k is the set of its k-subsets. We denote by RX, RX k the free real vector
spaces spanned by respectively the elements of X, X k . An element of RX k
is denoted by
f(z)z (2)
and is identied with the real-valued function on X k given by z ! f(z).
The complementary set of z is denoted by z.
Such an element f 2 RX k can be extended to an element ~
setting, for all u 2 X,
~
(In the notations of [8], the restriction of ~
f to RX n is dened to be (f ).)
We may later on denote again ~
f by f . If an element g 2 RX is equal to
some ~
f , for f 2 RX k , we say that g has degree k. The dierentiation
is
the operator dened by linearity from
for all z 2 X k and for all is the kernel of
Harm k := Ker(
Concerning codes, we take the following notations: we freely identify
words of F n
2 and subsets of
the weight of an element u 2 F n
2 is also
the cardinality of its support and is denoted by wt(u) or juj. We recall
some basic notions of coding theory, for which we refer to [17], [22]; we only
consider linear codes. The weight enumerator WC (x; y) of a binary code C
is
where A i is the number of codewords of weight i and satises the MacWilliams
A code C is said to be formally self-dual if It is even if
doubly even if wt(u) 0 mod 4 for all
codes are even and formally self-dual, while the converse
is not true; see [16], [22] for examples. If a formally self-dual code is in
addition doubly even, then it is necessarily self-dual. From the facts that
the polynomial WC is invariant under the group G 1 in the self-dual doubly
even case (resp. under G 2 in the even formally self-dual case), one deduces
the inequalities for the minimal weight d(C) of C: d(C) 4([n=24]
(respectively meeting these bounds is said to
be extremal; its weight enumerator is then uniquely determined.
Harmonic weight enumerators
In this section, we dene the harmonic weight enumerators associated to a
binary linear code C and prove a MacWilliams type equality.
Denition 2.1 Let C be a binary code of length n and let f 2 Harm k . The
harmonic weight enumerator associated to C and f is
~
Theorem 2.1 Let WC;f (x; y) be the harmonic weight enumerator associated
to the code C and the harmonic function f of degree k. Then
where Z C;f is a homogeneous polynomial of degree n 2k, and satises
x y
Proof. Like in the classical case of MacWilliams formula for weight enu-
merators, the proof relies on Poisson summation formula, which we recall
here:
Theorem 2.2 (Poisson summation formula) Let : F n
R be a function
taking its values into a ring R, and let ^
be its Fourier transform,
dened by
(v) :=
Then, for all linear code C F n
(v) (12)
We shall apply Poisson formula to each term of Z C;f , namely to:
Therefore, we compute the Fourier transform of , rst in the case
2.2), and in the general case but for harmonic functions in
Lemma 2.3. In order to prove that the ZC;f are actually polynomials, we
start with a technical lemma on harmonic functions.
Lemma 2.1 Let f 2 Harm k and v 2 F n
. Let
f (i) (v) :=
Then, for all 0 i k, f (i)
~
f(v).
Proof. For all 0 i k 1,
which means (from (4)) that
tz
The evaluation at v 2 F n
is:
tz
The proof then follows by induction on k i, since clearly f
and the previous equality implies
~
We can notice now that, for all u such that wt(u) < k, from denition (3)
of ~
f , ~
0, and from Lemma 2.1, ~
is a polynomial. We now compute the Fourier transform of (see (13)).
Lemma 2.2 Let
Proof.
(v) :=
We can write runs through F n k
and are then reduced to the usual
formula for the Fourier transform of x n k wt(v\z) y wt(v\z) .
We now consider the case of a harmonic function of degree k and prove
Lemma 2.3 Let f 2 Harm k . Then
Proof. Since
f(z)z, and from Lemma 2.2,
To conclude for Lemma 2.3, we need another last lemma:
Lemma 2.4 Let f 2 Harm k . Then, for all v 2 F n
Proof. Let B s be the coe-cient of x k s y s in this polynomial. We must
show that B
We sum over with the notations of Lemma 2.1, it is equal
to
j+l=s
l
j+l=s
l
i;j;l
j+l=s
j;l;t;r
j+l=s
where the last equality is the computation of the coe-cient of x s in the
specialization of
Theorem 2.1 now follows from Lemma 2.3 and the Poisson summation formula
(12).
In the special case of doubly even self-dual codes, an immediate consequence
of Theorem 2.1 is that the polynomials Z C;f are relative invariants
for the group G 2 . This result is stated in Corollary 2.1, and an analogous
result for even formally self-dual codes is stated in Corollary 2.2.
We take the following notations:
We consider the group G 1 =< together with the characters k
dened by:
and the group G 2 =< together with the characters
dened
by
Corollary 2.1 If C is a self-dual, doubly even code of length n, for all
, the polynomial Z C;f (x; y) satises
Z C;f (A(x;
for all matrix A 2 G 1 .
Corollary 2.2 If C is an even formally self-dual code of length n, for all
, the polynomials Z C;f Z C ? ;f satisfy
for all matrix A 2 G 2 .
3 Some invariant theory
We gather here some well-known results of invariant theory that will be
of further use. We denote by C [x the polynomial algebra in n
variables, together with the left action of the algebra M n (C ) of nn complex
matrices given by (M:P
the transposition).
If G is a subgroup of M n (C ), we denote by IG the algebra of invariants
of G, namely
If is a character of G, the space of relative invariants with respect to
is
I G; = fP
It is clearly a module over IG . In view of our situation, we need to compute
I
, for the characters k dened in (15). It is well-known to be, in the
case k 0 mod 4, the polynomial algebra C [P
. The other cases are probably also
very classical, but we recall the result:
Lemma 3.1
I
Proof. The dimension a ;d of (I G; ) d , the homogeneous component of
degree d of (I G; ) is computed by Molien's series:
a ;d X
In the case of the group G 1 , and for the characters k given by (15), we
nd respectively 1=((1 X 8 )(1 X 24 )), X
It is easy to verify that the
polynomials announced in the lemma do belong to the spaces I G1 ; k
; the
result then follows from the equality of the dimensions.
The case of the group G 2 goes the same; we have I
and the I G 2 ;
for the characters
(16), (17), are principal ideals. Clearly these characters only depend on k
mod 2.
Lemma 3.2
I
R 4 I G2 if
4 New proofs of some classical results
In this section, we recover the classical results on t-designs supported by
words of binary linear codes, using the harmonic weight enumerators previously
dened, and the characterization of designs in terms of the harmonic
spaces given in [8]: a set B of blocks is a t-design if and only if
~
for all f 2 Harm k , 1 k t. Hence, the set of words of xed weight in
a code C form a t-design if and only if W C;f (x;
t.
We start with Assmus-Mattson theorem:
Theorem 4.1 (Assmus-Mattson) Let C be a binary code of length n and
distance d, and let C ? be its dual, of distance e. If t d is such that the
number of non zero weights of C ? which are lower or equal to n t, is at
most d t, then the set of codewords of C (respectively C ? ) of xed weight
w form a t-design, for d w n (respectively e w n t).
Proof. Let f 2 Harm k , 1 k t. Write A i;f :=
~
f(u) and
~
f(u). We want to prove that, for all i (rep. i n
Theorem 2.1 translates, in terms of these, into:
kin k
for all j, k j n k, where the P (n 2k)
are the Krawtchouck polynomials
([17, Chap 5]). Since C has distance d, we have A
which leads to d k independent equations in the B i;f , k i n k. By
hypothesis, there are at most d k unknowns and hence the only solution
is trivial . Hence B t.
Now the n 2k
equations in the A i;f , d i n d, using equations (18) applied to C
since k d and the equations are independent, the only solution is trivial.
In the case of extremal doubly even self-dual codes, we can prove the
result directly from the description of the relative invariants of the group G 1 ,
avoiding the use of Krawtchouck polynomials; moreover, the extra property
that the t-designs are \t 1"-designs (which was shown rst by B. Venkov by
means of spherical theta series, then in [6] in a combinatorial setting) follows
easily, and is very similar to the initial proof of B.Venkov [24] concerning
the spherical designs in extremal even unimodular lattices. We recall the
slightly more general denition of the notion of a T -design, for a subset T of
ng: a set B of blocks is called a T -design if and only if
~
0 for all f 2 Harm k and for all k 2 T . Hence a t-design is a
design.
Theorem 4.2 ([5]) Let C be an extremal self-dual doubly even code of
length n.
If n 0 mod 24, the codewords of xed weight in C form a
If n 8 mod 24, the codewords of xed weight in C form a
If n 16 mod 24, the codewords of xed weight in C form a
Proof. Let the extremality of C means that
We prove that WC;f (x; the
other cases being similar. From Theorem 2.1 and Lemma 3.1, for all f 2
Harm k , W C;f (x;
Since the valuation at y of Q, (i.e. the least power of y in Q) is 4(m+1) k 3,
We compute the degree of
this polynomial is non zero, it has degree n 2k
Notice that, if the polynomial Q 0 is determined up to a scalar: it is
proportional to 1 if respectively to P 8 if 8.
Remark 4.1 With the same method, we can recover the results of [16] on
the designs supported by codewords of xed weight in C [C ? , when C is an
extremal even formally self-dual code. We omit the proof.
5 Harmonic weight enumerators and the computation
of Jacobi polynomials
In this section, we show how harmonic weight enumerators can be used to
compute Jacobi polynomials. We rst recall the denition of these: Let C
be a binary code of length n and T ng.
JC;T (v; z; x; y) :=
where, for is the number of coordinates of
u\T (respectively of u\T ) equal to i. They have been introduced by Ozeki
[18] in analogy with Jacobi forms of lattices, and studied by A. Bonnecaze,
P. Sole et al. [2], [3], [4] in the case of type II binary and Z 4 -codes. In
particular they point out the following characterization of codes supporting
designs: the set of codewords of a code C form a t-design for every xed
weight, if and only if the Jacobi polynomial JC;T for a t-set T is independent
of T .
Since we can also characterize this property of a code C by the set of
conditions:
a natural question is: how can one compute JC;T for a t-set T given in (19)
from the set of conditions (20)? The answer lies in the fact that one can
attach to every t-set T some harmonic functions H k;T of degree k, 1 k t;
the values H k;T (u) are expressed in terms of Hahn polynomials, and only
depend on juj and ju \ T j. They are described in [8] as the orthogonal
projection of T 2 RX t over Harm k . In view of our applications, we need to
generalize [8, Theorem 5] to the case of subsets of non equal cardinality. For
the denition and properties of Hahn polynomials, we refer to [15].
Proposition 5.1 [8, Theorem 5] Let T be a t-subset of ng. For all
for all t-set u, where ([15]) Q t
are
orthogonal Hahn polynomials. Then H k;T 2 Harm k .
Proof. In the notations of [8], H k;T
Proposition 5.2 With the same hypothesis, as an element of RX, the
H k;T (u) for all subsets u of ng only depend on
We set H k;T
I
where
Proof. From [8, Theorem 3] applied to
have H k;T
zx
jx \ uj
which leads to the announced formula by setting
Remark: The same argumentation as in [8] applied to H k;T
for juj > t show that they are also linked to Hahn polynomials but for the
parameters (with the notations of [15]): Q k (x; juj n
From (1) and (19), the numbers nw;i (T ) are the coe-cients of JC;T :
On the other hand, the harmonic weight enumerators WC;H k;T
have the
following
WC;H k;T
f
Hence the set of equations (20) for leads for every w to
the following t linear equations in the t
We denote by Cw the set of codewords of weight w. Its cardinality is is
equal to the coe-cient Aw of the weight enumerator of the code C, dened
by (6). Then, another equation, corresponding to the degree 0 case, is:
For all w, the nw;i (T ) are the solutions of the system of equations (29),
(30).
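When the code is small, the intersection numbers can also be tabulated by direct enumeration, which gives a convenient cross-check on the linear system above; the sketch below does this for the extended Hamming code and an arbitrary 4-set T, and applies verbatim to the length-12 codes of the next section once a generator matrix is supplied.

```python
from collections import defaultdict
from itertools import product

def intersection_numbers(code, T):
    """n_{w,i}(T) = number of codewords of weight w meeting T in exactly i positions."""
    T = set(T)
    n = defaultdict(int)
    for c in code:
        w = sum(c)
        i = sum(c[j] for j in T)
        n[(w, i)] += 1
    return dict(n)

G = [(1,1,1,1,0,0,0,0), (0,0,1,1,1,1,0,0), (0,0,0,0,1,1,1,1), (1,0,1,0,1,0,1,0)]
code = {tuple(sum(a * r[k] for a, r in zip(coeffs, G)) % 2 for k in range(8))
        for coeffs in product((0, 1), repeat=4)}
print(intersection_numbers(code, {0, 1, 2, 3}))
```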
Remark: In the cases when the polynomials ZC;H k;T
are invariant polyno-
mials, i.e. in the cases of doubly even self-dual codes or of even formally
self-dual codes, we can more generally get some information on the nw;i (T ),
not only when the codewords support t-designs, in the following way: a condition
of the type Z C;f 2 I G; , joined with the knowledge of d(C), says that
Z C;f sits in a nite-dimensional vector space, which is explicitly described.
Hence this information can be turned into linear equations in the nw;i (T ).
Of course, the smaller this dimension is, the more equations we get, and the
case when the codewords support designs is the 0-dimensional case. The
higher d(C) is, the smaller are these dimensions, the most interesting cases
being the extremal codes. An example of this method is treated in next
section.
6 A classification result
In this section, we classify, with the help of harmonic weight enumerators, the extremal even formally self-dual codes of length 12. These codes have minimum weight 4 and their weight enumerator is
W(x, y) = x^12 + 15 x^8 y^4 + 32 x^6 y^6 + 15 x^4 y^8 + y^12.
There is a unique code which is self-dual; it is the code B_12 with component d_12 of [19], [20]; we find two other codes which are isodual, one of them is described in [22, Chap. 3]. They both appear in [13] as double circulant codes.
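The defining properties used in this classification can be tested mechanically for any candidate generator matrix: even weights only, minimum weight 4, and equality of the weight enumerators of the code and its dual (the latter obtained via the MacWilliams transform). The following sketch implements such a check; the generator matrix to be tested is left as user input.

```python
from itertools import product
from math import comb

def codewords(G):
    n = len(G[0])
    return {tuple(sum(a * r[i] for a, r in zip(coeffs, G)) % 2 for i in range(n))
            for coeffs in product((0, 1), repeat=len(G))}

def weight_distribution(code):
    n = len(next(iter(code)))
    A = [0] * (n + 1)
    for c in code:
        A[sum(c)] += 1
    return A

def dual_weight_distribution(A, size):
    """MacWilliams transform: weight distribution of the dual code."""
    n = len(A) - 1
    def krawtchouk(k, w):
        return sum((-1)**j * comb(w, j) * comb(n - w, k - j) for j in range(k + 1))
    return [sum(A[w] * krawtchouk(k, w) for w in range(n + 1)) // size for k in range(n + 1)]

def is_extremal_even_fsd_12(G):
    C = codewords(G)
    A = weight_distribution(C)
    even = all(A[w] == 0 for w in range(1, 13, 2))
    d4 = A[1] == A[2] == A[3] == 0 and A[4] > 0
    fsd = A == dual_weight_distribution(A, len(C))
    return len(C) == 2**6 and even and d4 and fsd

# Usage: supply a 6 x 12 generator matrix G12 (rows as 0/1 tuples) and call
# is_extremal_even_fsd_12(G12).
```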
First step in this classication result is the computation of the
w;i (T )g 0it (see (1)). We rst show that, if T is a word of
C of weight 4 or 6, there are only two solutions for fnw;i (T ); n
therefore, we use the results of the previous section to derive some equations
satised by these numbers.
Lemma 6.1 Let C be an extremal even formally self-dual code of length
12 and let T 2 C, of weight 4 or 6. There are only two possibilities for
w;i (T )g 0it , which are given in the following tables:
If wt(T
If wt(T
Proof. We rst make some easy remarks: since
odd. Moreover, since the all-one word 1 belongs to C \C ? ,
From (31), we have:
and
4. Some of the entries are easily computed from the hypothesis
on the code C: n 4;3 (T otherwise the sum with T
would be a weight 2 word in C, and clearly n 4;4 (T Taking into account
the equations (32) and (33), we are reduced to the set of six unknowns:
)g. We now consider
the harmonic weight enumerators WC;H k;T
dened in the previous section.
From Corollary 2.2 and Lemma 3.2, ZC;H 1;T
but, since C and C ? have weight 4, ZC;H 1;T
must be a multiple
of (xy) 3 , and hence of Q 8 P 0
8 . This last polynomial is of degree
while ZC;H 1;T
is of degree 10; hence it is zero. A similar discussion
shows that ZC;H 1;T
We derive the
following equations:
which, in terms of our six unknowns, the coe-cients h k;4 (w; i) being computed
from equation (), lead to:
Since we look for positive integral solutions, we see from the rst two
equations of (35) that the only possibilities are n
the two announced solutions. Clearly, n
depending whether T
belongs to C ? or not.
arguments lead to the result.
Theorem 6.1 There are exactly three extremal formally self-dual codes with
even weights of length 12; one is the unique self-dual code B 12 and the two
others are given by the following generator matrices:
Permutation group of order 384
C (2)
C (2)
Permutation group of order 120
Proof. Let C be such a code. Let T 6 2 C be a word of weight 6, not
belonging to C ? . From Lemma 6.1 we know that n
is a unique word u 4 of weight 4 in C ? whose support is contained in T 6 .
belong to C because 2. On the other
hand, since n
we see that each such u 4 is associated to exactly
two weight 6 words of C (we have reversed the roles of C and C ? in Lemma
6.1). Hence the number of weight 6 words in C but not in C ? is at most
there is at least one pair of words of
weight 6 belonging to C \ C ? .
Each of the words of weight 4 in C intersects T in two positions, which
are never the same, otherwise the sum of two such words with 1+T would be
a weight 2 word in C. Hence there is a one-to-one correspondence between
the 15 elements in C 4 and the 2-subsets of T (respectively of 1+T ).
We denote them by u
Let u be a xed weight 4 word in C. Up to permutation, we can assume
that T , u are in the following position:
We assume rst that u 2 C \ C ? . From Lemma 6.1, there are 8 words
meeting u in two positions; since t(u 0
are four possibilities for u \ Assume one of
them appears at least three times, say the rst one and for
Again because t and t are bijections, there is up to permutation only one
possibility:
generates the self-dual code B 12 ([19],[20]). We can
next assume that the eight u 0 reach exactly twice the four possibilities for
Again for the same argument, there is up to permutation only one
possibility:
and now
generates the code C (1)
12 .
The last case to consider is the case when no weight 4 word in C belongs
to C ? . Hence C \ C ? is the 2-dimensional code generated by T and 1.
From Lemma 6.1, we know that eight words of weight 4 in C meet u in one
position. Then, at least one position is reached at least twice, say by
must share another
position outside u. Up to permutation, they are in the following positions:
If a third word u 4 meets u again in the same position as u 2 and u 3 ,
this is also true for the other pairs but then either
or u 2 +u 3 +u 4 +T +1 has weight 2, which is not possible.
Hence each position in u corresponds to a pair of weight 4 words in C
intersecting at that position. From the previous discussion, the sum is a
weight 4 word which is disjoint from u; there are exactly two such words
since n 4;0 and they are necessarily disjoint (if w is one of them, the
other is w corresponds to
and let be such that We have two choices up to
permutation for the common position of u; u can be (on u) either
1000 or 0010. But it is easy to see that the rst one is not possible under
the condition that t, t are bijective and that the second one leads to only
one possibility:
In that case, f1; generate the code C (2)
12 .
Since we nd up to permutation two codes, which are distinguished by
the dimension of C\C ? , and since the dual of an extremal even formally self-dual
code is again an extremal formally self-dual code with even weights,
these codes are necessarily equivalent to their duals. The automorphism
groups have been computed with Magma.
Remark 6.1 By \construction A", these codes construct non-isometric lattices
which are 4-modular and extremal in the sense of H.-G. Quebbemann
[21].
--R
lattices and spherical designs preprint
On error-correcting codes and invariant linear forms SIAM J
A strengthening of the Assmus-Mattson theorem IEEE Trans
Spherical codes and designs Geom.
Overlap and covering polynomials with applications to designs and self-dual codes
The
On designs and formally self-dual codes De- signs
On the notion of Jacobi polynomial for codes Math.
A classi
On the classi
A shadow identity and an application to isoduality Abh.
Handbook of Coding Theory
Even unimodular extremal lattices
--TR
--CTR
Koichi Betsumiya , Masaaki Harada, Classification of Formally Self-Dual Even Codes of Lengths up to 16, Designs, Codes and Cryptography, v.23 n.3, p.325-332, August 2001
Koichi Betsumiya , Masaaki Harada, Binary Optimal Odd Formally Self-Dual Codes, Designs, Codes and Cryptography, v.23 n.1, p.11-22, May 2001
J. E. Fields , P. Gaborit , W. C. Huffman , V. Pless, On the Classification of Extremal Even Formally Self-DualCodes, Designs, Codes and Cryptography, v.18 n.1-3, p.125-148, December 1999
Christine Bachoc , Philippe Gaborit, Designs and self-dual codes with long shadows, Journal of Combinatorial Theory Series A, v.105 n.1, p.15-34, January 2004
Olgica Milenkovic, Support Weight Enumerators and Coset Weight Distributions of Isodual Codes, Designs, Codes and Cryptography, v.35 n.1, p.81-109, April 2005
Kenichiro Tanabe, A Criterion for Designs in
{\tf="P101461" Z}_4
David Masson, Designs and Representation of the Symmetric Group, Designs, Codes and Cryptography, v.28 n.3, p.283-302, April | codes;formally self-dual codes;harmonic functions;weight enumerator |
607562 | Factored Edge-Valued Binary Decision Diagrams. | Factored Edge-Valued Binary Decision Diagrams form an extension to Edge-Valued Binary Decision Diagrams. By associating both an additive and a multiplicative weight with the edges, FEVBDDs can be used to represent a wider range of functions concisely. As a result, the computational complexity for certain operations can be significantly reduced compared to EVBDDs. Additionally, the introduction of multiplicative edge weights allows us to directly represent the so-called complement edges which are used in OBDDs, thus providing a one to one mapping of all OBDDs to FEVBDDs. Applications such as integer linear programming and logic verification that have been proposed for EVBDDs also benefit from the extension. We also present a complete matrix package based on FEVBDDs and apply the package to the problem of solving the Chapman-Kolmogorov equations. | Introduction
Over the past decade a drastic increase in the integration of VLSI chips has taken place. Conse-
quently, the complexity of the circuit designs has risen dramatically so that today's circuit designers
rely more and more on sophisticated computer-aided design (CAD) tools. The goal of CAD tools
is to automatically transform a description in the algorithmic or behavioral domains to one in the
physical domain, i.e. down to a layout mask for chip production. We divide this process into four
different levels: system, behavioral, logic and layout.
At the logic level, the behavior of the circuit is described by boolean functions. The efficiency
of the algorithms applied in this level depends largely on the chosen data structure. Originally,
representations such as the sum of products form or factored form representations were predominant.
Today, the most popular data structure for boolean functions is the Ordered Binary Decision
Diagram (OBDD) which provides a compact and canonical representation. In the wake of the
successful introduction of the concept of function graphs by OBDDs, various other function graphs
have been proposed which are not constrained to boolean functions but can be used to denote
arithmetic functions. These function graphs have been used for state reduction in finite state
machines and logic verification of higher-level specifications. Additionally, they have been applied
to problems outside CAD, such as integer linear programming and matrix representation.
Since the introduction of OBDDs by R. E. Bryant [5], several different forms of function
graphs have been proposed. Functional Decision Diagrams (FDD) have been presented as an
alternative to OBDDs for representing boolean functions [3]. Ordered Kronecker Functional
Decision Diagrams (OKFDD) have been introduced in [10] as a generalization of OBDDs and
FDDs. Multi-Terminal Binary Decision Diagrams (MTBDD) [9] have been proposed to represent
integer valued functions and extended to functions on finite sets [2]. Edge-Valued Binary Decision
Diagrams (EVBDD) [12][13][14] provide a more compact means of representing such functions.
Recently Binary Moment Diagrams (BMD and *BMD) [7] were introduced which permit efficient
word-level verification of arithmetic functions (including multipliers of up to 62-bit word size).
This paper presents Factored Edge-Valued Binary Decision Diagrams (FEVBDD) as an extension
to EVBDDs. By associating both an additive and a multiplicative weight with the edges,
FEVBDDs can be used to represent a wider range of functions concisely. As a result, the computational
complexity for certain operations can be significantly reduced compared to EVBDDs.
Additionally, the introduction of multiplicative edge weights allows us to directly represent the
complement edges which are used in OBDDs. This paper also describes uses of FEVBDDs in
applications such as integer linear programming, logic verification and matrix representation and
manipulation.
2 Review of Edge-Valued Binary Decision Diagrams
Edge-Valued Binary Decision Diagrams, which were proposed by Lai, et al. [12][13][14] offer
a direct extension to the concept of OBDDs. By associating a so-called edge value ev to every
then-edge of the OBDD they are capable of representing pseudo-boolean functions such as integer
valued functions. Their application has proven successful in such areas as formal verification and
integer linear programming, spectral transformation, and function decomposition.
Definition 2.1 An EVBDD is a tuple hc; fi where c is a constant value and f is a rooted, directed
acyclic graph E) consisting of two types of vertices.
ffl A nonterminal vertex f 2 V is represented by a quadruple
child t (f); child e (f); evi, where variable(f) 2 fx is a binary variable
ffl The single terminal vertex f 2 T with value 0 is denoted by 0.
There is no nonterminal vertex f such that child t child e (f) and ev = 0, and there are no
two nonterminal vertices f and g such that g. Furthermore, there exists an index function
such that the following holds for every nonterminal vertex. If child t (f)
is also nonterminal, then we must have
child e (f) is nonterminal, then we must have
Definition 2.2 An EVBDD hc; fi denotes the arithmetic function c
f is the function f denoted by evi. The terminal node 0 represents the constant
denotes the arithmetic function
Definitions (2.1), (2.2) provide a graphical representation of pseudo-boolean functions. As a
consequence integer variables have to be encoded in binary as in
is a n-bit integer variable. It has been shown that EVBDDs form a canonical representation of
functions.
Definition 2.3 Given an EVBDD hc; fi representing f(x function F that for
each variable x assigns a value F(x) equal to either 0 or 1, the function EVBDDeval is defined as
c f is the terminal node 0
child e (f)i; F)
boolean arithmetic
Table
1: Arithmetic equivalents of boolean functions
Boolean functions can be represented in EVBDDs by using the integers 0 and 1 to denote
the boolean values true and false. Boolean operations are implemented through arithmetic
operations as shown in Table 1. A method has been described by Lai, et al. that converts any
OBDD representation of a boolean function to its corresponding EVBDD representation. It can
be proven that both function graphs OBDD v and EVBDD denoting the same function f
share the same topology except that the terminal node 1 is absent from the EVBDD and the edges
connected to it are redirected to the single terminal node 0. Additionally, it was shown that boolean
operations executed on EVBDDs have the same time complexity O(jf j \Delta jgj) as boolean operations
on OBDDs. The concept of complement edges cannot be realized in EVBDDs.
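A minimal Python sketch (not the authors' implementation) may help make the recursive evaluation rule and the arithmetic encodings of the boolean connectives concrete; it assumes the usual EVBDD semantics in which the edge value ev is added whenever the then-edge is taken.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Node:
    var: int                      # variable index (smaller index = closer to the root)
    child_t: Optional["Node"]     # then-child (None encodes the terminal 0)
    child_e: Optional["Node"]     # else-child
    ev: int                       # edge value on the then-edge

def evbdd_eval(c, node, assignment):
    """EVBDDeval(<c, f>, F): follow the path selected by F, adding ev on then-edges."""
    while node is not None:
        if assignment[node.var]:
            c, node = c + node.ev, node.child_t
        else:
            node = node.child_e
    return c

# <0, f> for f(x0, x1) = 2*x0 + x1 (x0 at the root).
x1 = Node(var=1, child_t=None, child_e=None, ev=1)
x0 = Node(var=0, child_t=x1, child_e=x1, ev=2)
for a in ((0, 0), (0, 1), (1, 0), (1, 1)):
    print(a, evbdd_eval(0, x0, a))

# Boolean connectives on {0,1}-valued functions, realized arithmetically (cf. Table 1):
AND = lambda a, b: a * b
OR  = lambda a, b: a + b - a * b
NOT = lambda a: 1 - a
XOR = lambda a, b: a + b - 2 * a * b
```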
As has been done for OBDDs, a generic operation apply can be defined that implements
arbitrary arithmetic operations on the EVBDD representations of two arithmetic
functions f and g. In general, the time complexity of such an operation on two EVBDDs
and
the flattened EVBDDs of respectively. A flattened EVBDD is defined in exactly the
same manner as an MTBDD. For operations such as addition, subtraction, scalar-multiplication, etc.
the time complexity of apply can be drastically reduced by exploiting certain properties. A scalar
multiplication c \Delta f) can be done with time complexity O(jhc f ; f ij) by simply multiplying all
edge values by c. All operations op, such as addition, that fulfill the additive property
have the reduced time complexity O(jhc f
Based on EVBDDs, the concept of structured EVBDDs (SEVBDDs) has been developed in
[14]. SEVBDDs allow the modeling of conditional expressions and vectors. Their main use lies in
the field of formal verification.
3 Factored Edge-Valued Binary Decision Diagrams
Factored Edge-Valued Binary Decision Diagrams (FEVBDD) are an extension to EVBDDs. By
associating both an additive and a multiplicative weight with the true-edges 1 FEVBDDs offer a
more compact representation of linear functions, since common subfunctions differing only by an
affine transformation can now be expressed by a single subgraph. Additionally, they allow the
notion of complement edges to be transferred from OBDDs to FEVBDDs.
Definition 3.1 An FEVBDD is a tuple hc; w; f; rulei where c and w are constant values, f is a
rooted, directed acyclic graph E) consisting of two types of vertices, and rule is the set of
weight normalizing rules applied to the graph.
ffl A nonterminal vertex f 2 V is represented by a 6-tuple 2
child t (f); child e (f); ev; w is a binary
variable.
ffl The single terminal vertex f 2 T with value 0 is denoted by 0. By definition all branches
leading to 0 have an associated weight
There is no nonterminal vertex f such that child t
there are no two nonterminal vertices f and g such that g. Furthermore, there exists an index
function such that the following holds for every nonterminal vertex. If
child t (f) is also nonterminal, then we must have
If child e (f) is nonterminal, then we must have
Definition 3.2 A FEVBDD denotes the arithmetic function c f
is the function f denoted by i. The terminal node 0 represents the constant
denotes the arithmetic function
Definition 3.3 Given a FEVBDD representing
that for each variable x assigns a value F(x) equal to either 0 or 1, the function FEVBDDeval is
defined as:
1 The GCD rule requires also a multiplicative weight to be associated with the else-edges.
2 If we use the rational rule it holds that w nodes. Thus we can represent a nonterminal vertex by a
5-tuple hvariable(f); child t (f); child e (f); ev; w t
c f f is the terminal node 0
Figure
As an example, we construct the various function graphs based on the different decompositions
of function f given in its tabular form in Figure 1.
(2)
9 +3 (y(3 +2 z)
Equation (2) is in a form that directly corresponds to the function decomposition for MTBDDs or
ADDs and the tabular form. Equations (3) and (4) reflect the structure of the decomposition rules
for EVBDDs and FEVBDDs, respectively. The different function graphs are shown in Figure 1.
Figure
goes here.
Figure
3 goes here.
Figure
4 goes here.
Representations of signed integers based on FEVBDDs are presented in Figure 2 and representations
of word-level sum and product are given in Figures 3 and 4.
Lemma 3.1 Given two FEVBDDs which have been generated
using the same weight normalizing rule and with f and g being non-isomorphic, it holds
that there exists an assignment F 2 f0; 1g n such that c f +w f \Delta f 6= c g +w g \Delta g for this assignment.
Proof:
Case 1: if c f 6= c g then let
.
Case 2: c by the definition of non-isomorphism it holds that 9F such that
Consequently, we have
that for this assignment F.
Case 3: c we assume that it holds that f and g are non-isomorphic and that
for all assignments F. This implies that w f
g.
Consequently, f and g are isomorphic which contradicts the original assumption. Thus, it
holds that 9F such that c f g. 2
Theorem 3.1 Two FEVBDDs that have been generated
using the same weight normalizing rule, i.e. rule , denote the same function, i.e.
only if c , and f and g are isomorphic.
Proof:
Sufficiency: If c and f and g are isomorphic, then 8F,
directly from
the definitions of isomorphism and FEVBDDeval.
Necessity: If c f 6= c g then let holds that FEVBDDeval(hc f
then let F be an arbitrary assignment
such that FEVBDDeval(h0;
it holds that FEVBDDeval(hc f
and g are isomorphic
then it holds by the definition of isomorphism and FEVBDDeval that
It follows that c f +w f \Delta
val 6= c g +w g \Delta val. If f and g are non-isomorphic lemma 3.1 holds. Nowwe have to prove the lemma
for the last condition f being isomorphic to g. We need to show that if f and g are not isomorphic,
then 9F 2 f0; 1g n such that FEVBDDeval(h0;
Without loss of generality, we assume index(variable(f)) - index(variable(g)). Let
prove the lemma by induction on k.
Base: If and g are terminal nodes. Furthermore, and g are
isomorphic.
Induction hypothesis: Assume the above holds for
Induction: We show that the hypothesis holds for
i.
Case 1:
If ev f 6= ev g then let F(x n\Gammak it holds that
ev g and w t f
then let F be an arbitrary assignment such that F(x n\Gammak
holds that FEVBDDeval(h0;
\DeltaFEVBDDeval(h0;
and g t
are isomorphic it holds that FEVBDDeval(h0;
val. Thus we have that ev f +w t f
and g t
are nonisomorphic then lemma 3.1 is applicable. Almost the identical prove can
be given for ev
, and w e f
and
, or f e
and g e
are nonisomorphic.
Subcase 1: If f t
and g t
are nonisomorphic, then from
and the induction hypothesis, we see that there exists some F
such that FEVBDDeval(h0;
let F 0 be defined as F 0
Subcase 2: Otherwise f e
and g e
are nonisomorphic, then by similar arguments, letting
By definition of a reduced FEVBDD, we cannot have ev
being isomorphic to f e
. If ev f 6= 0, let F(x n\Gammak
then FEVBDDeval(h0;
g is independent of the first n \Gamma k bits. If ev
then let F be
an assignment such that F(x n\Gammak
index(variable(g)))). Furthermore, let F be such that FEVBDDeval(h0;
val f 6= 0 and FEVBDDeval(h0; If the corresponding subgraph of
f with top-variable x n\Gammak and g are isomorphic then it holds that val
val g . If the graphs
are non-isomorphic we can apply the same reasoning as we did in the proof of lemma 3.1. Oth-
erwise, f t
and f e
are non-isomorphic and at least one of them is not isomorphic to g. If f t
and
are non-isomorphic, then by induction hypothesis, there exists an assignment F such that
1 and F 0 It holds that FEVBDDeval(h0;
As shown above, FEVBDDs form a canonical representation of a function only for specific
weight normalizing rules that uniquely determine how the node weight of a new node is computed
based on its both descendants. We propose two basic rules that can be used to guarantee canonicity
for FEVBDDs. Given two FEVBDDs
rule the node weight w of hc; w; f ; rulei is computed as follows
1. GCD rule:
2. RATIONAL rule:
make new node(x i ,hc
f
/* compute the new weights */
/* guarantee uniqueness */
return
Table 2: Make New Node
These weight normalizing rules (cf. Table 3) are applied whenever a new node is generated using the make_new_node routine (cf. Table 2). This routine enforces both the canonicity of the function
graph as well as its uniqueness.
The routine find or add preserves the uniqueness of all nodes. Before a new node is actually
created a quick hash table lookup is performed and, if the node is already a member of the table,
the stored node with its unique ID is returned. Otherwise, a new node entry in the hash table is
created and the new node with its unique ID is returned. Thus it is guaranteed that every node is
stored only once in the hash table.
Although the GCD rule requires a multiplicative weight to be associated with both the true- and
the else-edges, there are some cases where it might be the rule of choice. If the function range is
purely integer the GCD rule avoids dealing with fractions. This is particularly valuable, since all
arithmetic operations on fractions are significantly more time consuming than the built in hardware
routines for integers. Furthermore, the restriction to integers by use of the GCD rule brings a clear
advantage in memory efficiency. Even though we need to store an additional weight, the memory
consumption per node is less than when using the rational rule which requires the use of fractions.
This is because every fraction is internally represented as one integer for the numerator and one
norm weight(ev; w
f
case 'GCD':
else if(w T 6=
else
return(sign
case 'RATIONAL':
return
else if(w T 6=
return
else return ev;
break;
Table 3: Norm Weight
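Since the listing in Table 3 did not survive the extraction intact, the following hedged sketch shows what such a normalization step could look like. It assumes the GCD rule extracts the (signed) greatest common divisor of the additive value ev and the two multiplicative weights, while the RATIONAL rule rescales so that the else-weight becomes 1; the function and parameter names are ours, not the package's.

from math import gcd
from fractions import Fraction

# Hedged sketch of weight normalization for a freshly built node with additive
# value ev and multiplicative weights w_t (true edge) and w_e (else edge).
# Returns the weight pulled out of the node plus the normalized entries.
def norm_weight(ev, w_t, w_e, rule):
    if rule == 'GCD':
        g = gcd(gcd(abs(ev), abs(w_t)), abs(w_e))
        if g == 0:
            return 1, ev, w_t, w_e                 # all-zero node, nothing to extract
        if w_t < 0 or (w_t == 0 and ev < 0):       # fix a sign convention for uniqueness
            g = -g
        return g, ev // g, w_t // g, w_e // g
    elif rule == 'RATIONAL':
        w = Fraction(w_e) if w_e != 0 else (Fraction(w_t) if w_t != 0
                                            else Fraction(ev if ev != 0 else 1))
        return w, Fraction(ev) / w, Fraction(w_t) / w, Fraction(w_e) / w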
for the denominator. Of course, as soon as the application requires the use of fractions the rational
rule should be preferred. Nevertheless, the GCD rule is still applicable since we define:
gcd( u
3.1 Operations
As has been done for OBDDs [5] and EVBDDs [14], we provide a generic algorithm apply that
implements arbitrary arithmetic operations on two FEVBDDs (cf. Table 4). Apply takes two
FEVBDDs rule g i, as well as an operation op as its arguments.
Both FEVBDDs have to be based on the same weight normalizing rule. The algorithm recursively
branches at the top variable, i.e. the variable with the least index in f or g until it reaches a terminal
case. Terminal cases depend on the operation op; as an example, for op='+' we have the terminal
case
The computational efficiency of this algorithm can be improved significantly by taking advantage
of a computation cache. Before the recursive process is started, a quick lookup in the
computation cache is performed and if successful, then the result of op is returned immediately
without further computation. The entries of the cache are uniquely identified by a key consisting
of the operands and the operation op. Whenever a new
result is computed it is stored in the computation cache. In general the complexity of operations
performed by apply is O(||<c_f, w_f, f>|| · ||<c_g, w_g, g>||), where ||·|| denotes the size of the flattened function graph.
As mentioned before we can further improve the computational complexity of apply by making
use of properties of specific operations. We adapt the concept of an additive property proposed for
EVBDDs by Lai, et al., [14] and extend it to the so-called affine property for FEVBDDs.
Definition 3.4 An operator op applied to <c_f, w_f, f, rule> and <c_g, w_g, g, rule> is said to satisfy the affine property if
(c_f + w_f · f) op (c_g + w_g · g) = (c_f op c_g) + w · ((w'_f · f) op (w'_g · g)).
The factor w is defined as w = gcd(w_f, w_g) with w_f = w · w'_f and w_g = w · w'_g; the constants c_f and c_g can be of arbitrary value. ³
3 Similar to the rational rule we can alternatively define the affine property as follows:
All the benefits of the affine property remain the same.
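To make Definition 3.4 concrete, the following small worked instance (our own illustration, not taken from the paper's figures) shows how addition satisfies the affine property:
(5 + 6 · f) + (7 + 4 · g) = (5 + 7) + 2 · (3 · f + 2 · g),
with w = gcd(6, 4) = 2, w'_f = 3 and w'_g = 2, so the constants and the common multiplicative factor can be handled separately from the recursive computation on f and g.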
/* check for a terminal case */
if(terminal
return
/* is the result of op already available in the computation cache */
if(comp table lookup(hc rule g i,op,hc ans ; w ans ; ans; rule ans i))
return (hc ans ; w ans ; ans; rule ans i);
/* perform the recursive computation of op*/
child t (g); rulei;
else f
child t (f); rulei;
child e (f); rulei;
else f
ge
/* store the result in the computation cache */
comp table insert(hc
return
Table 4: Apply
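Because the pseudocode in Table 4 was garbled in extraction, the following sketch illustrates the recursive structure and the role of the computation cache. It is a simplified, hypothetical implementation: the helper names are ours, weight normalization and the affine/GCD refinements described in the text are omitted, and variables are assumed to be ordered by their natural comparison.

# Hedged sketch of apply on two FEVBDD triples (c, w, f); f is None for the
# terminal 0, otherwise (variable, child_t, child_e, ev, w_t, w_e).
def cofactor(node, v, branch):
    c, w, f = node
    if f is None or f[0] != v:
        return (c, w, f)                       # function does not depend on v at the top
    var, ft, fe, ev, w_t, w_e = f
    return (c + w * ev, w * w_t, ft) if branch == 1 else (c, w * w_e, fe)

def make_node(v, t, e):
    if t == e:
        return t                               # redundant test eliminated
    (c_t, w_t, f_t), (c_e, w_e, f_e) = t, e
    # no normalization here; a real package would call norm_weight and a
    # find_or_add hash table to keep the graph canonical and shared
    return (c_e, 1, (v, f_t, f_e, c_t - c_e, w_t, w_e))

def apply_op(op, a, b, cache):
    if a[2] is None and b[2] is None:          # terminal case
        return (op(a[0], b[0]), 1, None)
    key = (op, a, b)
    if key in cache:                           # computation cache lookup
        return cache[key]
    v = min(x[2][0] for x in (a, b) if x[2] is not None)   # top variable
    t = apply_op(op, cofactor(a, v, 1), cofactor(b, v, 1), cache)
    e = apply_op(op, cofactor(a, v, 0), cofactor(b, v, 0), cache)
    res = make_node(v, t, e)
    cache[key] = res
    return res

With op set to integer addition this produces a correct (if unnormalized) diagram for f + g; the cache lookup is what keeps the number of recursive calls proportional to the product of the two graph sizes.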
Operations that satisfy the affine property are addition, subtraction, scalar multiplication and logical
bit shifting. The main advantage of the affine property lies in reducing the computational complexity
of apply. Since we can separately compute the parts of the result generated by the constants c f and
c g and by the two subgraphs h0; w rulei, the hit ratio of the computation
cache can be drastically increased by separating the influence of the constants and always storing
only the results for This concept is applied to every recursion step so that the constant
value is never passed down to the next recursion level. Unfortunately, we still have to pass the
multiplicative weights w f and w g since they cannot be separated from the functions f and g. To
achieve a further improvement in the hit ratio, we extract the common divisor w from w f and w g and
promote only w 0
f and w 0
g . This is an advantage in such cases as reducing the problem of performing
to the already computed problem (0
quantify the influence of the GCD extraction the worst case computational complexity for operations
satisfying the affine property is given as O(jhc f
the EVBDDs corresponding to the FEVBDDs respectively.
Scalar multiplication and logical-bit shifting offer a better computational complexity since they
can be computed in time independent of the size of the function graph. Scalar multiplication only
requires the weights of the root node to be multiplied. In the case of EVBDDs we have to multiply
every edge weight with the scalar; a task of complexity O(jf j).
Since multiplication does not satisfy the affine property we are basically required to use the
original version of apply. For the multiplication of two functions that both have a high percentage
of reconverging branches, the following approach tends to improve the cache efficiency:
We now have only O(jhc calls to multiply but every call requires
three calls to apply for adding the separate terms. The first addition is not costly since the first term
is always a constant, however, the second and third addition are potentially costly.
In addition to the additive property, two further properties - the bounding property and the
domain-reducing property - have been introduced by Lai, et al. [14] [12]. As has been done for
the additive property, these properties can be easily adapted to FEVBDDs.
3.2 Representation of Boolean Functions
Boolean Functions are represented in FEVBDDs by encoding the boolean values true and false
as integers 1 and 0, respectively. All the basic boolean operations can be easily represented using
only arithmetic operations. Thus we can easily represent any boolean function using FEVBDDs.
Although we could implement the boolean operations based on their corresponding arithmetic
functions, it is by far better in terms of computational complexity to directly use apply for boolean
operations. All we need to do is to provide the necessary terminal cases for apply(hc
boolean op). In the case of the boolean conjunction operation for example the
terminal cases are:
1.
2.
3. if(hc
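The exact case list above was lost in extraction; plausible terminal cases for conjunction on 0/1-valued FEVBDDs are, for instance, the following hedged reconstruction (triples (c, w, f) with f = None denoting the constant c):

# Hedged reconstruction of terminal cases for boolean AND on 0/1-valued FEVBDDs.
def and_terminal_case(a, b):
    (c_f, w_f, f), (c_g, w_g, g) = a, b
    if f is None and c_f == 0:       # first operand is the constant 0
        return (0, 1, None)
    if g is None and c_g == 0:       # second operand is the constant 0
        return (0, 1, None)
    if f is None and c_f == 1:       # first operand is the constant 1
        return b
    if g is None and c_g == 1:       # second operand is the constant 1
        return a
    return None                      # not a terminal case, recurse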
To convert a boolean function from its OBDD to its FEVBDD representation we can adapt the
algorithm suggested by Lai in [14]. Additionally, the concept of multiplicative weights allows us
to directly represent the so called complement edges, so that we need to take care of this case in
the algorithm:
1. convert the terminal node 0 to h0; 0; 0; rulei and 1 to h1; 0; 0; rulei.
2. for each nonterminal node hx i ; t; ei in the OBDD such that t and e have already been converted
to FEVBDDs as the following conversion rules
are applied:
3. if the branch leading fromnode hx i ; t; ei to t or e is a complement edge we have to perform the
complementation by computing e, respectively. This is achieved by multiplying
both weights c t (c e ) and w t (w e ) by \Gamma1 and later adding 1 to c t (c e ). The four basic conversion
rules are listed below:
The above conversion rules are not complete in the case of FEVBDDs since we can now
also have variations in the multiplicative weights which can either be +1 or \Gamma1. These cases
however are handled exactly according to the norm weighting rule that has been presented
before, so that we do not explicitly list them here.
As it has been done for EVBDDs [14], it can be shown that the following theorems hold.
Theorem 3.2 Given an OBDD representation v of a boolean function with complement edges
being allowed and an FEVBDD have the same topology except that
the terminal node 1 is absent from the FEVBDD v 0 and the edges connected to it are redirected to
the terminal node 0.
Theorem 3.3 Given two OBDDs f and g with complement edges being allowed and the corresponding
FEVBDDs time complexity of boolean
operations on FEVBDDs (using apply) is O(|f| · |g|).
An example of a FEVBDD representing a boolean function with complement edges is given
in Figure 5. This FEVBDD represents the four output functions of a 3-bit adder. It has the same
topology (except for the terminal edges) as the corresponding OBDD depicted in the same figure.
As it is shown in this example, FEVBDDs successfully extend the use of EVBDDs to represent
boolean functions as they inherently offer a way to represent complement edges. Furthermore, the
boolean operation 'not' can now be performed in constant time since it only requires manipulation
of the weights of the root node.
Figure 5 goes here.
3.3 Logic Verification
The purpose of logic verification is to formally prove that the actual implementation satisfies the
conditions defined by the specification. This is done by formally showing the equivalence between
the combinational circuit, i.e. the description of the design and the specification of the intended
behaviour.
In general, the implementation is represented by an array of boolean functions f b and the
specification is given by a word-level function fw . In order to transform the bit-level representation
to the word-level we can use any encoding function to encode the binary input signals to the circuit.
The set of input signals is partitioned into several subsets of binary signals x every
array x i is then encoded using an encoding function encode i that provides a word-level interpretation
of the binary input signals. Common encoding functions are signed-integer, one's-complement and
two's-complement. The corresponding FEVBDDs are shown in Figure 2. Thus, the implementation
can be described by an array of boolean functions f b The specification is given as a
word-level function fw (X Verification is then done by proving the equivalence between
an encoding of the binary output signals of the circuit, i.e. the array of boolean functions, and the
word-level function of the encoded input signals:
encode out (f b
This strategy for logic verification was first proposed by Lai, et al., using EVBDDs [12][14]. Since
FEVBDDs can describe both bit-level and word-level functions, they can be successfully applied
to logic verification.
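As an illustration (our own example, not one of the paper's benchmarks), consider an n-bit adder with sum bits s_0, ..., s_{n-1} and carry-out c_out. With two's-complement-free unsigned encodings the verification condition amounts to the word-level identity
sum_{i=0}^{n-1} 2^i · s_i + 2^n · c_out = sum_{i=0}^{n-1} 2^i · x_i + sum_{i=0}^{n-1} 2^i · y_i,
where both sides are built as FEVBDDs over the same input variables; because the representation is canonical, the final check is a comparison of the two root tuples.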
Although all word-level operations can be represented by FEVBDDs, the space complexity of
certain operations becomes exponential so that their application is limited to small word-length.
Both EVBDD and FEVBDD representations of word-level multiplication are exponential;
FEVBDDs however offer significant savings in memory consumption over EVBDDs. As can
be seen in Figure 4 for word-level multiplication of two three-bit integers, the EVBDD contains
28 internal nodes whereas the FEVBDD representation requires only 10 nodes. In general, the
EVBDD denoting the multiplication of two n-bit integers has (n + 1)(2^n - 1) nodes. The corresponding FEVBDD contains only n + 2^n - 1 nodes, and the ratio of EVBDD nodes to FEVBDD nodes is therefore roughly n + 1. As can be seen from this ratio, the savings in the number of nodes in
the FEVBDD representation are of order n. As an example, a 16-bit multiplier requires 1,114,095
EVBDD nodes but only 65,551 FEVBDD nodes. Even if we take into account that a FEVBDD
node requires 20 bytes versus only 12 bytes per EVBDD node, the savings remain significant
(EVBDD:13.3 Mbyte, FEVBDD: 1.3 Mbyte).
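The quoted figures follow directly from these node-count formulas: (16 + 1) · (2^16 - 1) = 17 · 65,535 = 1,114,095 EVBDD nodes versus 16 + 2^16 - 1 = 65,551 FEVBDD nodes, and at 12 and 20 bytes per node this gives 1,114,095 · 12 ≈ 13.4 MByte versus 65,551 · 20 ≈ 1.3 MByte, matching the numbers above up to rounding. Likewise, for n = 3 the formulas yield 4 · 7 = 28 and 3 + 7 = 10 nodes, as stated for Figure 4.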
As has been done for EVBDDs [14], FEVBDDs can also be extended to structured FEVBDDs
which allow the modeling of conditional expressions and vectors.
3.4 Integer Linear Programming
An algorithm FGILP for solving Integer Linear Programming (ILP) problems based on EVBDDs
has been proposed by Lai, et al. in [15]. FGILP realizes an ILP solver based on function graphs,
which uses a mixed branch-and-bound/implicit-enumeration strategy. It has been shown that this
approach can successfully compete with other branch-and-bound strategies that require the solution
of the corresponding Linear Programming problems. The latter strategy is the one most widely
applied in commercial programs.
An ILP problem can be formulated as follows:
minimize sum_j c_j · x_j (5)
subject to sum_j a_ij · x_j <= b_i, i = 1, ..., m (6)
with x_j integer.
Since both EVBDDs and FEVBDDs allow only binary decision variables, the encodings shown
in Figure 2 have to be applied. A 32-bit integer, for example, can be represented by an EVBDD or FEVBDD with 32 nodes. Since FEVBDDs form an extension of EVBDDs we can also apply
FEVBDDs to solve ILP problems. We expect a reduction in the memory requirement for FGILP
when using FEVBDDs. This is due to the fact that different multiples of the integer variables x i
appear in equations (5) and (6). If we use EVBDDs to represent these multiples of x i , we have
to build an EVBDD for every different coefficient a ij since scalar multiplication on EVBDDs is
performed by multiplying all edge weights with the factor. If we use FEVBDDs, however, we
only have to store the FEVBDD representing x i once. Multiples of x i can be easily realized by
associating the corresponding multiplicative edge weights with dangling incoming edges leading
to x_i. As an example, storing 6x, 7x and 5x for a 32-bit variable x requires 96 nodes if we use EVBDDs but only 32 nodes if we apply FEVBDDs.
3.5 Implementation of Arbitrary Precision Arithmetic
The introduction of multiplicative weights in combination with the RATIONAL rule for weight
normalizing makes it necessary to extend the value range of the edge weights from the integer
domain to the rational domain. This is done in a way such that any future expansion to other
domains such as the complex domain can be easily achieved. All operations on edge weights are
accessed through a standardized interface that invokes the specified function and then executes
the requested operation depending on the current mode. Thus, the FEVBDD code remains fully
independent of the selected domain. By changing to another mode we can easily switch from the
integer domain to the rational domain, for example. This means we can still use the fast routines
for single precision integers when necessary.
Multiple precision integers are realized as arrays of integers and the arithmetic operations are
implemented based on the algorithms for multiple precision arithmetic given by Knuth in [11].
Multiple precision fractions are implemented as arrays of two multiple precision integers where
one integer represents the numerator and the other one the denominator. It is enforced by the
package that the numerator and denominator remain relative prime and only the numerator can
be signed. This is achieved by computing the greatest common divisor (GCD) of numerator and
denominator and dividing both the numerator and denominator by the GCD. This operation is
performed whenever an input is given. Internally the data is guaranteed to remain in the normalized
form as this form is strictly enforced by all operations. Thus, a rational value is always uniquely
represented by the numerator and denominator.
The GCD can be computed very fast by Euclid's algorithm or the binary GCD algorithm
[11]. For multi-precision fractions we use the binary GCD algorithm since it works very fast for
integers of multiple word length. It only relies on subtraction and right shifting and does not
require division operations. For single word precision fractions we employ the classical version
of Euclid's algorithm since division can be executed very efficiently for single word integers. The
basic arithmetic operations for fractions are realized as follows:
• multiplication: (u/u') · (v/v') = (u · v) / (u' · v')
• division: (u/u') / (v/v') = (u · v') / (u' · v)
• addition: (u/u') + (v/v') = (u · v' + v · u') / (u' · v')
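Python's fractions module already implements exactly this normalized representation; a hand-rolled version along the lines described above might look as follows (illustrative only, the function names are ours):

from math import gcd

# Normalized fraction arithmetic: numerator and denominator are kept relatively
# prime and only the numerator carries the sign, as described in the text.
def normalize(num, den):
    if den < 0:
        num, den = -num, -den
    g = gcd(abs(num), den)
    return (num // g, den // g) if g else (0, 1)

def frac_mul(a, b):
    return normalize(a[0] * b[0], a[1] * b[1])

def frac_div(a, b):
    return normalize(a[0] * b[1], a[1] * b[0])

def frac_add(a, b):
    return normalize(a[0] * b[1] + b[0] * a[1], a[1] * b[1])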
3.5.1 Symbolic Operations and Finite Fields
FEVBDDs are not constrained to integer valued functions. As one can already see in the use of a
rational rule, we can easily represent functions with rational function values. Complex values are
also feasible; additionally, we can use symbolic computation. Even though the value ranges can be
extended by using rational or complex edge weights, the decision variables still have to be binary.
Thus, if we want to represent linear functions containing variables from the above value ranges,
we have to encode them binarily such as it has been done for integers. Generally this approach
leads to a means to represent any function on finite fields by FEVBDDs as it has been proposed for
ADDs [2]. In this case the FEVBDD generally represents the function
where \Phi and fi denote operations on the finite field. The ITE operator acts as a switch that either
selects the subfunction denoted by the true- or else-edge. Contrary to the ADD approach we can
exploit relationships between the subgraphs.
4 Matrix Representation and Manipulation
Matrices have been successfully represented using MTBDDs [8] [9] and ADDs [2] and implementations
of the basic matrix operations such as addition and multiplication have been given. A
popular class of matrices that can be efficiently represented by MTBDDs and EVBDDs is the class
of Walsh matrices which can be generated by a recursive rule.
4.1 Representation of Matrices
The basic idea in using function graphs to represent matrices is to encode both the row and
column position of the matrix elements using binary variables. An 8 \Theta 8 matrix, for example,
requires 3 binary variables for the rows and another 3 for the columns. Basically, we can view
the problem of representing a m \Theta n matrix as representing a function from the finite set
of all element positions to the finite set R of its elements.
The binary variables giving the row position are called row designators x 2 fx g, the
ones denoting the column position are called column designators y 2 fy g. For the
imposed variable ordering row and column designators are mixed together such that the order is
g. Because of this chosen variable ordering subtrees in the function graph
directly correspond to submatrices in the given matrix, as can be seen in Figure 6. Based on this
correspondence the pseudo-boolean function denoting the matrix M can be given easily:
f_M(x_1, y_1, ..., x_n, y_n) = M(x, y), where x = (x_1 ... x_n) encodes the row position and y = (y_1 ... y_n) encodes the column position.
Figure 6 goes here.
Furthermore, this ordering allows matrices to be represented compactly if they have submatrices
that are identical (MTBDDs) or can be transformed into each other by an affine transformation 4
(FEVBDDs). Since the concept of square matrices, i.e. vertical size equal to horizontal size, helps to keep many algorithms efficient and simple, we will from now on only consider square matrices of dimension max(m, n). To make non-square matrices square we can easily pad them with rows
or columns filled with zeros. This does not significantly increase our memory consumption for
storing the matrix since the padded blocks are uniform and can therefore be represented by only a
few nodes.
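The encoding itself can be illustrated without any decision-diagram machinery; the following sketch (our own illustration, hypothetical names) shows how a matrix becomes a function of interleaved row and column bits, which is why subtrees of the function graph correspond to submatrices.

# View an m x n matrix as a pseudo-boolean function from interleaved row/column
# bits (x1 y1 x2 y2 ...) to its entries, after padding to a 2^k x 2^k square.
def matrix_as_function(matrix):
    rows, cols = len(matrix), len(matrix[0])
    k = max(rows - 1, cols - 1, 1).bit_length()
    def f(bits):                     # bits = (x1, y1, x2, y2, ..., xk, yk)
        r = int("".join(str(b) for b in bits[0::2]), 2)
        c = int("".join(str(b) for b in bits[1::2]), 2)
        return matrix[r][c] if r < rows and c < cols else 0
    return f, k

Fixing x1 = y1 = 0, for example, restricts f to exactly the top-left quadrant of the (padded) matrix.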
As it has already been mentioned, MTBDDs only offer a compact and memory efficient
representation of matrices that feature identical subblocks. They require a different terminal node
for each distinct matrix element. FEVBDDs can do far better than that. The concept of FEVBDDs
allows two subblocks to be represented by the same subgraph if they differ only by an affine
transformation of their elements. We will now introduce a special class of matrices that can always
be represented by a FEVBDD of linear size. For this class of matrices the sizes of the corresponding
MTBDD, EVBDD and *BMD are likely to be exponential.
Definition 4.1 A recursively-affine matrix is recursively generated using the following rules:
1. we begin with a 1 × 1 matrix M_0 = (m), where m is an integer or rational constant value,
2. in every recursion step a new matrix M_{n+1} is created based on the previous result M_n such that each of its four quadrants is an affine transformation of M_n, i.e.
M_{n+1} = [ a_1 · M_n + b_1   a_2 · M_n + b_2 ; a_3 · M_n + b_3   a_4 · M_n + b_4 ],
with a_i, b_i being arbitrary integer or rational numbers.
⁴ An affine transformation is a transformation of the form y → a · x + b.
Figure 7 shows the general structure of the FEVBDD that corresponds to a recursion step in building
up a recursively affine matrix. In every recursion step a structure as shown in Figure 7 is added to
the already constructed FEVBDD.
Figure 7 goes here.
As can be seen from Figure 7 we only need 3 nodes to represent a recursively-affine
matrix of size n x n. As an example of a recursively affine matrix we build in Figure 8 the FEVBDD
for the matrix M given below:
9 5
26 22 64
Figure
8 goes here.
An important class of matrices that belongs to the family of recursively-affine matrices is the set
of Walsh matrices in the Hadamard ordering [17]. These matrices can be used to compute spectral
transforms of boolean functions. They are recursively defined as follows: H^h_0 = (1) and H^h_{n+1} = [ H^h_n  H^h_n ; H^h_n  -H^h_n ].
Figure 9 shows both the FEVBDD and EVBDD representations of the Walsh matrix H^h_3. As can be seen in Figure 9, the size of the FEVBDD representation is 2 · n where n denotes the order of
the Walsh matrix. The size of the EVBDD representation is 4
Figure 9 goes here.
Generally speaking, employing function graphs such as MTBDDs or FEVBDDs to represent
sparse matrices offers the following advantages:
1. In comparison with normal sparse data structures, function graphs do provide a uniform
log 2 (N) access time, where N is the number of real elements being stored in the function
graph (for example, all non-zero elements of a sparse matrix)
2. Function graphs may not be able to beat sparse-matrix data structures in terms of worst space
complexity. However, recombination of isomorphic subgraphs may give a considerable
practical advantage to function graphs over other data structures. This is particularly valid
for FEVBDDs since the same subgraph can represent all the matrices that can be generated
by an affine transformation of the matrix represented by the subgraph.
4.2 Operations
Operations on matrices can be divided into two major groups. The first group comprises termwise
operations such as scalar multiplication, addition, etc. The second group is formed by matrix
multiplication, matrix transpose and matrix inversion. Termwise operations are easily implemented
based on function graphs. We can simply use apply to compute all termwise operations on matrices.
This is obviously possible since apply(op) performs the operation op on every single function value,
i.e. it works in a termwise manner. Matrix specific operations such as transposition require their
own tailored algorithms.
Matrix multiplication is clearly a non-termwise operation since it requires computing the scalar
vector product of a row of the left matrix with a column of the right matrix to get the value of a
single matrix element of the product matrix. Therefore, we will present two different recursive
procedures to perform matrix multiplication on function graphs. The first method was proposed
by McGeer [9]. This algorithm has the most direct link to the common conventional method for
matrix multiplication. In every recursion step the problem is divided into four subproblems until
a terminal case has been reached. In these steps operands are expanded with regard to a pair of
row and column designators. This expansion even takes place if the function graphs are actually
not dependent on the current pair of internal variables. By doing so there is no need for a scaling
step as is necessary in the second method. Let matrix multiplication be denoted by ? and matrix
addition by +. This method can be formally stated as:
or written in terms of matrices:
[ h_{x'y'}  h_{x'y} ]   [ f_{x'z'}  f_{x'z} ]   [ g_{z'y'}  g_{z'y} ]
[ h_{xy'}   h_{xy}  ] = [ f_{xz'}   f_{xz}  ] ? [ g_{zy'}   g_{zy}  ]
where a primed index denotes the cofactor with respect to the else-branch of that variable. The computations performed in every recursion step are:
h_{x'y'} = f_{x'z'} ? g_{z'y'} + f_{x'z} ? g_{zy'},   h_{x'y} = f_{x'z'} ? g_{z'y} + f_{x'z} ? g_{zy},
h_{xy'} = f_{xz'} ? g_{z'y'} + f_{xz} ? g_{zy'},   h_{xy} = f_{xz'} ? g_{z'y} + f_{xz} ? g_{zy}.
Obviously, this method requires eight calls to matrix multiply and four calls to matrix add in every
recursion step, i.e. for every internal variable pair.
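Conceptually (ignoring the function-graph machinery and working on plain 2^k x 2^k arrays), this first method corresponds to the following recursion; on FEVBDDs the four quadrants are the cofactors with respect to one row and one column designator, and '+' and '*' become graph operations.

# Conceptual sketch of the quadrant recursion on plain lists of numbers.
def block_mult(f, g):
    n = len(f)
    if n == 1:
        return [[f[0][0] * g[0][0]]]
    h = n // 2
    def quad(m, i, j):
        return [row[j*h:(j+1)*h] for row in m[i*h:(i+1)*h]]
    def add(a, b):
        return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]
    def glue(q00, q01, q10, q11):
        return [r0 + r1 for r0, r1 in zip(q00, q01)] + \
               [r0 + r1 for r0, r1 in zip(q10, q11)]
    F = [[quad(f, i, j) for j in (0, 1)] for i in (0, 1)]
    G = [[quad(g, i, j) for j in (0, 1)] for i in (0, 1)]
    # eight recursive multiplications and four additions per step
    return glue(add(block_mult(F[0][0], G[0][0]), block_mult(F[0][1], G[1][0])),
                add(block_mult(F[0][0], G[0][1]), block_mult(F[0][1], G[1][1])),
                add(block_mult(F[1][0], G[0][0]), block_mult(F[1][1], G[1][0])),
                add(block_mult(F[1][0], G[0][1]), block_mult(F[1][1], G[1][1])))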
The second method was proposed by Bahar [2]. Unlike the previous method it only expands
the top variable of the two operands ffx g. In
the process of matrix multiplication, the following variable order
is imposed to decide whether the top variable of f or g has to be selected as the top variable
for expansion. Depending on the character of the expansion variable var one of the following
computations is being made in every recursion step.
This approach only expands internal variables that are actually encountered in the function graphs
f and g. It requires to keep track of missing z variables in f and g since every z expansion step
corresponds to performing matrix addition. If p gives the number of omitted z expansions between
two recursion steps we have to scale the result by 2 p before returning it. When using a cache we
always store the unscaled results and scale the entry accordingly when reading the cache.
Another method was proposed by Clarke [9]. Its basic idea is to take all the products first and
then compute all the sums.
For our matrix package we have implemented the second method which appears to be superior
to the other two [2]. We implemented two different versions of this method. Version 1 passes the
value of the edge weights down with every recursion step of matrix multiply and is of O(kfk \Delta kgk)
complexity. As we have done for multiplication of two FEVBDDs we suggest a second version for
function graphs with a high ratio of reconverging branches (e.g. for recursively-affine matrices) as
follows.
[f
The operations rowadd and coladd which generate matrices such that
a
i a 0i
a
i a in
only have complexity O(jf j). This second version requires only O(jf j \Delta jgj) calls to matrix multiply
but every recursive call to matrix multiply also requires three calls to matrix add. It improves
the cache efficiency of matrix multiplication considerably, if both operands are represented by
FEVBDDs with a high ratio of reconverging branches. This outweighs the added overhead of three
calls to matrix add. If this is not the case, it is better to use the original approach since it does not
require the additional overhead.
Matrix transposition is performed by exchanging the roles of column and row designators
belonging to the same expansion level. To maintain the imposed variable ordering the nodes in
the function graph have to be exchanged and it is not sufficient to just interpret row as column
designators and vice versa. Transposition can be done in O(jf
Matrix inversion is done by performing Gaussian elimination on the original matrix and the
identity matrix at the same time. In other words we solve the system of linear equations A ?
with the use of pivoting and row transformations. The steps required by Gaussian elimination
consist of [19]:
ffl selecting a partial pivot in every step j such that ja pj
ffl normalizing the selected row by multiplying the row by the inverse of the pivot 1
ffl swapping rows j and p according to above pivot selection
subtracting multiples of the pivot row j from all rows i ? j such that a
All of the above operations except for row swapping can be implemented efficiently in time O(jAj)
or O(jAj \Delta jRj) where R denotes the FEVBDD representing the pivot row. Row swapping is
performed by matrix multiplication of matrix A with a permutation matrix P and therefore is of
complexity of a matrix multiplication. Permutation matrices can be obtained by P_ij = I + M_ij, where P_ij denotes a permutation matrix swapping rows i and j, I represents the identity matrix and M_ij designates a matrix with entries m_ii = m_jj = -1, m_ij = m_ji = 1 and m_rs = 0 otherwise.
In general, partial pivoting is done in order to improve the numerical accuracy of Gaussian elimi-
nation. Since our implementation relies on fractions of arbitrary precision we always use the exact
values and numerical stability is not an issue. In order to avoid unnecessary row swapping we only
perform the partial pivoting if it holds in step j that a_jj = 0.
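The exact arithmetic argument can be illustrated independently of the graph representation; the sketch below (our own, hypothetical names, and it assumes the matrix is invertible) performs the same pivot/normalize/eliminate steps on plain lists of exact fractions, which is what the FEVBDD implementation carries out on the function-graph representation instead.

from fractions import Fraction

# Exact matrix inversion by Gauss-Jordan elimination on lists of Fractions.
def invert(a):
    n = len(a)
    m = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(a)]
    for j in range(n):
        if m[j][j] == 0:                           # pivot only when necessary
            p = next(i for i in range(j + 1, n) if m[i][j] != 0)
            m[j], m[p] = m[p], m[j]
        piv = m[j][j]
        m[j] = [x / piv for x in m[j]]             # normalize the pivot row
        for i in range(n):
            if i != j:
                factor = m[i][j]
                if factor:
                    m[i] = [x - factor * y for x, y in zip(m[i], m[j])]
    return [row[n:] for row in m]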
In addition to the basic matrix operations, fast search operations for specific matrix elements
have been implemented. Algorithms for searching both the value and position of the minimal,
maximal or absolute maximal element in a given matrix were developed. This approach makes use
of the min and max fields that can be associated with every node. The computational complexity
for finding both the value and position of the minimal or maximal element in a n \Theta n matrix is
O(log 2 (dne)). We will now explain the basic idea behind the algorithms in the case of searching
for the maximal element. Given a FEVBDD node f and its two successors f t
and f e
we can easily
determine which edge leads to the maximal element. Based on the values of the max and min
fields of f t
and f e
we simply recompute the max field of f and select the successor that originally
generated the max field of f. If only the value of the maximal or minimal element is of interest,
it can be computed directly from the min and max field of the top node f without any further
computation.
value range    EVBDD       FEVBDD (GCD)    FEVBDD (RATIONAL)
integer        12 bytes    20 bytes        24 bytes
fractions
Table 5: Memory requirement per node
4.3 Experimental Results
We have applied our FEVBDD based matrix package to the problem of solving the Chapman-Kolmogorov
equations [18] that arise when computing the global state probabilities of FSMs.
Though the memory consumption of our inversion routine is relatively low (8M for inverting a
64x64 matrix), the run time is very high. This is due to several factors. First, the algorithm for Gaussian
elimination is purely sequential whereas FEVBDDs are recursively defined. Consequently,
computation caching for matrix inversion does not exist. A recursive algorithm for matrix inversion
will perform much better on FEVBDDs. Secondly, when using fractions of arbitrary length all
operations need substantially more time than is necessary for ordinary integers. We
therefore use the obtained inverses primarily as examples of real life non-sparse matrices that can
be represented compactly using FEVBDDs and compare them with their EVBDD representations.
As can be seen from the table below using FEVBDDs gives savings of up to 50% compared to
EVBDDs in the number of nodes required to represent the non-sparse inverse. Of course, one has
to consider that the storage requirement per node is higher for FEVBDDs than for EVBDDs. An
overview of the memory usage per node in the various modes available for EVBDDs and FEVB-
DDs is given in table 5. We assume that every EVBDD node consists of an integer or fractional
edge value and two pointers to the children. Every FEVBDD node consists of two fractional edge
weights and two pointers in the RATIONAL mode or three integer edge weights and two pointers
in the GCD mode.
The total memory consumption for storing the matrices using EVBDDs and FEVBDDs is shown
in tables 6 and 7, respectively. The given memory usage is based on EVBDDs and FEVBDDs using
fractions. The FEVBDDs have been generated using the RATIONAL rule. In the case of CK-
Equations we have to use fractions for the edge weights since the matrix elements are fractions. As
can be seen from the tables, FEVBDDs do better for the inverses but lose for the original matrices
in terms of total memory consumption. This is due to the fact that the original matrices are sparse
whereas the inverses are non-sparse. In the case of sparse matrices the additional properties of
FEVBDDs are not exploited so that EVBDDs and FEVBDDs perform similarly in the number of
nodes. FEVBDDs, however, lose in terms of memory requirement because of the higher cost per
FEVBDD node. Since EVBDDs do at least as good as MTBDDs this also gives an idea of the
performance of FEVBDDs compared to MTBDDs.
5 Conclusion
We showed that by associating both an additive and a multiplicative weight with the edges of
an Edge-Valued Binary Decision Diagram, EVBDDs could successfully be extended to Factored
Edge-Valued Binary Decision Diagrams. The new data structure preserves the canonical property
of the EVBDD and allows efficient caching of operational results. All properties that have been
defined for EVBDDs could be adapted to FEVBDDs. The additive property was extended to the
affine property. It was shown that FEVBDDs provide a more compact representation of arithmetic
functions than EVBDDs. Additionally, the complexity of certain operations could be reduced
significantly. We showed that FEVBDDs representing boolean functions allow us to incorporate
the concept of complement edges that has originally been proposed for OBDDs. Furthermore,
we showed that the EVBDD based Integer Linear Programming solver FGILP benefits from using
FEVBDDs instead of EVBDDs.
In combination with the FEVBDD package we also implemented an arithmetic package which
supplies arithmetic operations on both integers and fractions of arbitrary precision. A complete
matrix package based on FEVBDDs was introduced. We applied the package to solving the
Chapman-Kolmogorov equations. The experimental results show that in the majority of cases
FEVBDDs win over the corresponding EVBDD representation of the matrices in terms of number
of nodes and memory consumption.
Acknowledgement
The authors would like to thank Y.-T. Lai for supplying them with the EVBDD package and for many helpful
discussions.
--R
"Binary decision diagrams,"
"Al- gebraic Decision Diagrams and their Applications"
"On the relation between BDDs and FDDs"
"Efficient Implementation of a BDD Package,"
"Graph-Based Algorithms for Boolean Function Manipulation,"
"Symbolic Boolean Manipulation with Ordered Binary-Decision Diagrams,"
"Verification of Arithmetic Functions with Binary Moment Diagrams,"
"Spectral transforms for large Boolean functions with application to technology mapping"
"Multi-terminal binary decision diagrams: an efficient data structure for matrix representation,"
"Efficient Representation and Manipulation of Switching Functions Based on Ordered Kronecker Functional Decisiond Diagrams"
"The Art of Computer Programming Volume 2: Seminumerical Algorithms"
"Edge-valued binary decision diagrams for multi-level hierarchical verification"
Vrudhula, "EVBDD-based algorithms for integer linear programming, spectral transformation and function decomposition"
"Edge-valued binary decision diagrams"
Vrudhula, "FGILP: An integer linear program solver based on function graphs"
"Representation of switching circuits by binary-decision-programs"
"Fast Transforms. Algorithms, Analyses, Applications"
"A First Course in Probability"
"Introduction to Numerical Analysis"
"Factored Edge-Valued Binary Decision Diagrams and their Application to Matrix Representation and Manipulation"
--TR
Graph-based algorithms for Boolean function manipulation
Efficient implementation of a BDD package
Symbolic Boolean manipulation with ordered binary-decision diagrams
Edge-valued binary decision diagrams for multi-level hierarchical verification
Spectral transforms for large boolean functions with applications to technology mapping
Algebraic decision diagrams and their applications
The art of computer programming, volume 2 (3rd ed.)
Fast Transforms
Formal Verification Using Edge-Valued Binary Decision Diagrams
Verification of Arithmetic Functions with Binary Moment Diagrams
--CTR
Rolf Drechsler , Bernd Becker , Stefan Ruppertz, K*BMDs: A New Data Structure for Verification, Proceedings of the 1996 European conference on Design and Test, p.2, March 11-14, 1996
Rolf Drechsler , Wolfgang Gnther , Stefan Hreth, Minimization of word-level decision diagrams, Integration, the VLSI Journal, v.33 n.1, p.39-70, December 2002 | logic verification;Ordered Binary Decision Diagrams;integer linear programming;pseudo-boolean functions;matrix operations;affine property |
607573 | Hierarchical Reachability Graph Generation for Petri Nets. | Reachability analysis is the most general approach to the analysis of Petri nets. Due to the well-known problem of state-space explosion, generation of the reachability set and reachability graph with the known approaches often becomes intractable even for moderately sized nets. This paper presents a new method to generate and represent the reachability set and reachability graph of large Petri nets in a compositional and hierarchical way. The representation is related to previously known Kronecker-based representations, and contains the complete information about reachable markings and possible transitions. Consequently, all properties that it is possible for the reachability graph to decide can be decided using the Kronecker representation. The central idea of the new technique is a divide and conquer approach. Based on net-level results, nets are decomposed, and reachability graphs for parts are generated and combined. The whole approach can be realized in a completely automated way and has been integrated in a Petri net-based analysis tool. | Introduction
Petri Nets (PNs) are an established formalism to describe and analyze dynamic systems. Among the
large number of available analysis techniques, the generation of the set of all reachable markings and all
possible transitions is the most general approach, which is theoretically applicable for every bounded net.
The resulting graph is denoted as the reachability graph (RG) or occurrence graph. The set of reachable
markings is denoted as the reachability set (RS). Reachable markings of the PN build the vertices of
the graph and transitions describe the edges. Edges may be labeled with the corresponding transition
identifier from the PN description. The RG contains the full information about the dynamic behavior
of the PN and can be easily analyzed to gain results about the functional behavior as required for the
verification of system properties. RGs are generated by an algorithm computing all successor markings
for discovered markings, starting with the initial marking of the net. This approach is conceptually simple
and is integrated in most software tools developed for the analysis of PNs. In practice, unfortunately, the
size of RGs often grows exponentially with the size of the PN in terms of places and tokens. Hence RG
generation is usable only for relatively small nets, much smaller than most practically relevant examples
are.
Consequently, a large number of approaches has been published to increase the size of RGs which can
be handled. A straightforward idea is to increase the available computing power and memory to increase
the size of RGs. This is done by using powerful parallel or distributed computer architectures. Examples
for this approach can be found in [1, 9] describing implementations on various parallel architectures and
[14, 25], where workstation clusters are used for RG generation. These approaches describe RG exploration
for Generalized Stochastic Petri Nets (GSPNs), however, they apply for RG exploration of PNs
as well. The general problem of parallel/distributed state space generation is still, that an exponentially
growing problem is attacked by increasing the available resources at most linearly. Additionally, the
problem of an efficient parallelization of RG generation arises. Efficient realization of the RG generation
algorithm in a distributed way is non-trivial since the different distributed tasks are dependent and require
synchronization introducing additional overhead. In particular, the speedup that can be reached by
a parallel implementation is model dependent which makes the problem of an efficient general purpose
realization of parallel RG generation even harder.
An alternative to handle large RGs is to reduce their size without losing relevant information. This
idea can be exploited at two different levels. First, the net can be simplified by reducing the number
of places and transitions. The corresponding approaches are denoted as reduction rules, published for
uncolored PNs in [4] and subsequently for colored PNs (CPNs) in [16]. Reduction rules are defined with
respect to the properties of interest. Thus, first properties need to be defined and then reduction rules
which preserve these properties can be introduced, which yields a set of predefined rules for a set of
predefined properties as in [4, 16]. The main drawback of reduction rules is that their applicability is
restricted to relatively specific structures. Consequently, the gain obtained by reduction rules is for most
nets relatively small and reduction rules can most times only be used as an a priori step which does not
solve the problem of large RGs. The second approach to reduce the size of RGs is to perform the reduction
at the level of reachable markings. Such an approach requires a compositional state space generation
such that generation and reduction can be interleaved. Different techniques exploiting this idea exist.
The usual way is to define the complete PN as a collection of interacting components. Usually component
RGs are much smaller than the complete RG. Thus, RGs for the components are generated efficiently
and are reduced according to some reduction rules which preserve relevant properties. Subsequently,
reduced component state spaces are composed. In most approaches in this context, components interact
via synchronized transitions. In [13], an approach for CPNs is introduced where RGs of components
are generated in parallel by considering only local transitions. Additionally, a synchronization graph
describing synchronized transitions is defined. By interleaving local and synchronized transition firing
the complete RG can be generated or properties holding on the complete RG can be proved. Similarly
in [29], complete component RGs are generated first, which are finally combined and reduced such that
important properties like deadlocks or boundedness are preserved. In [34], a compositional analysis
method for place-bordered subnets is presented. It is also based on the interleaving of composition and
behavior preserving reduction. In [23], a different approach for components composed via synchronized
transitions is proposed. The approach introduces a compact representation of the complete RG and
an efficient way to characterize RS. The idea is that the incidence matrix characterizing RG can be
composed via Kronecker operations from incidence matrices of component RGs and RS is a subset of
the cross product of component reachability sets. Knowing RS and the RGs of the components, RS
and RG of the CPN are completely characterized. In [5], an approach for hierarchical RG generation is
proposed for hierarchically structured CPNs. Similar to the previous approach the RG is described using
component RGs by composing incidence matrices via Kronecker operations. The approach requires that
the complete net is structured in an appropriate way. The disadvantage of all these methods for efficient
RG generation is that the component structure has to be defined by the modeler and all methods are very sensitive to the component structure. Techniques which reduce the size of RG by behavior preserving reduction depend on the required results. If relatively detailed results are required, most reductions fail
or have only a small effect on the size of RG.
Other methods for efficient RG computation include the stubborn set method [35], which eliminates
unnecessary interleavings from RG during generation, and the exploitation of symmetries to reduce RG
[10, 22]. In both methods some additional computation is necessary during RG generation and the CPN
has to observe several structural conditions that the methods can be used in an efficient way (i.e., to
exploit symmetries, the CPN has to contain symmetric parts, otherwise the reduction has no effect). The
idea of symmetry exploitation combined with a compact representation of RG by composing component
RGs is described in [17] for quantitative analysis. Techniques based on ordered binary decision diagrams
(OBDDs) rely on symmetries as well, in [30] Pastor et. al. describe OBDD-algorithms mainly for 1-
bounded PNs.
Apart from techniques to characterize the complete or reduced RG in an efficient and compact way,
several approaches to derive results without generating RS and RG exist. Usually these approaches yield
only partial results in the sense that we can not formally prove results, we can only disprove some by
finding failure states. These techniques include simulation and invariant analysis [22].
In this paper, we introduce an approach which is related to the work presented in [23] and [5]. RS and
RG are handled in a compositional way, which allows the representation and generation of large RSs/RGs.
In contrast to other known methods performing compositional analysis, our approach represents the
complete RG. Behavior preserving reduction is not applied. Consequently, arbitrary properties can be checked on the resulting RG. However, it is also possible to combine the approach with behavior
preserving reduction, although this is not considered in this paper. The proposed technique can be
completely automated for a large class of PNs including all PNs which are covered by P-invariants. We
present the approach here for uncolored PNs to simplify notation. Keeping in mind that every CPN with
finite color sets can be unfolded to an uncolored PN [22], it is obvious that the approach can be applied for
a large class of CPNs too.
The structure of the paper is as follows. In Sect. 2, the PN class is defined, reachability and invariant
analysis are introduced. Sect 3 describes the definition of regions which divide a PN into subnets. In Sect.
4 an abstraction operator is described which allows us to abstract from details in the net description to
reduce the size of RS. Afterwards, Sect. 5, introduces a hierarchical and compositional representation of
RS and RG. Then, different analysis approaches are proposed which exploit the hierarchical representation
of RS and RG. Sect. 7 contains a non-trivial example to clarify the advantages of the new approach
compared to conventional RG generation.
Basic Definitions and Known Results
We assume that the reader is familiar with PNs and the related basic concepts. For details about these
fundamentals we refer to [21, 22, 28].
Definition 1 A Petri net is a 5-tuple PN = (P, T, I^-, I^+, M_0) where
• P is a finite and non-empty set of places,
• T is a finite and non-empty set of transitions with P ∩ T = ∅,
• I^-, I^+ : P × T → IN are the backward and forward incidence functions, and
• M_0 : P → IN is the initial marking.
The initial marking is a special case of a marking. A marking M can be interpreted as an integer (row) vector which includes per place p one element which describes the number of tokens on place p.
•t = {p ∈ P | I^-(p, t) > 0} gives the set of input places for a transition t, and t• = {p ∈ P | I^+(p, t) > 0} gives the set of output places. Analogously we define •p = {t ∈ T | I^+(p, t) > 0} and p• = {t ∈ T | I^-(p, t) > 0}.
The notion can directly be extended to sets. In the sequel we consider connected nets, i.e. each place,
transition has at least one incoming and one outgoing arc. Transition t ∈ T is enabled in marking M iff M(p) ≥ I^-(p, t) for all p ∈ P. A transition enabled in M can fire, changing the marking of any p ∈ P to marking M' with M'(p) = M(p) - I^-(p, t) + I^+(p, t). This will be indicated by M[t>M', M[t> denotes that t is enabled in M and M[> describes the set of enabled transitions in M. Considering firing sequences yields the definition of the reachability set RS(PN) = {M | M_0[σ>M for some firing sequence σ ∈ T*}, which is the set of all reachable markings for PN. The reachability graph RG(PN) contains nodes for every M ∈ RS(PN) and an arc from M to M' whenever M[t>M' for some t ∈ T. If necessary, arcs can
be labeled with the corresponding transition and/or a transition rate as for stochastic Petri nets (SPNs).
SPNs [27] extend the above class slightly by the association of exponentially distributed firing times with
transitions. We define a function W : T × IN^|P| → IR+, where IR+ is the set of non-negative real numbers.
W (t; M) is the rate of an exponential distribution associated with transition t in marking M . We assume
that W (t; M) ? 0:0 if t is enabled in M . The RG of a SPN results from the RG of the corresponding PN
by adding transition rates to the edges. RS is identical in both cases. SPNs can be used for performance
analysis by analyzing the continuous time Markov chain described by the SPN [27].
The incidence matrix C is a matrix which contains for each place p 2 P a row and for each transition
a column such that C(p, t) = I^+(p, t) - I^-(p, t). It can be used to define net-level properties of a
net PN .
Definition 2 A vector x ≥ 0 with x ≠ 0 and xC = 0 is a P-invariant. PN is covered by positive P-invariants, if for each place p ∈ P a P-invariant x ≥ 0 with x(p) > 0 exists. A vector y ≥ 0 with y ≠ 0 and Cy^T = 0 is a T-invariant. PN is covered by positive T-invariants, if for each transition t ∈ T a T-invariant y ≥ 0 with y(t) > 0 exists.
An algorithm for computation of invariants is given in [26], although its time complexity is exponential
for a worst case, usually invariant computation is much easier than generation of RS and RG. Incidence
matrix and invariants ensure certain properties, however they do not completely characterize RS. The
following theorem summarizes some classical results.
Theorem 1 For a PN with a set of P -invariants X and a set of T -invariants Y the following results
hold.
• If marking M' is reachable from marking M, then an integer vector z exists such that M'^T = M^T + Cz^T. This implies that for every M ∈ RS(PN) an integer vector z_M with M^T = M_0^T + Cz_M^T exists.
• If x, x' ∈ X, then x + x' is a P-invariant as well. Analogously for Y.
• For each reachable marking M the relation Mx^T = M_0 x^T has to hold for all x ∈ X.
• If PN is covered by positive P-invariants, then it is bounded.
• If PN is bounded and live, then it is covered by positive T-invariants.
Proof: Proofs can be found in standard books on PNs. ffi
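For illustration (a toy net of our own, not the model of Fig. 1): consider a net with places p1, p2 and transitions t1, t2, where t1 moves a token from p1 to p2 and t2 moves it back. Its incidence matrix is C = [[-1, 1], [1, -1]] (rows p1, p2; columns t1, t2). The vector x = (1, 1) satisfies xC = 0 and is a P-invariant, reflecting that M(p1) + M(p2) is constant in every reachable marking; the vector y = (1, 1) satisfies Cy^T = 0 and is a T-invariant, since firing t1 followed by t2 reproduces the marking.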
Although invariants offer some insight in the dynamic behavior of the modeled system, they are most
times not sufficient to obtain the required results. Thus, RS and RG have to be generated for a detailed
analysis. Usually, first RS is generated and the arcs of RG are computed in a second step. The following
algorithm computes RS for a PN , it terminates if RS contains a finite number of markings 1 .
generate_RS (PN)
RS := {M_0}; U := {M_0};
while (U ≠ ∅) do
remove M from U;
for all t ∈ M[> do
compute M' with M'(p) = M(p) - I^-(p, t) + I^+(p, t) for all p ∈ P;
if M' ∉ RS then RS := RS ∪ {M'}; U := U ∪ {M'}; fi
od
od
Set U contains markings for which successors have not been generated, whereas RS contains all
generated markings. For U a simple data structure like a queue or stack is sufficient since elements
only have to be added and removed. For RS a data structure allowing an efficient membership test is
necessary. Consequently, RS can be realized using an appropriate hash function or a tree like structure
allowing a membership test with an effort logarithmic in the number of elements. The problem with
hashing are possible collisions. It is usually very hard to avoid collisions for general PNs. Therefore,
most software tools use binary trees for the generation of RS.
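An illustrative implementation of generate_RS is given below; it is our own sketch, in which a Python set (i.e. a hash table) stands in for the search structure, markings are tuples of token counts, and pre[t]/post[t] are assumed to hold the vectors I^-(·, t) and I^+(·, t).

from collections import deque

def generate_rs(m0, transitions, pre, post):
    rs = {m0}                      # set of generated markings
    rg = []                        # edges (M, t, M') of the reachability graph
    u = deque([m0])                # markings whose successors are still missing
    while u:
        m = u.popleft()
        for t in transitions:
            if all(m[p] >= pre[t][p] for p in range(len(m))):   # t enabled in M
                m2 = tuple(m[p] - pre[t][p] + post[t][p] for p in range(len(m)))
                rg.append((m, t, m2))
                if m2 not in rs:
                    rs.add(m2)
                    u.append(m2)
    return rs, rg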
We briefly analyze the effort required for the generation of RG, when a binary tree is used to store
RS. Let n be the number of markings in RS and let n · d be the number of arcs in RG. Hence the mean
¹ A slightly extended version handles an infinite RS as well; see the coverability graph construction in the PN literature.
number of successors per marking is d. The time required for the generation of RS is in the order of
d · sum_{i=1}^{n} log_2(i), which is approximately d · n · log_2(n). Additionally, memory limitations have
to be taken into account. Even if more sophisticated data structures are used for RS, the number of
markings which can be generated on a standard workstation lies between 150,000 and 1,500,000. For
PNs including a large number of places, the value can be much smaller. In case of certain symmetries,
ordered binary decision diagrams (OBDDs) are able to handle extremely large sizes of RS and RG (see
[8] among others). Pastor et al [30] describe how OBDD-techniques can be applied for PNs. However,
the use of OBDDs requires the existence of symmetries to yield a compact representation.
After RS has been generated, the arcs in RG are generated in a second step. RG can be represented
by a n \Theta n incidence matrix Q. If transition identities and rates are not relevant, Q can be stored as a
Boolean matrix. Otherwise Q has to include the required information.
Autonomous Regions in PNS
In this section, we define parts of a PN which will be latter substituted by a less detailed representation
in an abstraction operation. These parts, which are denoted as regions, have a place border at the input
and a transition output border. This is different from other hierarchical constructs in the PN area [22],
where places or transitions are refined. However, the definition is natural from a behaviorally oriented
point of view (see also [11]), because a region describes a part acting for its own. Communication is
performed by receiving tokens from the environment (place bordered input) and sending tokens to the
environment (transition bordered output).
Definition 3 Let T_r ⊆ T be a set of transitions of PN and P_r = •T_r the set of their input places; let I^-_r, I^+_r and M_0,r be the corresponding functions of PN restricted to P_r and T_r, respectively. PN_r = (P_r, T_r, I^-_r, I^+_r, M_0,r) defines a region iff the input bags of transitions in T_r and of the remaining transitions in T are disjoint, i.e., •(T \ T_r) ∩ P_r = ∅. For a region PN_r, the set of output transitions T_r^out consists of all t ∈ T_r such that I^+(p, t) > 0 for some p ∈ P \ P_r. Analogously the set of input transitions is T_r^in = {t ∈ T \ T_r | I^+(p, t) > 0 for some p ∈ P_r}.
A region describes an autonomous part of a PN, which will be used to define a hierarchical structure.
A region is minimal if it contains no region as a proper subset 2 . The concept is illustrated by the following
example, which will serve as a running example to accompany the line of argumentation.
Example 1 We consider a producer/consumer model where a producer A successively fills two buffers B1
and B2. Fig. 1 shows the corresponding PN, where places {p1, p2, p3} describe the state of the producer.
Places {p5, p7} are buffer places, whose capacity is limited by places {p4, p6}. Buffer B2 is always filled
with two items/tokens at once, while B1 obtains single tokens. The model contains two consumers of
equal behavior. A consumer non-deterministically takes tokens from each buffer, but the first buffer is
only considered if two consumers are willing to consume. Places {p8, p9, p10} give the state of both
consumers. The model is clearly artificial and just intended to illustrate our concepts. Minimal regions
in this model are shown in Fig. 1 by shaded polygons.
Proposition 1 For a PN with regions N_r1, N_r2, ... the following holds:
1. minimal regions are disjoint, i.e., if N_r1, N_r2 are minimal and N_r1 ≠ N_r2 then P_r1 ∩ P_r2 = ∅ and T_r1 ∩ T_r2 = ∅,
2. minimal regions define a partition, i.e., each transition belongs to exactly one minimal region N_ri (with t ∈ T_ri),
3. regions are closed under union, i.e., if N_r1, N_r2 are regions then T_r1 ∪ T_r2 defines a subnet N_r which
is a region.
Minimal regions coincide with the equivalence classes induced by the conflict relation [33].
Figure 1: Producer/Consumer Model and its partition into minimal regions
Proof. Straightforward for nets where each place has at least one outgoing arc. If this is not the case,
one needs to define an additional region, which consists of the places with an empty set of output transitions.
Minimal regions can be generated using the simple algorithm shown below.
i := 0; UT := T ;
while UT ≠ ∅ do
i := i + 1; remove t_0 from UT ; T_i := {t_0};
while ∃ t ∈ UT with •t ∩ •T_i ≠ ∅ do
remove t with •t ∩ •T_i ≠ ∅ from UT ; T_i := T_i ∪ {t};
od
od
Once the algorithm has terminated, each set T_i contains transitions which are used to define a region according
to Def. 3 (i.e., (•T_i, T_i) defines a minimal region).
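For illustration, a small Python sketch of this partitioning step is given below; it assumes the net is available as a mapping pre that assigns to each transition its set of input places (an assumption for the example, not part of the paper's tool).

def minimal_regions(transitions, pre):
    # pre[t] is the set of input places of transition t
    UT = set(transitions)
    regions = []
    while UT:
        t0 = UT.pop()
        T_i, P_i = {t0}, set(pre[t0])
        changed = True
        while changed:
            changed = False
            for t in list(UT):
                if pre[t] & P_i:          # shares an input place with the region built so far
                    UT.discard(t)
                    T_i.add(t)
                    P_i |= pre[t]
                    changed = True
        regions.append((P_i, T_i))        # (input places, transitions) of a minimal region
    return regions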
4 Generation of Abstract Views
Let PN be a Petri net with n = |P| places and m = |T| transitions. We want to enhance the information associated with a place by the following kind of
vector:
Definition 4 A p-vector v_p for a place p ∈ P is a vector v_p ∈ Z^{n+m} with index set P ∪ T.
Entries in v_p are referenced by v_p(x) for x ∈ P ∪ T; places
obtain the lower index values.
An aggregation function AG : Z^{n+m} × N^n → N for p-vectors and markings is defined as AG(v_p, M) = Σ_{q ∈ P} v_p(q) · M(q).
A linear combination LC : Z^{n+m} × Z^{n+m} × T → Z^{n+m} of p-vectors v_a, v_b is defined for a t ∈ T with v_a(t) · v_b(t) < 0 as
v_c = LC(v_a, v_b, t) = (c_a · v_a + c_b · v_b) / gcd(c_a · v_a + c_b · v_b), with c_a = lcm(|v_a(t)|, |v_b(t)|)/|v_a(t)| and c_b = lcm(|v_a(t)|, |v_b(t)|)/|v_b(t)|, where
lcm gives the least common multiple of two integers and gcd of an integer vector is the greatest common
divisor of its elements. Note that v_c(t) = 0.
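The following Python sketch illustrates AG and LC on p-vectors stored as dictionaries indexed by place and transition names; it is a toy rendering under the assumptions stated above, including the gcd normalization.

from math import gcd
from functools import reduce

def AG(v_p, M, places):
    # aggregate a marking M (dict place -> tokens) according to p-vector v_p
    return sum(v_p.get(q, 0) * M[q] for q in places)

def LC(v_a, v_b, t):
    # linear combination that cancels the entry of transition t
    la, lb = abs(v_a[t]), abs(v_b[t])
    l = la * lb // gcd(la, lb)                  # lcm(|v_a(t)|, |v_b(t)|)
    c_a, c_b = l // la, l // lb
    keys = set(v_a) | set(v_b)
    v_c = {x: c_a * v_a.get(x, 0) + c_b * v_b.get(x, 0) for x in keys}
    g = reduce(gcd, (abs(z) for z in v_c.values() if z != 0), 0)
    if g > 1:
        v_c = {x: z // g for x, z in v_c.items()}
    return v_c                                   # v_c[t] == 0 when v_a(t), v_b(t) have opposite signs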
We inductively define extended nets which result from a sequence of net transformations based on
linear combinations:
Definition 5 Let s = t_1 t_2 ... t_k be a sequence of transitions of a PN, and let ε denote the empty sequence. An
extended net is a tuple (N_s, V_s, A_s) inductively defined as follows: (N_ε, V_ε, A_ε) is an extended net where
N_ε = PN and V_ε = A_ε contains for each place p ∈ P its unit p-vector.
Let for better readability New be an abbreviation for the resulting
new vectors of a linear combination w.r.t. transition t, i.e., New = {LC(v_a, v_b, t) | v_a, v_b ∈ A_s with v_a(t) · v_b(t) < 0}, and let Used = {v_a, v_b ∈ A_s used to generate New} denote the vectors used to generate New for an extended net (N_s, V_s, A_s).
(N_st, V_st, A_st) is an extended net if (N_s, V_s, A_s) is an extended net, s ∈ (T \ {t})*, and N_st results from N_s by adding a new place p_v for each vector v ∈ New; the arc weights I^-_st(p_v, ·), I^+_st(p_v, ·) of such a new place are the correspondingly weighted arcs of the original net, its initial marking is M_{0,st}(p_v) = AG(v, M_0), and V_st = V_s ∪ New, A_st = (A_s \ Used) ∪ New.
Note that c_a, c_b ≥ 0 by construction.
We additionally distinguish ordinary places P^ord_s = P_ε from those generated in the extension sequence,
denoted by P^agg_s.
The definition separates available vectors A_s from the total set of vectors V_s in order to ensure that
vectors are used in at most one step of a sequence s. This restriction is made in order to focus on those
linear combinations which are relevant in the following. The net transformation basically mimics the
computation of P-invariants according to [26]. An extended net (N_s, V_s, A_s) where s contains all transitions t ∈ T
exactly once describes an extended net where each P-invariant is realized by a place p ∈ P^agg. For
an aggregated place p representing a P-invariant, the marking is constant (i.e., I^-(p, t) = I^+(p, t) for
all t ∈ T). We can interpret the marking of an aggregated place as a macro marking which includes
an abstract view of the detailed marking. Since the complete net does not exchange tokens with the
environment, macro markings representing P-invariants are invariant. However, if sequence s contains
only a subset of transitions, then the marking of an aggregated place represents possibly only a macro
marking for a subset of places belonging to a P -invariant. In this case, the marking of the aggregated
place changes whenever tokens are added to or removed from the partial P-invariant it represents. Since
the net transformation follows the computation of P-invariants, the effort is limited to the effort for
computing P-invariants. Often the effort is much smaller, since only a subset of transitions is used in s.
Example 2 Before we go into further details we come back to our running example: it contains P-invariants
(described as formal sums) and a T-invariant. Fig. 2 shows the extended net for the sequence
s = t1 t6 t7; new places are hatched and arcs are dotted to indicate the differences from N_ε. The minimal
regions which are connected via t1, t6, and t7 have been merged; shaded polygons denote the new and
larger regions in N_s. p-vectors and corresponding linear combinations are given in the tabular below:
Figure 2: Extended net
Note that the definition of the extended net defines arc weights for new arcs connected to new places as
weighted arcs of the original net. This allows us to consider bidirectional arcs (self-loops) appropriately, as
the arcs connected to places p12 and p13 in Fig. 2 illustrate.
A place p ∈ P^agg represents a set of places in P^ord, and AG(v_p, M) gives an
aggregated marking for the marking of this set. Aggregating information is the crucial point in deducing
a hierarchy. Before we describe a way to split an extended net into a high level net and a set of low level
nets to obtain the desired hierarchy, we formalize the aggregation and subsequently consider reachability
and language invariance of the net extension.
Lemma 1 Let (N_s, V_s, A_s) be an extended net. Then for every reachable marking M ∈ RS(PN_s) and every aggregated place p ∈ P^agg_s with p-vector v_p:
M(p) = AG(v_p, M|_{P_ε}) = Σ_{z ∈ P_ε} v_p(z) · M(z).
Proof. We consider an induction over transition sequences s; initially all places p ∈ P_ε trivially fulfill the lemma
and P^agg_ε = ∅. For the induction step we consider an aggregated place p of N_st whose p-vector v_p = LC(v_a, v_b, t)
results from vectors v_a, v_b that satisfy the lemma at step s (the case p ∈ P_s is trivial in N_st since for these places M_0, I^- and
I^+ remain unchanged). We further consider an induction over firing sequences σ; initially M_0 fulfills the claim
by definition. For the induction step, we consider a marking M reached via σ, where the
induction assumption ensures M(p) = Σ_{z ∈ P_ε} v_p(z) · M(z);
we have to show M'(p) = Σ_{z ∈ P_ε} v_p(z) · M'(z) for the marking M' reached
after extending σ by a transition τ.
M'(p) = M(p) − I^-_st(p, τ) + I^+_st(p, τ) according to the definition of successor marking and extended net. By the induction assumption
we can replace M(p) in the equation above. Observe that I^- and I^+ remain invariant for z ∈ P_ε,
such that we obtain:
M'(p) = Σ_{z ∈ P_ε} v_p(z) · (M(z) − I^-_st(z, τ) + I^+_st(z, τ)) = Σ_{z ∈ P_ε} v_p(z) · M'(z).
□
The way in which places are added to a net in an extension sequence ensures that the reachability
set and the language remain the same.
Lemma 2 For all s ∈ T* for which an extended net (N_s, V_s, A_s) exists: RS(PN_s)|_{P_ε} = RS(PN_ε) and L(PN_s) = L(PN_ε).
Proof. By induction over sequences s; the base case is trivially fulfilled. For the induction step we start
with the special case New = ∅, which directly implies equality. This case can occur, e.g., if A_s = ∅
or if no suitable pair v_a, v_b ∈ A_s exists. For the general case we give a proof by contradiction:
case RS(PN_st)|_{P_s} ⊃ RS(PN_s):
M_{0,st} [σ⟩ M_st but σ not possible in PN_s. Hence there is a t' ∈ σ which is not enabled in PN_s, i.e.,
there are fewer tokens in some place p ∈ P_s. Contradiction to the definition of extended nets, because M_0, I^-
change only with respect to new places.
case RS(PN_st)|_{P_s} ⊂ RS(PN_s):
M_{0,s} [σ⟩ M_s but σ not possible in PN_st. Hence there is a t' ∈ σ which is not enabled in PN_st, i.e.,
there are fewer tokens in some place p ∈ P_st \ P_s. According to Lemma 1, the marking of p is determined by the markings of the places z ∈ P_ε via v_p, and the arc weights of p are the correspondingly weighted arcs of these places z; hence we obtain a contradiction.
In summary equality holds. Equivalence of languages follows by the same line of argumentation. □
A direct consequence of Lemma 2 is that invariants remain valid, T-invariants due to language equivalence
and P-invariants due to additivity of invariants, cf. Theorem 1. Furthermore, we can decide for
places like p13 in our example, where I^-(p13, t) > 0 for some transitions t, whether a given initial
marking M_0(p13) ensures that such a transition is dead due to M_0, or whether the place
can safely be omitted since M(p13) ≥ I^-(p13, t) for all M ∈ RS. If the former is the case, it is clear that
the net is not live.
So far we have described a way to add places to a net without changing its reachability set or
language. The notion of an extended net is only a formal prop to introduce a hierarchical net; it simplifies
argumentation why a hierarchical net indeed includes the reachability set or language of its N ffl . The key
issue for a hierarchy is abstraction: at a higher level the state of a subsystem must be represented in less
detail than at a lower level. We use aggregated places to obtain an aggregated state representation and
the notion of subsystem is built on the concept of region.
Let R(N_ε) denote the set of minimal regions w.r.t. the extended net (N_ε, V_ε, A_ε). When we extend
this net for a transition t ∈ T^out_r of a region N_r ∈ R(N_ε), then the new places connect N_r with the regions
that contain places of t's postset. Consequently we merge all these regions with N_r, which yields a new region N'_r according
to Prop. 1. Since we start from a partition into regions, the resulting set of regions is a partition again,
but this partition is less fine. t becomes internal in N'_r, because the pre- and postset of t are contained in N'_r, and the new places give an
aggregated description of the internal behavior of N'_r w.r.t. transition t. Following this procedure over a
sequence s of transitions yields (N_s, V_s, A_s) and a partition into regions, where some regions have internal
transitions and aggregated places. In this situation a decomposition of the extended net into a high level
net using the aggregated description and a set of low level nets resulting from regions with an internal
behavior gives the two-level hierarchy we aim for.
More formally, a high level net for a given extended net (N_s, V_s, A_s) results from a projection with
respect to A_s.
Definition 6 Let (N_s, V_s, A_s) be an extended net; its corresponding high level net is HN = (P^H, T^H, I^{H-}, I^{H+}, M^H_0), where P^H = {p ∈ P_s | v_p ∈ A_s}, T^H contains the transitions of T that are not isolated w.r.t. P^H,
and I^{H-}, I^{H+}, M^H_0 are the corresponding projections of I^-, I^+, M_0 onto P^H and T^H.
Example 3 Fig. 3 shows the high level net for the extended net of our running example in Fig. 2.
Figure 3: High level net
Lemma 3 For an extended net (N_s, V_s, A_s) and its high level net HN: RS(PN_s)|_{P^H} ⊆ RS(HN) and L(PN_s)|_{T^H} ⊆ L(HN).
Proof. The proof uses the previous lemma about equality for N_ε and N_s, and the fact that HN is deduced by omitting places
(which releases enabling conditions and thus increases RS) and by omitting transitions which are isolated (since
all elements in the pre- and postset of such transitions are used in linear combinations and thus not
contained in A_s anymore). Isolated transitions have no effect on RS. □
P-invariants of HN are linear combinations of P-invariants of N ffl , hence if N ffl is covered by P-invariants,
so is HN . Consequently, we can guarantee finiteness of RS(HN) if N ffl is covered by P-invariants.
Lemma 3 states that the HN indeed constitutes a more abstract net, such that the detailed net can
only behave in a way which is consistent with this abstraction/aggregation. If a region in the extended
net contains all places of the set Used for a transition t, it shows an internal behavior, which allows us to
define a non-trivial isolated low-level net:
Definition 7 A low level net for a region r in an extended net (N_s, V_s, A_s) is LN = (P^L, T^L, I^{L-}, I^{L+}, M^L_0), where P^L consists of the places of r (with respect to N_ε) together with those aggregated places of A_s whose defining vectors were generated from places of r, T^L = T_r, and I^{L-}, I^{L+}, M^L_0 are the corresponding restrictions of I^-_s, I^+_s, M_{0,s}; in particular, I^{L+}(p)(t) = I^{H+}(p)(t) for aggregated places p shared with the HN.
If LN and the corresponding region r^H in HN do not differ in their transitions, LN is trivial and
can be neglected. Otherwise LN is non-trivial.
Observe that both HN and LNs only pick aggregated places from A_s; elements in V_s \ A_s are neglected,
because their information is sufficiently represented by the linear combinations they contributed to.
A LN for a region r and its HN share the places in P^L ∩ P^H and the transitions T^in_r and T^out_r. These common net
elements form an interface for the HN to communicate with the LN in an asynchronous manner. The
HN puts tokens via transitions t ∈ T^in_r - sends signals to the LN - and experiences output
behavior through firing of t ∈ T^out_r. The notion of hierarchy is justified, since the HN abstracts from the
details inside the LN: tokens on the shared places describe a so-called macro marking of the LN, and the transitions in T^out_r
represent the aggregated behaviour of the LN. However, an LN can be merged with the HN to observe
the detailed behavior. Formally we describe this as an extended LN (EN), whose places, transitions, arc weights and initial marking are defined analogously by joining LN and HN. An EN is an ordinary
PN, with RS(EN) and L(EN) defined as before. The relationship between HN and LN is not symmetric,
because the HN has an aggregated description of the LN by the aggregated places of region r, but not vice versa.
As seen from the LN, a HN provides an environment with which the LN interacts in an asynchronous
manner, but the LN lacks the aggregated description of the HN behavior. Hence the reachability of a
LN cannot be seen independently of a HN, such that the reachability set RS(LN) of the LN for a given
environment HN needs the notion of EN and results from the projection RS(EN)|_LN. In the following
section we will use these concepts to give a hierarchical/compositional representation of RS(N) based on
RS(HN) and RS(j)|_j for each LN j.
Example 4 Figs. 4 and 5 show the extended low level nets for the two non-trivial low level nets - indicated
by shaded polygons - of our running example. They result from the producer part, where transition t1
becomes internal, and from the consumer part, where transitions t6 and t7 become internal. For the
consumer region, only p13 becomes part of A_{t1t6t7}, because v12 has been used for its construction. The
region with transition t3 has no internal transition.
Obviously the selection of s has a massive impact on the resulting hierarchical net description. Consideration
of "optimal" sequences is subject to further investigations. At this stage we can only formulate
goals and rules of thumb to follow:
1. it is clear that a non-trivial LN results from merging adjacent regions, i.e., if one transition of a region
is included in s, then the remaining transitions of T^out for that region should become elements
of s as well.
Figure 4: Low level net and extended low level net for the consumer region
Figure 5: Low level net and extended low level net for the producer region
2. deriving a hierarchy is a divide and conquer strategy, so a sequence should yield a set of non-trivial
LNs, such that complexity is equally distributed over this set. This means that only those regions
are merged, whose result will not cover a majority of the net.
3. aggregated places introduce overhead; especially building all linear combinations can impose an
unacceptable increase of net elements. This is the reason for an exponential worst case time
complexity of invariant computation. In our case we have the freedom to select transitions and
to consider only a subset of transitions. Hence those transitions t are preferred where the number
of resulting linear combinations w.r.t. N_s is relatively small.
In our implementation of the proposed approach, we have integrated these heuristic rules to generate
appropriate transition sequences. First experiences with several examples (e.g., the example presented
in Sect. 7) are very encouraging. The program automatically chooses a sequence of transitions which
partitions complex nets into non-trivial parts and a non-trivial HN.
5 Hierarchical Representations of RS and RG
Dividing PN into HN and LNs allows us to generate and represent RS(PN) and RG(PN) in a space and
time efficient way. For notational convenience we assume that PN is decomposed into one HN and J LNs,
which are consecutively numbered 1 through J . Furthermore we assume in the sequel that all reachability
sets for the HN and the extended LNs are finite. Consequently, reachability sets are isomorphic to finite
sets of consecutive integers. Thus let x ∈ {0, ..., n_H − 1}, with n_H = |RS(HN)|, correspond to the x-th
marking in RS(HN), and we use x and M_x interchangeably. We can represent RG(HN) by a n_H × n_H
matrix Q^H with Q^H(x, y) = t if M_x [t⟩ M_y, provided that between any two
markings in RG(HN) at most one transition exists. If more than one transition between M_x and M_y
exists, Q^H(x, y) describes a list of transition indexes. We use for generality the notation t ∈ Q^H(x, y)
for all t that fulfill M_x [t⟩ M_y.
The reachability set RS(j) of a LN j depends on the environment given by the HN. Hence, we consider
the EN e that corresponds to j and define RS(j) as a projection of RS(e) on the places of j. Since any
LN j and the HN share some places, we additionally define ~RS(j) as the projection of RS(j) onto the places
shared with the HN. Markings from ~RS(j) are macro markings and allow us to partition RS(j). Macro markings
are useful for the generation of RS(j), since full details of the HN are irrelevant at the LN level: one can redefine
the transitions of T^in_j such that their firing is marking dependent with respect to RS(HN)|_{P^j ∩ P^H} ⊆ ~RS(j).
So the EN is only a formal prop to obtain a clear notion of RS(j); in practice, however, computation of
RS(j) can be performed more efficiently by using only macro-marking-dependent transitions of T^in_j, since
transitions local to the HN are ignored. The resulting set RS(j) might contain markings which are not in
RS(PN), but these can be eliminated in a subsequent step, cf. Sect. 6.
Let ~n_j = |~RS(j)| and denote by RS(j, ~x) with ~x ∈ {0, ..., ~n_j − 1} the set of markings from RS(j)
which belong to marking ~x in ~RS(j). Markings from a set RS(j, ~x) are indistinguishable in the HN,
i.e., the marking of the places shared with the HN is the same. Since reachability sets are assumed to be finite,
each set RS(j, ~x) can be represented by a set of integers {0, ..., n_j(~x) − 1}. A marking M_x ∈ RS(HN)
uniquely determines the macro markings for all LNs. We denote by x^j the macro marking of LN j
belonging to marking x and obtain M_x restricted to the shared places of LN j as x^j.
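As an illustration of this partition, the following Python sketch (with the assumption that markings are given as dictionaries over place names) groups the markings of RS(j) by their projection onto the places shared with the HN.

from collections import defaultdict

def partition_by_macro_marking(RS_j, shared_places):
    # RS_j: iterable of LN-j markings, each a dict place -> tokens
    # shared_places: the places of LN j that also appear in the HN
    partition = defaultdict(list)
    for M in RS_j:
        macro = tuple(M[p] for p in sorted(shared_places))   # macro marking ~x
        partition[macro].append(M)                           # the set RS(j, ~x)
    return partition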
Markings of PN_s can be characterized using (J+1)-dimensional integer vectors (x_H, x_1, ..., x_J) with
x_j ∈ {0, ..., n_j(x^j_H) − 1} for j ∈ {1, ..., J}. x_H describes a marking from RS(HN) and x_j
a marking from RS(j, x^j_H), where x^j_H describes the macro marking of LN j when the marking of HN
equals x_H. This implies M_{x_H} restricted to the shared places of LN j equals x^j_H. Since the previous relation holds, each integer vector
of the previously introduced form determines a marking of the extended net. We define a hierarchically
generated reachability set
RS_H(PN_s) = {(x_H, x_1, ..., x_J) | x_H ∈ {0, ..., n_H − 1}, x_j ∈ {0, ..., n_j(x^j_H) − 1} for j = 1, ..., J}.
Observe that the number of markings in RS_H(PN_s) equals
Σ_{x_H = 0}^{n_H − 1} Π_{j=1}^{J} n_j(x^j_H). (8)
Lemma 4 The hierarchically generated reachability set and the reachability set are related as follows: RS(PN_s) ⊆ RS_H(PN_s), and hence RS(PN) ⊆ RS_H(PN_s)|_{P_ε}.
Proof. The previous lemmas imply that RS(PN_s)|_{P^j} ⊆ RS(j) for j ∈ {1, ..., J}, and for each M ∈ RS(PN_s) that M|_{P^H} ∈ RS(HN).
By construction of the hierarchically generated reachability set, M ∈ RS_H(PN_s)
follows. The second relation follows from Lemma 2. □
Equation (8) describes a very compact way to represent huge reachability sets by composing a few
smaller sets. Observe that a few reachability sets with some hundreds of markings are enough to describe
sets with several millions or billions of markings. To keep the representation compact, it has to be assured
that reachability sets of the LNs are roughly of the same size. Of course, this is hard to assure a priori,
but it is possible to generate regions in a way that they include a similar number of places and transitions
which is often sufficient to yield reachability sets of a similar size for the different regions. However, the
reachability set of the original net is not equal to RS H (PN s ), it is only included in the hierarchically
generated reachability set. Before we compute RS(PN s ) as part of RS H (PN s ), the reachability graph
is represented in a compact form similarly to the compact representation of the reachability set.
First of all, we define the effect of transitions locally for the LNs. Two different classes of transitions have
to be distinguished with respect to LN j:
LT_j, the set of local transitions in LN j (transitions of LN j that do not appear in the HN);
ST_j, the set of transitions which describe the communication between LN j and the HN.
The effect of transitions at marking level is defined using Boolean matrices. As usual we assume
that multiplication of Boolean values is defined as Boolean and, and summation as Boolean or. Thus let
Q^j_t[~x, ~y] be a n_j(~x) × n_j(~y) matrix describing transitions in the reachability graph of LN j due to firing
transition t: Q^j_t[~x, ~y](x, y) = 1 if transition t is enabled in marking x ∈ RS(j, ~x) and firing of t yields
successor marking y ∈ RS(j, ~y). All remaining elements in the matrices are 0. Since transitions t ∈ LT_j
do not modify the marking of the HN, Q^j_t[~x, ~y] = 0 for ~x ≠ ~y and t ∈ LT_j. Furthermore, we define for
transitions t that do not belong to LN j: Q^j_t[~x, ~y] = I_{n_j(~x)} if ~x = ~y and 0_{n_j(~x), n_j(~y)} otherwise. I_n is a n × n matrix with 1 at
the diagonal and 0 elsewhere. 0_{n,m} is a n × m matrix with all elements equal to 0. The reason for this
definition is that a transition not belonging to LN j does not modify the marking of LN j and cannot be disabled
by LN j. This is exactly described by the matrices I and 0. Define q^j_t(~x, ~y) = 1 if Q^j_t[~x, ~y] contains at least one non-zero element, and q^j_t(~x, ~y) = 0 otherwise.
If q^j_t(~x, ~y) = 1, then t is
enabled by LN j in some marking of RS(j, ~x). For the HN we define q^H_t(x, y) = 1 if t ∈ Q^H(x, y).
In all other cases q^H_t(x, y) = 0.
The matrices describe the effect of transitions with respect to the HN or a single LN. The next step
is to consider the effect of a transition with respect to the global net. Transition t is enabled in marking
(x_H, x_1, ..., x_J) iff there is a y_H such that
q^H_t(x_H, y_H) · Π_{j=1}^{J} ( Σ_{y_j} Q^j_t[x^j_H, y^j_H](x_j, y_j) ) = 1.
It is straightforward to prove this enabling condition. Since q^j_t and Q^j_t refer only to the part a transition belongs to,
enabling depends only on the marking of the parts where the transition belongs to. A transition
is enabled if it is enabled in all parts simultaneously. In a similar way we can characterize transitions
between markings. Transition t is enabled in marking (x_H, x_1, ..., x_J) and its firing yields the successor
marking (y_H, y_1, ..., y_J) iff
q^H_t(x_H, y_H) · Π_{j=1}^{J} Q^j_t[x^j_H, y^j_H](x_j, y_j) = 1.
This relation allows us to characterize the reachability graph completely. To do this in a more elegant
way, we define Kronecker operations for matrices.
Definition 8 The Kronecker product A ⊗ B of a n_A × m_A matrix A and a n_B × m_B matrix B is defined
as the n_A·n_B × m_A·m_B matrix with
(A ⊗ B)((i_A, i_B), (j_A, j_B)) = A(i_A, j_A) · B(i_B, j_B).
The Kronecker sum A ⊕ B is defined for square matrices only as
A ⊕ B = A ⊗ I_{n_B} + I_{n_A} ⊗ B.
The definition of Kronecker sums/products does not fix the data type of the matrix elements.
Indeed all kinds of algebraic rings can be used. In particular we consider here Boolean or real values.
Since the Kronecker product is associative, we can define a generalization for J matrices A_j of dimension
n_j × m_j as
C = ⊗_{j=1}^{J} A_j = A_1 ⊗ A_2 ⊗ ... ⊗ A_J.
In the same way the Kronecker sum can be defined for n_j × n_j matrices
as
D = ⊕_{j=1}^{J} A_j = Σ_{j=1}^{J} I_{l_j} ⊗ A_j ⊗ I_{u_j}, with l_j = Π_{i=1}^{j−1} n_i and u_j = Π_{i=j+1}^{J} n_i.
Observe that C is a matrix with Π_{j=1}^{J} n_j rows and Π_{j=1}^{J} m_j columns. If we consider the number of
non-zero elements in C in terms of the number of non-zero elements of the A_j and denote the number of
non-zero elements in a matrix A as nz(A), then we obtain
nz(C) = Π_{j=1}^{J} nz(A_j).
Kronecker sums and products are a very compact way to represent huge matrices. Implicitly, Kronecker
operations realize a linearization of a J-dimensional number. Row indices of matrix C or D are computed
from the row indices of the matrices A_j using the relation
x = Σ_{j=1}^{J} x_j · Π_{i=j+1}^{J} n_i,
where x is the row index in C or D, x_j is the row index in A_j and n_j is the number of rows of A_j. In
the same way column indices are computed from the relation
y = Σ_{j=1}^{J} y_j · Π_{i=j+1}^{J} m_i, where
y is the column index in C or D, y_j is the column index in A_j and m_j is the number of columns of A_j.
These representations are denoted as mixed radix number representations. Obviously x (y) determines
all x_j (y_j) and vice versa. For complementary information about Kronecker operations and mixed radix
number schemes we refer to [15]; working through a small example
is recommended.
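A small numpy sketch of these operations (illustrative only; the matrix sizes and contents are made up) is:

import numpy as np
from functools import reduce

def kron_prod(mats):
    # generalized Kronecker product of a list of matrices
    return reduce(np.kron, mats)

def kron_sum(mats):
    # generalized Kronecker sum: sum_j I_{l_j} (x) A_j (x) I_{u_j}
    n = [A.shape[0] for A in mats]
    D = np.zeros((int(np.prod(n)), int(np.prod(n))))
    for j, A in enumerate(mats):
        l, u = int(np.prod(n[:j])), int(np.prod(n[j+1:]))
        D += np.kron(np.kron(np.eye(l), A.astype(float)), np.eye(u))
    return D

def mixed_radix(indices, radices):
    # linearize component indices (x_1, ..., x_J) with radices (n_1, ..., n_J)
    x = 0
    for x_j, n_j in zip(indices, radices):
        x = x * n_j + x_j
    return x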
Mixed radix numbering schemes can as well be applied to number markings in RS_H(PN_s). However,
we use a two-level scheme, where the first number describes the HN marking and the second number is
computed from the numbers of the LN markings. Thus marking (x_H, x_1, ..., x_J) receives the number pair (x_H, x_L),
where
x_L = Σ_{j=1}^{J} x_j · Π_{i=j+1}^{J} n_i(x^i_H).
Using this numbering scheme, RG_H(PN_s) can be represented using Kronecker products of Boolean
matrices. We define Q^H_t as the incidence matrix of the reachability graph considering only transition t.
Using the two-level marking number, Q^H_t has a block structure with n_H^2 block matrices.
Block Q^H_t[x, y] includes all transitions between markings belonging to HN marking x and markings
belonging to HN marking y due to transition t in the net. Each submatrix can be represented as a
Kronecker product of LN matrices:
Q^H_t[x, y] = q^H_t(x, y) · ⊗_{j=1}^{J} Q^j_t[x^j, y^j].
This form describes a very compact representation of a huge matrix.
Assume that Q_t is a Boolean matrix describing transition t on RS(PN). Then, after an appropriate ordering of markings (i.e., markings from RG(PN) ∩ RG_H(PN) are followed by markings
from RG_H(PN) \ RG(PN)), Q_t appears as the leading submatrix of Q^H_t. If
the initial marking is part of RG(PN) ∩ RG_H(PN), the above representation implies that successors of
reachable markings can be computed using the matrices Q^H_t and, consequently, also reachability analysis can
be performed using these matrices.
The incidence matrix of RG_H(PN) can be represented as
Q^H = Σ_{t ∈ T} Q^H_t.
For a compact representation, the Kronecker representation is definitely preferable. It can be applied in
various analysis algorithms, as shown in Sect. 6. Local transitions cause a specific matrix pattern of
nonzero elements. Since Q^i_t[x, x] equals an identity matrix for t ∈ LT_j, j ≠ i, and Q^j_t[x, y] = 0 for x ≠ y,
we have
Q^H_t[x, x] = I_{l_j(x)} ⊗ Q^j_t[x^j, x^j] ⊗ I_{u_j(x)} for t ∈ LT_j,
with l_j(x) = Π_{i<j} n_i(x^i) and u_j(x) = Π_{i>j} n_i(x^i). Collecting the local transitions of LN j in one matrix
Q^j_l[x^j, x^j] = Σ_{t ∈ LT_j} Q^j_t[x^j, x^j],
we obtain the following representation for a diagonal submatrix of Q^H:
Q^H[x, x] = ⊕_{j=1}^{J} Q^j_l[x^j, x^j] + Σ_{t ∈ ∪_j ST_j} Q^H_t[x, x].
Q^j_l does not distinguish between different local transitions of the same LN. If such a distinction
is necessary, transitions which have to be visible can be excluded from the sets LT_j. In this way it is
possible to keep all relevant information in the representation of RG_H(PN_s).
If we consider SPNs, transitions are enhanced by a transition rate. Thus Q contains real instead of
Boolean values. However, the Kronecker representation of the matrix is very similar. If all transitions
have marking independent transition rates λ_t, matrix Q^H is given by
Q^H = Σ_{t ∈ T} λ_t · Q^H_t.
In this case the elements of Q_t are interpreted as real values 1.0 and 0.0, respectively.
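For illustration, a toy numpy sketch of assembling one block of such a matrix from per-LN matrices and rates (all input data structures are hypothetical placeholders, not the tool's internal format) could look as follows:

import numpy as np
from functools import reduce

def block_QH(x, y, transitions, rates, q_H, Q_LN):
    # q_H[t][(x, y)] is 1 if transition t labels the HN arc from marking x to marking y, else 0
    # Q_LN[t][j][(x, y)] is the Boolean LN-j matrix for the macro markings induced by x and y
    block = None
    for t in transitions:
        if not q_H[t].get((x, y), 0):
            continue
        factors = [Q_LN[t][j][(x, y)].astype(float) for j in range(len(Q_LN[t]))]
        contrib = rates[t] * reduce(np.kron, factors)
        block = contrib if block is None else block + contrib
    return block   # None if no transition connects HN markings x and y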
Example 5 The running example is rather small, so we cannot expect a practical gain from representing
RS or RG in a compositional way as proposed in this section. However, even for this simple example
the representation becomes more compact and the example allows us to clarify the general concepts.
The following table summarizes the number of markings in column RS and the number of transitions
in the RG in the corresponding column for the various nets considered here. Obviously the HN has a RG
which is significantly reduced compared to PN .
net     RS      RG transitions
PN      254     622

Table 1: Reachable markings and possible transitions of the HN (for each marking: its number, the token counts on ap1, ap2, p2, p4, p5, p6, p7, and the successor markings with the corresponding transition numbers).
All markings and possible transitions of the HN are shown in Tab. 1. Macro markings with respect to
LN 1 are defined by projection of the HN marking on the places ap1, p4; macro markings for LN 2 are
defined by projection of the HN marking on ap2, p5, p7. In both cases 9 macro markings are generated.
From the extended nets, the sets RS(j) and the matrices Q^j_t are computed. For LN 1 a macro marking represents
on average 2 markings. This small number is not surprising since LN 1 consists internally of two
places connected via a single transition, such that macro markings only abstract from the internal place
where tokens reside. For the second LN more internal details are hidden by the aggregated description
used in the HN. Consequently, a macro marking of LN 2 represents on average 9 detailed markings.
The Kronecker representation requires 195 transitions to represent the complete reachability graph with
622 transitions. Of course, this comparison does not consider the overhead to store different matrices in the
Kronecker representation. However, the overhead depends on the number of transitions in ∪ST_i and
the number of LNs. Both quantities are negligible compared to the number of markings if we consider
large nets. The hierarchically generated reachability set RS_h includes 270 markings, which means that
16 markings are unreachable. We consider this point in the subsequent section.
6 Hierarchical Analysis Approaches
We now introduce analysis approaches which rely on the Kronecker representation of RG(PN ). In
particular it is necessary to introduce a method to characterize RS(PN) and not only a superset in form
of RS h (PN s ). The central idea of reachability analysis is that the numbering of markings in RS h (PN s )
is a perfect hash function for markings in RS(PN ). This has first been exploited for efficient reachability
analysis of SGSPNs, a class of generalized SPNs consisting of components synchronized via transitions,
in the work of Kemper [23]. We can use a similar approach here, but do not necessarily rely on it, see e.g.,
[12] as an alternative. Let n(x) = Π_{j=1}^{J} n_j(x^j) be the number of markings in RS_h when the marking of the
HN is M_x. Let r[x] be a Boolean vector of length n(x) which is used to store the results of the reachability
analysis. Thus r[x_H](x_L) refers to the marking with number (x_H, x_L), and we require
after termination that r[x_H](x_L) = 1 iff this marking is reachable.
Formally we use here one Boolean vector per HN marking,
but it is obviously possible to store all these vectors consecutively in a single Boolean vector of appropriate
length. Reachability analysis requires, apart from the vectors r[x] and the different matrices introduced
in the previous section, a set U to store unexplored markings, similar to the set U used in generate RS.
However, now U only has to store integer pairs instead of complete marking vectors.
Let (x_{0H}, x_{0L}) be the number of the initial marking; then r[x_{0H}](x_{0L}) is initialized with 1, all remaining
vector components are zero. Additionally, U is initialized with {(x_{0H}, x_{0L})}. The following algorithm
is used to determine the reachable markings.
generate structured RS (PN)
while (U ≠ ∅) do
remove (x_H, x_L) from U and decompose x_L into (x_1, ..., x_J);
for j := 1 to J do // local transitions: compute successors in subnet j
for all y_j with Q^j_l[x^j_H, x^j_H](x_j, y_j) = 1 do
y_L := number of (x_1, ..., y_j, ..., x_J);
if r[x_H](y_L) = 0 then r[x_H](y_L) := 1; add (x_H, y_L) to U ;
od
od
for all t ∈ ∪_j ST_j do // synchronized transitions
for all y_H with t ∈ Q^H(x_H, y_H) do // compute successor in subnet HN
for all j with t ∈ T_j do
if a y_j with Q^j_t[x^j_H, y^j_H](x_j, y_j) = 1 exists then record y_j (*)
else continue with the next y_H ;
od
for all j with t ∉ T_j do y_j := x_j od;
y_L := number of (y_1, ..., y_J);
if r[y_H](y_L) = 0 then r[y_H](y_L) := 1; add (y_H, y_L) to U ;
od
od
od
In the step indicated by (*), the algorithm exploits the fact that firing of transition t always yields
a unique successor marking. Therefore each row of a matrix Q^j_t can include at most one non-zero element. The
approach can easily be extended to PNs where different successor markings are possible. This situation
occurs in nets where probabilistic output bags for transitions are allowed. Since the algorithm computes
all successor markings of reachable markings, it is straightforward to prove that generate structured RS
generates RS(PN) and terminates when RS(PN) is finite, which is the case here, since RS_h(PN_s) has
been assumed to be finite.
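The following Python fragment sketches the 'perfect hash' idea behind the vectors r[x]: a composite marking (x_H, x_1, ..., x_J) is mapped to a bit position within the Boolean vector of its HN marking; the radices n_j(x^j_H) are assumed to be known from the LN reachability sets (names and data layout are illustrative assumptions).

def bit_position(x_locals, radices):
    # radices[j] = n_j(x_H^j): number of LN-j markings compatible with the HN marking
    pos = 0
    for x_j, n_j in zip(x_locals, radices):
        pos = pos * n_j + x_j
    return pos

# r_xH is a bytearray with one bit per composite marking of this HN marking
def test_and_set(r_xH, pos):
    byte, bit = divmod(pos, 8)
    seen = (r_xH[byte] >> bit) & 1
    r_xH[byte] |= (1 << bit)
    return bool(seen)   # True if the marking had been reached before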
The remaining point is the comparison of generate structured RS and generate RS. As before we
assume that the reachability set contains n markings and in the average d transitions are possible in
each marking. The theoretical time complexity of generate RS is O(nd log 2 n) if insert and member
functions on RS use log 2 n operations. The complexity of generate structured RS is in O(nd), since the
Boolean vectors allow us to test in O(1) whether a marking has been reached before. The reduction by
a logarithmic factor may not seem like much at first glance. However, the approach is used for large
reachability sets, such that this implies a reduction by at least an order of magnitude. Additionally, the
constants behind the asymptotic complexity are much lower for generate structured RS. The reason is
that all operations are performed with simple integer operations, while several operations of generate RS
are time consuming. For example, if a new marking M is found in generate RS, a data structure to
hold M has to be allocated and inserted into the data structure storing the already generated markings.
Since this data structure is usually a tree, pointers have to be modified. In generate structured RS the
same operation only requires to set a bit in vector r. Thus usually we can expect an improvement
of run times which is around two orders of magnitude for large reachability sets. However, to apply
generate structured RS, PN has to be decomposed first and the reachability sets and matrices for the
subnets have to be generated. The complexity of both problems is for large nets much lower than
reachability analysis. This can also be seen in the example presented below.
Apart from time complexity, we also have to compare space complexity. Of course, also the difference
in memory requirements depends on the concrete example. However, if the net has been decomposed
into LNs with roughly identical reachability set sizes and the sizes of RS_h(PN_s) and RS(PN) do
not differ too much (i.e., not by several orders of magnitude), then (8) assures that the size of the
LN/HN reachability sets and matrices is negligible compared with the size of the complete reachability
set and graph. Experiences show that the approach allows us to handle much larger reachability sets. An
additional advantage of generate structured RS is that we make use of secondary memory in a very efficient
way. Since vector r is structured into subvectors and successor markings are computed consecutively for
subvectors, it is possible to preload required subvectors from secondary memory.
After the reachability set has been computed by setting the values in vector r, it can be decided in
O(1) whether a marking is reachable or not. Furthermore, all successor markings for a marking can be
computed from the Kronecker representation for local transitions in a constant time and for others in a
time at most linear in the number of subnets. Since the Kronecker representation includes information
about transitions yielding to successors, even successors reachable by specific transitions can be computed.
Based on these basic steps, standard algorithms for model checking can be applied for the nets.
In a similar way the Kronecker representation can be exploited for the quantitative analysis of SPNs.
The basic step here is to realize the product of a vector with a sum of Kronecker products of matrices.
However, this step is already known in numerical analysis and can be combined with various iterative
numerical analysis techniques [3]. Thus the Kronecker representation allows the analysis of large SPN
models, which cannot be handled using standard means. For further details we refer to the literature
Example 6 The size of the running example is so small that it is useless to compare runtimes for reachability
graph generation. Instead we briefly consider unreachable markings appearing in the hierarchical
representation. As already mentioned RS h contains 270 markings, but only 254 of them are reachable.
As an example for unreachable markings we consider markings of the form (0;
the vector includes the number of tokens on the places p1 \Gamma p7. For the places p8; p9; p10, we now consider
possible markings. Obviously all three places are part of a P-invariant such that the sum of tokens on
these places has to equal 2. In the hierarchical generated reachability set all possible distributions of 2
tokens over the places p8; p9; p10 are included. However, reachability analysis shows that only markings
are reachable where place p10 is empty.
The reason for this restriction can be explained by considering the behavior of the net in some more
detail. A token on p10 implies that t5 has fired after t4. But since p6 is empty, t3 fired after t5 and,
since p2 is non-empty, t1 and t2 also fired after t5. Now, after firing t2 a token resides on p5 which has
to be transferred to p4 by firing t4. However, this means that t4 fired after t5 and p10 has to be empty.
The restriction which assures that p10 has to be empty when the marking of the places p1-p7 is as
shown above is a global restriction which depends on the whole net. So it is not visible in an isolated
shown above, is a global restriction which depends on the whole net. So it is not visible in an isolated
part and the above mentioned markings belong to RS h , but reachability analysis shows that they are not
reachable and are not part of RS.
Two other optimizations can be used to improve generate structured RS.
First optimization As noticed in [23], certain unnecessary interleavings due to internal transitions can
be eliminated. The idea is that local transitions in different LNs do not interfere. Thus if t_1 ∈ LT_i
and t_2 ∈ LT_j with i ≠ j are both enabled in some marking, then the sequences t_1 t_2 and t_2 t_1 are both possible
and yield an identical successor marking. Consequently it is only necessary to consider one sequence.
More generally, for a set T' of local transitions which all belong to different LNs and which
are enabled in some marking, only those transition sequences which are described by a subset of T' and
where transitions occur in the order as described in T' need to be considered. This reduces the number of
possible sequences from
Σ_{l=1}^{|T'|} C(|T'|, l) · l! to Σ_{l=1}^{|T'|} C(|T'|, l).
In this way the time complexity of reachability analysis
can be reduced.
Second optimization In [6] an approach is discussed which reduces time and space complexity. The
idea is to reduce a priori the marking sets of LN by combining some markings which are always together
reachable or not. As a simple example, consider two markings x, y ∈ RS(i) and a pair of local transitions
t, t' ∈ LT_i such that t is enabled in x and its firing yields y, and t' is enabled in y yielding successor marking
x; then x is reachable whenever y is reachable and vice versa. We denote this as identical reachability
of markings. Obviously identical reachability holds for all markings in an irreducible subset of a matrix
Q^i_l[x, x]. In [6] it is shown that this condition can be further relaxed. However, this extension is beyond the
scope of this paper. Markings which are identically reachable can be aggregated a priori. Aggregation in
this case means that a set of identically reachable markings is substituted by a single aggregate marking
such that all transitions entering or leaving one marking in the subset are substituted by transitions
entering/leaving the aggregate marking and transitions between markings in the subset are substituted
by transitions starting and ending in the aggregate marking. These transformations are easily performed
by adding in the matrices Q i
all rows and columns belonging to markings in the subset to be aggregated.
The size of RS(i) and RG(i) is reduced by this aggregation which implies that the size of RS h (PN s )
and also the effort for reachability analysis are reduced too. After reachability analysis the reachability
of an aggregated marking implies that all markings represented by this aggregated marking are also
reachable and vice versa; if the aggregated marking is not reachable, then the detailed markings are also
not reachable.
Both optimizations depend on the net which is considered. However, for most nets the effort for
reachability analysis can be reduced significantly.
7 An Application Example
Table 2: Hierarchical representation for sequence s (columns: |s|, total number of regions, number of non-trivial regions, |P^agg|, |RS_h|, |RS(HN)|, the maximal |RS(j)| over the LNs, the reachable percentage of RS_h, and |RS(PN)|).
The running example we considered so far is only useful to illustrate formal concepts; in order to
demonstrate the applicability of our approach we consider the production cell of [24], which has been subject
to modeling and analysis by a variety of tools and which is known to be non-trivial. The production
cell model originates from an existing production cell in an industrial setting, which physically consists
of six components: an elevating rotary table, a rotatable robot with two extendable arms, a traveling crane
        gen hierarchy       gen struct RS
|s|     CPU     user        CPU     user
111     2.5     3.0         73.3    74.0
112     2.6     3.0         64.9    65.0

Table 3: Computation characteristics for sequence s (times in seconds)
and two conveyor belts. The production cell performs transportation and processing of metal plates in a
(cyclic) pipeline. A feeding conveyor belt transports metal plates to the elevating table, the table lifts
plates for the robot, the robot inserts plates into the press and takes them after pressing from the press
onto the second conveyor belt. Originally plates leave the system by the second belt, but in order to have
a closed system, the crane is installed to put plates from the second belt onto the feeding belt, such that
the number of plates within the system is constant. Thanks to the work of Heiner et al. [18, 19] a Petri
net model exists, which considers processing of 5 plates. Refinement is used to organize a model of this
size, however the dynamic behavior of the model is not defined unless all refined subnets are available in
full detail. This kind of hierarchy is very common for modeling purposes, but useless in terms of analysis.
Hence our analysis starts from a flat Place/Transition net with 231 places and 202 transitions 3 . From
[19] it is known that the net is live and 1-bounded. The reachability set contains 1,657,242 markings and
the reachability graph 6,746,379 transitions.
The algorithm to derive a hierarchy starts from a partition into minimal regions and considers a
sequence s, which starts with transitions being internal in minimal regions (which is the case for 74
transitions in our example), subsequently it considers small regions first. Fig. 6 shows how the total
number of regions decreases once the internal regions have been considered. On the other hand, the
number of non-trivial regions increases in an initial phase since the algorithm prefers small regions and
finally decreases when there are no trivial regions left and non-trivial regions are merged. Table 2 indicates
the influence of s on the hierarchical representation of RS; it gives the number of regions (non-trivial and
in total), the number of aggregated places P^agg, the cardinalities of the hierarchical reachability set RS_h, the
reachability set of the high level net RS(HN), and the maximal number of markings observed among
the low level nets. The quality of the whole construction is shown in column "percent RS(PN)", which
gives the reachable fraction of RS_h. Table 3 gives the corresponding computation times in seconds for the
computation of the two-level hierarchy and the subsequent computation of the reachability set RS(PN)
contained in RS_h(PN); times are given as CPU time and user (wall clock) time. These times have been
observed on a SPARCstation 4 with 64 MB main memory, 890 MB virtual memory, and 110 MHz CPU.
Obviously computation times are uncritical if the number of aggregated places does not explode.
It is worth mentioning that it takes slightly more than a minute to generate the complete reachability
set and reachability graph and represent them in a very space-efficient way. About 6 megabytes of memory
are necessary to generate and represent RG and RS. These values are excellent compared to conventional
RG generation algorithms. In [18] the same model has been analyzed on a similar workstation
using different PN analysis tools. RG generation with the tool PROD needs about 14 hours (see [18]).
The small runtimes and storage requirements show that much larger systems can be handled with the
approach. We have also analyzed an open version of the production cell for which other tools were not
able to generate RS (see [18]). For this version our method needs about 3 minutes real time to generate
RS with 2,776,936 markings and RG with 13,152,132 arcs.
As already noticed in [19], computation of a generating set of semi-positive P-invariants is difficult
for this net. Our approach is closely related to invariant computation: if we compute an extended net
for a sequence covering all transitions T , we obtain a generating set of P-invariants as well. However this
3 We thank J. Spranger for translating the model into the APNN format [2] used in our implementation.
Figure 6: Total number of regions and number of non-trivial regions as a function of the length of sequence s
Figure 7: Number of aggregated places and number of non-trivial regions as a function of the length of sequence s
extreme is not suitable and we consider only a subset of transitions, in order to retain some activity in
the HN. From a pragmatic point of view the approach allows us to consider those transitions which can
be handled with acceptable computational costs and stop the derivation of a hierarchy if it becomes too
expensive. Fig. 7 clearly indicates that a careful selection of transitions can avoid high computational
costs. However there is a sharp increase after 108 steps, and the hierarchy derivation stops after 113
steps. For a P-invariant computation 202 steps are necessary, hence Fig. 7 also illustrates the difficulties
for invariant computation observed in [19]. According to the results in Table 2, the number of regions
and a limit for the number of aggregated places give suitable parameters to stop the automatic hierarchy
generation where it makes sense.
8 Conclusions
We have proposed a new approach for the efficient generation and compact representation of reachability
sets and graphs of large PNs. In contrast to other approaches, the technique can be applied to general
nets without definition of a hierarchical structure and without inherent symmetries. The structuring
of the PN into asynchronously interacting regions is done automatically by an algorithm which uses a
basic step related to invariant computation to make a transition internal to a region. The algorithm
considers a sequence of distinct transitions which can be arbitrary in principle. For our implementation
we use some heuristic rules in order to structure a net into regions of approximately the same size. The
algorithm stops once a user-given number of regions has been obtained. Usually the number of regions
should not be chosen too large, to avoid a too complex HN. For nets covered by P-invariants termination
is guaranteed; however, we cannot ensure termination for general PNs. The problem is that the reachability
set of some part, the HN or a LN, can become unbounded, even if the reachability set of the complete net
is bounded. This problem cannot occur for nets which are covered by P-invariants.
The non-trivial example considered in this paper illustrates our experience with the algorithm exercised
on a set of examples: the new approach allows the time and space efficient generation and
representation of huge reachability sets and graphs. This is, of course, a step towards the analysis of
complex PNs. Our current research aims at the integration of the algorithms for model-checking with
the Kronecker representation of the reachability graph. First results indicate that this approach allows
the analysis of much larger nets than conventional means. Additionally, the Kronecker representation can
be used for the efficient analysis of SPNs using numerical analysis techniques. For an overview of these
techniques we refer to [7].
--R
State space construction and steady state solution of GSPNs on a shared-memory multiprocessor
Abstract Petri net notation
Complexity of Kronecker operations and sparse matrices with applications to the solution of Markov models
Transformation and decomposition of nets
Hierarchical high level
Hierarchical Structuring of Superposed GSPNs
Structured Analysis Approaches for Large Markov Chains
States and Beyond
Parallel state space exploration for GSPN models
On well-formed coloured nets and their symbolic reachability graph
Distributed simulation of
Storage alternatives for large structured state spaces
Modular state space analysis of coloured
Distributed state-space generation of discrete-state stochastic models
IEEE Trans.
A. reduction theory for coloured
Asynchronous composition of high level
Petri net based design and analysis of reactive systems
A case study in developing control software on manufacturing systems
Reachability trees for high-level
Coloured
Coloured
Reachability analysis based on structured representations
Formal development of reactive systems
Analysis of large GSPN models: a distributed solution tool
A simple and fast algorithm to obtain all invariants of a generalized Petri net
Performance analysis using stochastic
Hierarchical reachability graph generation of bounded
Petri net analysis using Boolean manipulation
A comparative study of methods for efficient reachability analysis
The numerical solution of stochastic automata networks
A class of modular and hierarchical cooperating systems
Compositional analysis with place bordered subnets
State of the art report: stubborn sets
--TR
Automatic verification of finite-state concurrent systems using temporal logic specifications
Transformations and decompositions of nets
A reduction theory for coloured nets
The concurrency workbench
Using partial orders for the efficient verification of deadlock freedom and safety properties
A symbolic reachability graph for coloured Petri nets
Colored Petri nets (vol.
Automated parallelization of discrete state-space generation
On generating a hierarchy for GSPN analysis
Structured analysis approaches for large Markov chains
Communication and Concurrency
Distributed Simulation of Petri Nets
Hierarchical Reachability Graph of Bounded Petri Nets for Concurrent-Software Analysis
Structured Solution of Asynchronously Communicating Stochastic Modules
Hierarchical Structuring of Superposed GSPNs
Application and Theory of Petri Nets
On Limits and Possibilities of Automated Protocol Analysis
An analysis of bistate hashing
Saturation
Modular State Space Analysis of Coloured Petri Nets
Parallel State Space Exploration for GSPN Models
A Toolbox for the Analysis of Discrete Event Dynamic Systems
Reachability Analysis Based on Structured Representations
{SC}*ECS
A survey of equivalence notions for net based systems
A Simple and Fast Algorithm to Obtain All Invariants of a Generalized Petri Net
Reliable Hashing without Collosion Detection
Compositional Analysis with Place-Bordered Subnets
Hierarchical High Level Petri Nets for Complex System Analysis
Superposed Generalized Stochastic Petri Nets
Petri Net Analysis Using Boolean Manipulation
Storage Alternatives for Large Structured State Spaces
State Space Construction and Steady--State Solution of GSPNs on a Shared--Memory Multiprocessor
Analysis of large GSPN models
--CTR
Michael Muskulus , Daniela Besozzi , Robert Brijder , Paolo Cazzaniga , Sanne Houweling , Dario Pescini , Grzegorz Rozenberg, Cycles and communicating classes in membrane systems and molecular dynamics, Theoretical Computer Science, v.372 n.2-3, p.242-266, March, 2007 | invariant analysis;hierarchical structure;reachability graph;reachability set;petri nets |
607620 | Latent Semantic Kernels. | Kernel methods like support vector machines have successfully been used for text categorization. A standard choice of kernel function has been the inner product between the vector-space representation of two documents, in analogy with classical information retrieval (IR) approaches. Latent semantic indexing (LSI) has been successfully used for IR purposes as a technique for capturing semantic relations between terms and inserting them into the similarity measure between two documents. One of its main drawbacks, in IR, is its computational cost. In this paper we describe how the LSI approach can be implemented in a kernel-defined feature space. We provide experimental results demonstrating that the approach can significantly improve performance, and that it does not impair it. | Introduction
Kernel-based learning methods (KMs) are a state-of-the-art class of learning algo-
rithms, whose best known example is Support Vector Machines (SVMs) [3]. In this
approach, data items are mapped into high-dimensional spaces, where information
about their mutual positions (inner products) is used for constructing classification,
regression, or clustering rules. They are modular systems, formed by a general purpose
learning module (e.g. classification or clustering) and by a data-specific ele-
ment, called the kernel, that acts as an interface between the data and the learning
machine by defining the mapping into the feature space.
Kernel-based algorithms exploit the information encoded in the inner-product between
all pairs of data items. Somewhat surprisingly, this information is sufficient
to run many standard machine learning algorithms, from the Perceptron Convergence
algorithm to Principal Components Analysis (PCA), from Ridge Regression
to nearest neighbour. The advantage of adopting this alternative representation
is that often there is an efficient method to compute inner products between very
complex, in some cases even infinite dimensional, vectors. Since the explicit representation
of feature vectors corresponding to data items is not necessary, KMs
have the advantage of accessing feature spaces that would otherwise be either too
expensive or too complicated to represent. Strong model selection techniques based
on Statistical Learning Theory [26] have been developed for such systems in order
to avoid overfitting in high dimensional spaces.
It is not surprising that one of the areas where such systems work most naturally is
text categorization, where the standard representation of documents is as very high-dimensional
vectors, and where standard retrieval techniques are based precisely on
the inner-products between vectors. The combination of these two methods has
been pioneered by Joachims [10], and subsequently explored by several others [6, 11].
This approach to documents representation is known as the 'bag of words', and is
based on mapping documents to large vectors indicating which words occur in the
text. The vectors have as many dimensions as terms in the corpus (usually several
thousands), and the corresponding entries are zero if a term does not occur in the
document at hand, and positive otherwise. Two documents are hence considered
similar if they use (approximately) the same terms. Despite the high dimensionality
of such spaces (much higher than the training set size), Support Vector Machines
have been shown to perform very well [10]. This paper investigates one possible
avenue for extending Joachims' work, by incorporating more information in the
kernel.
When used in Information retrieval (IR) this representation is known to suffer from
some drawbacks, in particular the fact that semantic relations between terms are
not taken into account. Documents that talk about related topics using different
terms are mapped to very distant regions of the feature space. A map that captures
some semantic information would be useful, particularly if it could be achieved
with a "semantic kernel", that computes the similarity between documents by also
considering relations between different terms.
Using a kernel that somehow takes this fact into consideration would enable the
system to extract much more information from documents. One possible approach
is the one adopted by [23], where a semantic network is used to explicitly compute
the similarity level between terms. Such information is encoded in the kernel, and
defines a new metric in the feature space, or equivalently a further mapping of the
documents into another feature space.
In this paper we propose to use a technique known in Information Retrieval as
Latent Semantic Indexing (LSI) [4]. In this approach, the documents are implicitly
mapped into a "semantic space", where documents that do not share any terms
can still be close to each other if their terms are semantically related. The semantic
similarity between two terms is inferred by an analysis of their co-occurrence pat-
terns: terms that co-occur often in the same documents are considered as related.
This statistical co-occurrence information is extracted by means of a Singular Value
Decomposition of the term by document matrix, in the way described in Section 3.
We show how this step can be performed implicitly in any kernel-induced feature
space, and how it amounts to a 'kernel adaptation' or `semantic kernel learning'
step. Once we have fixed the dimension of the new feature space, its computation
is equivalent to solving a convex optimization problem of eigenvalue decomposition,
so it has just one global maximum that can be found efficiently. Since eigenvalue
decomposition can become expensive for very large datasets we develop an approximation
technique based on the Gram-Schmidt orthogonalisation procedure. In
practice this method can actually perform better than the LSI method.
We provide experimental results with text and non-text data showing that the
techniques can deliver significant improvements on some datasets, and certainly
never reduce performance. Then we discuss their advantages, limitations, and their
relationships with other methods.
2. Kernel Methods for Text
Kernel methods are a new approach to solving machine learning problems. By
developing algorithms that only make use of inner products between images of
different inputs in a feature space, their application becomes possible to very rich
feature spaces provided the inner products can be computed. In this way they avoid
the need to explicitly compute the feature vector for a given input. One of the key
advantages of this approach is its modularity: the decoupling of algorithm design
and statistical analysis from the problem of creating appropriate function/feature
spaces for a particular application. Furthermore, the design of kernels themselves
can be performed in a modular fashion: simple rules exist to combine or adapt
basic kernels in order to construct more complex ones, in a way that guarantees
that the kernel corresponds to an inner product in some feature space. The main
result of this paper can also be regarded as one such kernel adaptation procedure.
Though the idea of using a kernel defined feature space is not new [1], it is only
recently that its full potential has begun to be realised. The first problem to be
considered was classification of labelled examples in the so-called Support Vector
Machine [2, 3], with the corresponding statistical learning analysis described in [20].
However, this turned out to be only the beginning of the development of a portfolio
of algorithms for clustering [17] using Principal Components Analysis (PCA) in
the feature space, regression [24], novelty detection [19], and ordinal learning [7].
At the same time links have been made between this statistical learning approach,
the Bayesian approach known as Gaussian Processes [13], and the more classical
kriging, also known as Ridge Regression [16], hence for the first time providing a direct
link between these very distinct paradigms.
In view of these developments it is clear that defining an appropriate kernel function
allows one to use a range of different algorithms to analyse the data concerned
potentially answering many practical prediction problems. For a particular application
choosing a kernel corresponds to implicitly choosing a feature space since
the kernel function is defined by

    k(d1, d2) = ⟨φ(d1), φ(d2)⟩                                            (1)

for the feature map φ. Given a training set {d1, . . . , dm}, the information
available to kernel based algorithms is contained entirely in the matrix of inner
products

    K_ij = k(di, dj),     i, j = 1, . . . , m,
known as the Gram or kernel matrix. This matrix represents a sort of 'bottleneck'
for the information that can be exploited: by operating on the matrix, one can in
fact 'virtually' recode the data in a more suitable manner.
The solutions sought are linear functions in the feature space

    f(d) = w'φ(d)

for some weight vector w, where ' denotes the transpose of a vector or matrix.
The kernel trick can be applied whenever the weight vector can be expressed as a
linear combination of the training points, w = Σ_i α_i φ(d_i), implying that we can
express f as follows:

    f(d) = Σ_i α_i k(d_i, d).
Given an explicit feature map φ we can use equation (1) to compute the corresponding
kernel. Often, however, methods are sought to provide directly the value of the
kernel without explicitly computing φ. We will show how many of the standard
information retrieval feature spaces give rise to a particularly natural set of kernels.
Perhaps the best known method of this type is referred to as the polynomial kernel.
Given a kernel k the polynomial construction creates a kernel k̂ by applying a
polynomial with positive coefficients to k, for example consider

    k̂(d1, d2) = (k(d1, d2) + D)^p

for fixed values of D and integer p. Suppose the feature space of k is F; then
the feature space of k̂ is indexed by t-tuples of features from F, for t = 0, 1, . . . , p.
Hence, through a relatively small additional computational cost (each time an inner
product is computed one more addition and exponentiation is required) the algorithms
are being applied in a feature space of vastly expanded expressive power.
As an even more extreme example consider the Gaussian kernel k̃ that transforms
the kernel k as follows:

    k̃(d1, d2) = exp( −(k(d1, d1) − 2k(d1, d2) + k(d2, d2)) / (2σ²) ),

whose feature space has infinitely many dimensions.
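To make these constructions concrete, the following sketch (Python with NumPy; an illustration added here, not part of the original text, with invented function names) applies the polynomial and Gaussian constructions to the Gram matrix of an arbitrary base kernel, using only kernel evaluations as described above.

    import numpy as np

    def linear_gram(X):
        # Base kernel: inner products between the rows of the data matrix X.
        return X @ X.T

    def polynomial_gram(K, D=1.0, p=2):
        # Polynomial construction: k_hat(d1, d2) = (k(d1, d2) + D)^p.
        return (K + D) ** p

    def gaussian_gram(K, sigma=1.0):
        # Gaussian construction from kernel evaluations only, using
        # ||phi(d1) - phi(d2)||^2 = k(d1, d1) - 2 k(d1, d2) + k(d2, d2).
        diag = np.diag(K)
        sq_dist = diag[:, None] - 2.0 * K + diag[None, :]
        return np.exp(-sq_dist / (2.0 * sigma ** 2))

    X = np.random.rand(5, 10)                    # 5 documents, 10 features
    K = linear_gram(X)
    K_poly, K_gauss = polynomial_gram(K, D=1.0, p=3), gaussian_gram(K, sigma=0.5)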
3. Vector Space Representations
Given a document, it is possible to associate with it a bag of terms (or bag of
words) by simply considering the number of occurrences of all the terms it contains.
Typically words are "stemmed" meaning that the inflection information contained
in the last few letters is removed.
A bag of words has its natural representation as a vector in the following way. The
number of dimensions is the same as the number of different terms in the corpus,
each entry of the vector is indexed by a specific term, and the components of the
vector are formed by integer numbers representing the frequency of the term in the
given document. Typically such a vector is then mapped into some other space,
where the word frequency information is merged with other information (e.g. word
importance, where uninformative words are given low or no weight).
In this way a document is represented by a (column) vector d in which each entry
records how many times a particular word stem is used in the document. Typically
d can have tens of thousands of entries, often more than the number of documents.
Furthermore, for a particular document the representation is typically extremely
sparse, having only relatively few non-zero entries.
In the basic vector-space model (BVSM), a document is represented by a vertical
vector d indexed by all the elements of the dictionary, and a corpus by a matrix D,
whose columns are indexed by the documents and whose rows are indexed by the
terms. We also call the data matrix D the "term by document"
matrix. We define the "document by document" matrix to be D'D and the
"term by term" matrix to be DD'.
If we consider the feature space defined by the basic vector-space model, the corresponding
kernel is given by the inner product between the feature vectors

    k(d1, d2) = d1'd2.

In this case the Gram matrix is just the document by document matrix. More gen-
erally, we can consider transformations of the document vectors by some mapping
φ. The simplest case involves linear transformations of the type φ(d) = Pd, where
P is any appropriately shaped matrix. In this case the kernels have the form

    k(d1, d2) = (Pd1)'(Pd2) = d1'P'Pd2.
We will call all such representations Vector Space Models (VSMs). The Gram
matrix is in this case given by D'P'PD, which is by construction symmetric and positive
semi-definite. The class of models obtained by varying the matrix P is a very natural
one, corresponding as it does to different linear mappings of the standard vector
space model, hence giving different scalings and projections. Note that Jiang and
Littman [9] use this framework to present a collection of different methods, although
without viewing them as kernels. Throughout the rest of the paper we will use P
to refer to the matrix defining the VSM. We will describe a number of different
models in each case showing how an appropriate choice of P realises it as VSM.
Basic Vector Space Model
The Basic Vector Space Model (BVSM) was introduced in 1975 by Salton et al.
[15] (and used as a kernel by Joachims [10]) and uses the vector representation with
no further mapping. In other words, the VSM matrix is P = I in this case. The performance
of retrieval systems based on such a simple representation is surprisingly
good. Since the representation of each document as a vector is very sparse, special
techniques can be deployed to facilitate the storage and the computation of dot
products between such vectors.
A common map P is obtained by considering the importance of each term in a
given corpus. The VSM matrix is hence a diagonal matrix, whose entries P(i, i) are the
weights of the terms i. Several methods have been proposed, and it is known that
this choice has a strong influence on generalization [11]. Often P(i, i) is a function of the
inverse document frequency idf(i), that is the total number of documents
in the corpus divided by the number of documents that contain the given term. So
if for example a word appears in each document, it would not be regarded as a very
informative one. Its distance from the uniform distribution is a good estimation
of its importance, but better methods can be obtained by studying the typical
term distributions within documents and corpora. The simplest method for doing
this is just to set P(i, i) to idf(i), or to a simple function of it such as its logarithm. Other measures can be obtained from
information theoretic quantities, or from empirical models of term frequency. Since
these measures do not use label information, they could also be estimated from an
external, larger unlabelled corpus, that provides the background knowledge to the
system.
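As a small illustration of the weighted vector space model (a sketch added for this presentation; the log-idf weighting below is one common choice, not necessarily the exact variant used in the experiments reported later), the term by document matrix D and the kernel k(d1, d2) = d1'P'Pd2 with a diagonal P can be computed as follows.

    import numpy as np
    from collections import Counter

    def term_document_matrix(docs):
        # docs: list of token lists; returns the vocabulary and the matrix D.
        vocab = sorted({t for d in docs for t in d})
        index = {t: i for i, t in enumerate(vocab)}
        D = np.zeros((len(vocab), len(docs)))
        for j, d in enumerate(docs):
            for t, c in Counter(d).items():
                D[index[t], j] = c
        return vocab, D

    def idf_diagonal(D):
        # idf(i) = m / df(i); the diagonal of P is taken as its logarithm,
        # so a term occurring in every document receives zero weight.
        m = D.shape[1]
        df = (D > 0).sum(axis=1)
        return np.log(m / df)

    def vsm_gram(D, p_diag):
        W = p_diag[:, None] * D          # this is P D
        return W.T @ W                   # Gram matrix D' P' P D

    docs = [["kernel", "methods", "text"], ["text", "classification"], ["kernel", "trick"]]
    vocab, D = term_document_matrix(docs)
    K = vsm_gram(D, idf_diagonal(D))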
As described in the previous section as soon as we have defined a kernel we can
apply the polynomial or Gaussian construction to increase its expressive power.
Joachims [10] and Dumais et al. [5] have applied this technique to the basic vector
space model for a classification task with impressive results. In particular, the use
of polynomial kernels can be seen as including features for each tuple of words up
to the degree of the chosen polynomial.
One of the problems with this representation is that it treats terms as uncorrelated,
assigning them orthogonal directions in the feature space. This means that it can
only cluster documents that share many terms. But in reality words are correlated,
and sometimes even synonymous, so that documents with very few common terms
can potentially be on closely related topics. Such similarities cannot be detected
by the BVSM. This raises the question of how to incorporate information about
semantics into the feature map, so as to link documents that share related terms.
One idea would be to perform a kind of document expansion, adding to the expanded
version all synonymous (or closely related) words to the existing terms.
Another, somewhat similar, method would be to replace terms by concepts. This information
could potentially be gleaned from external knowledge about correlations,
for example from a semantic network. There are, however, other ways to address
this problem. It is also possible to use statistical information about term-term correlations
derived from the corpus itself, or from an external reference corpus. This
approach forms the basis of Latent Semantic Indexing.
In the next subsections we will look at two different methods, in each case showing
how they can be implemented directly through the kernel matrix, without the
need to work explicitly in the feature space. This will allow them to be combined
with other kernel techniques such as the polynomial and Gaussian constructions
described above.
Generalised Vector Space Model
An early attempt to overcome the limitations of BVSMs was proposed by Wong et
al. [27] under the name of Generalised VSM, or GVSM. A document is characterised
by its relation to other documents in the corpus as measured by the BVSM. This
method aims at capturing some term-term correlations by looking at co-occurrence
information: two terms become semantically related if they co-occur often in the
same documents. This has the effect that two documents can be seen as similar
even if they do not share any terms. The GVSM technique can provide one such
metric, and it is easy to see that it also constitutes a kernel function.
Given the term by document data matrix D, the GVSM kernel is given by

    k(d1, d2) = d1'DD'd2 = (D'd1)'(D'd2).

The matrix DD' is the term by term matrix and has a nonzero ij entry if and only
if there is a document in the corpus containing both the i-th and the j-th terms.
So two terms co-occurring in a document are considered related. The new metric
takes this co-occurrence information into account.
The documents are mapped to a feature space indexed by the documents in the
corpus, as each document is represented by its relation to the other documents in
the corpus. For this reason it is also known as a dual space method [22]. In the
common case when there are fewer documents than terms, the method will act as a
bottleneck mapping, forcing a dimensionality reduction. For the GVSM the VSM
matrix P has been chosen to be D', the document by term matrix.
Once again the method can be combined with the polynomial and Gaussian kernel
construction techniques. For example the degree p polynomial kernel would have
features for each (- p)-tuple of documents with a non-zero feature for a document
that shares terms with each document in the tuple. To our knowledge this combination
has not previously been considered with either the polynomial or the Gaussian
construction.
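Since the GVSM feature vector of a document d is D'd, the GVSM Gram matrix on the corpus is D'(DD')D = (D'D)(D'D), i.e. the square of the basic Gram matrix. A short sketch (added here for illustration, reusing the matrix D defined in the earlier sketch):

    import numpy as np

    def gvsm_gram(D):
        # Gram matrix of the GVSM: equals K @ K with K = D' D.
        K = D.T @ D
        return K @ K

    def gvsm_kernel(D, d1, d2):
        # GVSM kernel between two (possibly new) term vectors: k(d1, d2) = d1' D D' d2.
        return float((D.T @ d1) @ (D.T @ d2))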
Semantic Smoothing for Vector Space Models
Perhaps a more natural method of incorporating semantic information is by directly
using an external source, like a semantic network. In this section we briefly describe
one such approach. Siolas and d'Alché-Buc [23] used a semantic network (WordNet
[12]) as a way to obtain term-similarity information. Such a network encodes for
each word of a dictionary its relation with the other words in a hierarchical fashion
(e.g. synonym, hypernym, etc). For example both the words 'husband' and 'wife'
are special cases of their hypernym 'spouse'. In this way, the distance between
two terms in the hierarchical tree provided by Wordnet gives an estimation of their
semantic proximity, and can be used to modify the metric of the vector space when
the documents are mapped by the bag-of-words approach.
Siolas and d'Alché-Buc [23] have included this knowledge into the kernel by hand-crafting
the entries in the square VSM matrix P. The entries P(i, j) encode
the semantic proximity between the terms i and j. The semantic proximity is defined
as the inverse of their topological distance in the graph, that is the length of
the shortest path connecting them (but some cases deserve special attention). The
modified metric then gives rise to the following kernel

    k(d1, d2) = d1'P'Pd2 = (Pd1)'(Pd2)

or to the following distance

    d(d1, d2)² = (d1 − d2)'P'P(d1 − d2).
Siolas and d'Alché-Buc used this distance in order to apply the Gaussian kernel
construction described above, though a polynomial construction could equally well
be applied to the kernel.
Siolas and d'Alché-Buc used a term-term similarity matrix to incorporate semantic
information, resulting in a square matrix P. It would also be possible to use a
concept-term relation matrix in which the rows would be indexed by concepts rather
than terms. For example one might consider both 'husband' and 'wife' as examples
of the concept 'spouse'. The matrix P would in this case no longer be square
symmetric. Notice that GVSMs can be regarded as a special case of this, when the
concepts correspond to the documents in the corpus, that is a term belongs to the
i-th 'concept' if it occurs in document d i .
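A minimal sketch of such a semantically smoothed kernel, with a small hand-specified proximity matrix standing in for the WordNet-derived one (the vocabulary and proximity values below are invented for illustration only):

    import numpy as np

    # Hypothetical proximity matrix P over the vocabulary ("husband", "wife", "kernel");
    # off-diagonal entries encode semantic relatedness between terms.
    P = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    def semantic_kernel(d1, d2, P):
        # k(d1, d2) = d1' P' P d2 = (P d1) . (P d2)
        return float((P @ d1) @ (P @ d2))

    d1 = np.array([1.0, 0.0, 0.0])     # a document using only "husband"
    d2 = np.array([0.0, 1.0, 0.0])     # a document using only "wife"
    print(semantic_kernel(d1, d2, P))  # nonzero although no terms are shared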
4. Latent Semantic Kernels
Latent Semantic Indexing (LSI) [4] is a technique to incorporate semantic information
in the measure of similarity between two documents. We will use it to construct
kernel functions. Conceptually, LSI measures semantic information through
co-occurrence analysis in the corpus. The technique used to extract the information
relies on a Singular Value Decomposition (SVD) of the term by document matrix.
The document feature vectors are projected into the subspace spanned by the first
k singular vectors of the feature space. Hence, the dimension of the feature space
is reduced to k and we can control this dimension by varying k. We can define
a kernel for this feature space through a particular choice of the VSM matrix P ,
and we will see that P can be computed directly from the original kernel matrix
without direct computation of the SVD in the feature space.
In order to derive a suitable matrix P first consider the term-document matrix D
and its SVD decomposition

    D = UΣV',

where Σ is a diagonal matrix with the same dimensions as D, and U and V are
orthogonal (i.e. U'U = I and V'V = I). The columns of U are the singular vectors of the feature
space in order of decreasing singular value. Hence, the projection operator onto the
first k dimensions is given by P = I_k U', where I_k is the identity matrix with
only the first k diagonal elements nonzero and U_k is the matrix consisting of the first
k columns of U. The new kernel can now be expressed as

    k̂(d1, d2) = (Pd1)'(Pd2) = d1'U I_k U'd2 = d1'U_k U_k'd2.
The motivation for this particular mapping is that it identifies highly correlated
dimensions: i.e. terms that co-occur very often in the same documents of the corpus
are merged into a single dimension of the new space. This creates a new similarity
metric based on context information. In the case of LSI it is also possible to
isometrically re-embed the subspace back into the original feature space by defining
P̄ as the square symmetric matrix U_k U_k'. This gives rise to the same kernel,
since

    d1'P̄'P̄d2 = d1'U_k U_k'U_k U_k'd2 = d1'U_k U_k'd2,

using U_k'U_k = I. We can then view P̄ as a term-term similarity matrix, making LSI a special case of
the semantic smoothing described in Siolas and d'Alché-Buc [23]. While they need
to explicitly work out all the entries of the term-by-term similarity matrix with the
help of a semantic network, however, we can infer the semantic similarities directly
from the corpus, using co-occurrence analysis.
What is more interesting for kernel methods is that the same mapping, instead
of acting on term-term matrices, can be obtained implicitly by working with the
smaller document-document Gram matrix. The original term by document matrix
D gives rise to the kernel matrix

    K = D'D,

since the feature vector for document j is the j-th column of D. The SVD decomposition
is related to the eigenvalue decomposition of K as follows:

    K = D'D = VΣ'U'UΣV' = VΣ'ΣV' = VΛV',

so that the i-th column of V is the eigenvector of K, with corresponding eigenvalue
Λ_ii = σ_i². The feature space created by choosing the first k singular values in
the LSI approach corresponds to mapping a feature vector d to the vector U I_k U'd
and gives rise to the kernel matrix

    K̂ = D'U I_k U'D = VΣ'I_kΣV' = VΛ_kV',

where Λ_k is the matrix Λ with diagonal entries beyond the k-th set to zero. Hence,
the new kernel matrix can be obtained directly from K by applying an eigenvalue
decomposition of K and remultiplying the component matrices having set all but
the first k eigenvalues to zero. Hence, we can obtain the kernel corresponding to
the LSI feature space without actually ever computing the features. The relations
of this computation to kernel PCA [18] are immediate. By a similar analysis it is
possible to verify that we can also evaluate the new kernel on novel inputs again
without reference to the explicit feature space. In order to evaluate the learned
functions on novel examples, we must show how to evaluate the new kernel k̂
between a new input d and a training example d_i, that is k̂(d_i, d). The function we wish to
evaluate will have the form

    f(d) = Σ_i α_i k̂(d_i, d) = Σ_i α_i d_i'U I_k U'd.

The expression still, however, involves the feature vector d which we would like to
avoid evaluating explicitly. Consider the vector

    t = D'd

of inner products between the new feature vector and the training examples in the
original space. These inner products can be evaluated using the original kernel.
But now we have

    Σ_i α_i d_i'U I_k U'd = α'D'U I_k U'd = α'V I_k V'D'd,

showing that we can evaluate f(d) as follows:

    f(d) = α'V I_k V' t.

Hence to evaluate f on a new example, we first create a vector of the inner products
in the original feature space and then take its inner product with the precomputed
row vector α'V I_k V'. None of this computation involves working directly in the
feature space.
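The whole procedure, including evaluation on new documents, can be summarised in a few lines of NumPy (a sketch added for illustration, not the code used in the experiments; the function names are ours):

    import numpy as np

    def lsk_train(K, k):
        # Eigendecompose the training Gram matrix K = V Lambda V' and keep the
        # k leading eigenvalues; returns the LSK Gram matrix and the projector V I_k V'.
        lam, V = np.linalg.eigh(K)                 # eigenvalues in ascending order
        order = np.argsort(lam)[::-1][:k]
        V_k, lam_k = V[:, order], lam[order]
        K_new = (V_k * lam_k) @ V_k.T              # = V Lambda_k V'
        proj = V_k @ V_k.T                         # = V I_k V'
        return K_new, proj

    def lsk_eval(alpha, proj, t):
        # f(d) = alpha' V I_k V' t, where t[i] = k(d_i, d) in the original space.
        return float(alpha @ (proj @ t))

    # Example usage: K = D.T @ D for the BVSM, alpha obtained from any kernel
    # algorithm trained on K_new, and t = D.T @ d for a new document vector d.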
The combination of the LSK technique with the polynomial or Gaussian construction
opens up the possibility of performing LSI in very high dimensional feature
spaces, for example indexed by tuples of terms. Experiments applying this approach
are reported in the experimental section of this paper. If we think of the
polynomial mapping as taking conjunctions of terms, we can view the LSK step
as a soft disjunction, since the projection links several different conjunctions into
a single concept. Hence, the combination of the polynomial mapping followed by
an LSK step produces a function with a form reminiscent of a disjunctive normal
form.
Alternatively one could perform the LSK step before the polynomial mapping (by
just applying the polynomial mapping to the entries of the Gram matrix obtained
after the LSK step), obtaining a space indexed by tuples of concepts. Here the
function obtained will be reminiscent of a conjunctive normal form. We applied
this approach to the Ionosphere data but obtained no improvement in performance.
We conjecture that the results obtained will depend strongly on the fit of the style
of function with the particular data.
The main drawback of all such approaches is the computational complexity of
performing an eigenvalue decomposition on the kernel matrix. Although the matrix
is smaller than the term by document matrix it is usually no longer sparse. This
makes it difficult to process training sets much larger than a few thousand examples.
We will present in the next section techniques that get round this problem by
evaluating an approximation of the LSK approach.
5. Algorithmic Techniques
All the experiments were performed using the eigenvalue decomposition routine
provided with Numerical Recipes in C [14].
The complete eigen-decomposition of the Kernel matrix is an expensive step, and
where possible one should try to avoid it when working with real world data. More
efficient methods can be developed to obtain or approximate the LSK solution.
We can view the LSK technique as one method of obtaining a low rank approximation
of the kernel matrix. Indeed the projection onto the first k eigenvalues is
the rank k approximation which minimises the norm of the resulting error matrix.
But projection onto the eigensubspaces is just one method of obtaining a low-rank
approximation.
We have also developed an approximation strategy, based on the Gram-Schmidt
decomposition. A similar approach to unsupervised learning is described by Smola
et al. [25].
The projection is built up as the span of a subset of (the projections of) a set of k
training examples. These are selected by performing a Gram-Schmidt orthogonalisation
of the training vectors in the feature space. Hence, once a vector is selected
the remaining training points are transformed to become orthogonal to it. The next
vector selected is the one with the largest residual norm. The whole transformation
is performed in the feature space using the kernel mapping to represent the vectors
obtained. We refer to this method as the GSK (Gram-Schmidt kernel) algorithm. Table 1 gives complete
pseudo-code for extracting the features in the kernel defined feature space. As with
the LSK method it is parametrised by the number of dimensions T selected.
Table 1. The GSK Algorithm

Given a kernel k, training set d_1, ..., d_m and number of dimensions T:
  for i = 1 to m do
    norm2[i] = k(d_i, d_i);
  for j = 1 to T do
    i_j = argmax_i norm2[i]; size[j] = sqrt(norm2[i_j]);
    for i = 1 to m do
      feat[i, j] = ( k(d_i, d_{i_j}) - sum_{t<j} feat[i, t] * feat[i_j, t] ) / size[j];
      norm2[i] = norm2[i] - feat[i, j]^2;
  return feat[i, j] as the j-th feature of input d_i;
To classify a new example x:
  for j = 1 to T do
    newfeat[j] = ( k(x, d_{i_j}) - sum_{t<j} newfeat[t] * feat[i_j, t] ) / size[j];
  return newfeat[j] as the j-th feature of the example x
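A runnable counterpart of Table 1 in NumPy is sketched below (consistent with the description above, but not the authors' implementation; it assumes T does not exceed the rank of the kernel matrix):

    import numpy as np

    def gsk_features(K, T):
        # K: m x m Gram matrix of the training set; T: number of dimensions.
        # Greedily pick the point with the largest residual norm and
        # orthogonalise all points against it in feature space.
        m = K.shape[0]
        feat = np.zeros((m, T))
        norm2 = np.diag(K).astype(float)
        index, size = [], []
        for j in range(T):
            i_star = int(np.argmax(norm2))
            s = np.sqrt(norm2[i_star])
            index.append(i_star); size.append(s)
            feat[:, j] = (K[:, i_star] - feat @ feat[i_star, :]) / s
            norm2 -= feat[:, j] ** 2
        return feat, index, size

    def gsk_project(k_new, feat, index, size):
        # k_new[i] = k(x, d_i) for a new example x; returns its T features.
        new = np.zeros(len(index))
        for j, i_star in enumerate(index):
            new[j] = (k_new[i_star] - new[:j] @ feat[i_star, :j]) / size[j]
        return new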
5.1. Implicit Dimensionality Reduction
An interesting solution to the problem of approximating the Latent Semantic solution
is possible in the case in which we are not directly interested in the low-rank
matrix, unlike in the information retrieval case, but we only plan to use it as a
kernel in conjunction with an optimization problem of the type:

    minimise  W(α) = q'α + (1/2) α'Hα,

where H is the Hessian, obtained by pre- and post-multiplying the Gram matrix
by the diagonal matrix Y containing the {+1, −1} labels, H = YKY.
Note that H and K have the same eigenvalues, since if Kv = λv then H(Yv) = YKY(Yv) = YKv = λ(Yv), using Y² = I.
It is possible to easily (and cheaply) modify the Gram matrix so as to obtain nearly
the same solution that one would obtain by using a (much more expensive) low
rank approximation.
The minimum of this error function occurs at the point α which satisfies q + Hα = 0.
If the matrix H is replaced by H + λI then the minimum moves to a new point
α̃ which satisfies q + (H + λI)α̃ = 0. Let us consider the expansion of H in its
eigenbasis, H = Σ_i λ_i u_i u_i', and the expansions of α and α̃ in the same basis:
α = Σ_i α_i u_i and α̃ = Σ_i α̃_i u_i.
Substituting into the above formulae and equating coefficients of the i-th eigenvector
gives

    λ_i α_i = −q_i = (λ_i + λ) α̃_i,

implying that

    α̃_i = (λ_i / (λ_i + λ)) α_i.

The fraction in the above equation is a squashing function, approaching zero for
values of λ_i ≪ λ and approaching 1 for λ_i ≫ λ. In the first case α̃_i ≈ 0,
in the second case α̃_i ≈ α_i. The overall effect of this map, if the parameter λ is
chosen carefully in a region of the spectrum where the eigenvalues decrease rapidly,
is to effectively project the solution onto the space spanned by the eigenvectors of
the larger eigenvalues.
From an algorithmic point of view this is much more efficient than explicitly performing
the low-rank approximation by computing the eigenvectors.
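The effect can be verified numerically: replacing H by H + λI rescales each eigencomponent of the unconstrained solution by λ_i/(λ_i + λ). A small NumPy check (added here as an illustration of the unconstrained quadratic case discussed above, with randomly generated data):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 80))
    H = A @ A.T                                   # well-conditioned, symmetric PSD
    q = rng.standard_normal(50)
    lam = 10.0

    alpha = np.linalg.solve(H, -q)                         # q + H alpha = 0
    alpha_t = np.linalg.solve(H + lam * np.eye(50), -q)    # q + (H + lam I) alpha_t = 0

    # Expanding in the eigenbasis of H, each coefficient of alpha is squashed
    # by the factor w_i / (w_i + lam).
    w, U = np.linalg.eigh(H)
    assert np.allclose(U @ ((U.T @ alpha) * (w / (w + lam))), alpha_t)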
This derivation not only provides a cheap approximation algorithm for the latent
semantic kernel. It also highlights an interesting connection between this algorithm
and the 2-norm soft margin algorithm for noise tolerance, that also can be obtained
by adding a diagonal to the kernel matrix [21]. But note that there are several
approximations in this view since for example the SVM solution is a constrained
optimisation, where the α_i's are constrained to be non-negative. In this case the effect
may be very different if the support vectors are nearly orthogonal to the eigenvectors
corresponding to large eigenvalues. The fact that the procedure is distinct from a
standard soft margin approach is borne out in the experiments that are described
in the next section.
6. Experimental Results
We empirically tested the proposed methods both on text and on non-text data,
in order to demonstrate the general applicability of the method, and to test its
effectiveness under different conditions. The results were generally positive, but
in some cases the improvements are not significant or not worth the additional
computation. In other cases there is a significant advantage in using the Latent
Semantic or Gram-Schmidt kernels, and certainly their use never hurts performance.
6.1. Experiments on Text Data
This section describes a series of systematic experiments performed on text data.
We selected two text collections, namely Reuters and Medline that are described
below.
Datasets
Reuters21578 We conducted the experiments on a set of documents containing
stories from Reuters news agency, namely the Reuters data-set. We used Reuters-
21578, the newer version of the corpus. It was compiled by David Lewis in 1987
and is publicly available at
http://www.research.att.com/lewis.
To obtain a training set and test set there exist different splits of the corpus. We
used the Modified Apte ("ModApte") split. The "ModApte" split comprises 9603
training and 3299 test documents. A Reuters category can contain as few as 1 or
as many as 2877 documents in the training set. Similarly a test set category can
have as few as 1 or as many as 1066 relevant documents.
Medline1033 Medline1033 is the second data-set which was used for the experiments.
This dataset comprises 1033 medical documents and queries obtained
from the National Library of Medicine. We focused on query23 and query20. Each of
these two queries contains 39 relevant documents. We selected randomly 90% of
the data for training the classifier and 10% for evaluation, while always having 24
relevant documents in the training set and 15 relevant documents in the test set.
We performed 100 random splits of this data.
Experiments
The Reuters documents were preprocessed. We removed the punctuation and the
words occurring in the stop list, and also applied the Porter stemmer to the words.
We weighted the terms according to a variant of the tfidf scheme, computed from
the term frequency tf, the document frequency df and the total number of documents
m. The documents were normalised to have unit length in the feature space.
We preprocessed the Medline documents by removing stop words and punctuation
and weighted the words according to the variant of tfidf described in the preceding
paragraph. We normalised the documents so that no bias can occur because of the
length of the documents. For evaluation we used the F1 performance measure. It
is given by F1 = 2pr/(p + r), where p is the precision and r is the recall.
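In code this measure is simply (an added convenience function, computed from the counts of true positives, false positives and false negatives):

    def f1_score(tp, fp, fn):
        # F1 = 2 p r / (p + r), with precision p = tp/(tp+fp) and recall r = tp/(tp+fn).
        p = tp / (tp + fp) if tp + fp else 0.0
        r = tp / (tp + fn) if tp + fn else 0.0
        return 2 * p * r / (p + r) if p + r else 0.0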
The first set of experiments was conducted on a subset of 3000 documents of the
Reuters21578 data set. We selected randomly 2000 documents for training and
the remaining 1000 documents were used as a test set. We focused on the top
Figure 1. Generalisation performance of SVM with GSK, LSK and linear kernel for earn.
5 Reuters categories (earn, acq, money-fx, grain, crude). We trained a binary
classifier for each category and evaluated its performance on new documents. We
repeated this process 10 times for each category. We used an SVM with linear
kernel for the baseline experiments. The parameter C that controls the trade off
between error and maximisation of margin was tuned by conducting preliminary
experiments. We chose the optimal value by conducting experiments on ten splits
of one category. We ran an SVM not only in the reduced feature space but also
in a feature space that has full dimension. The value of C that showed the best
results in the full space was selected and used for all further experiments. For the
Medline1033 text corpus we selected the value of C by conducting experiments on
one split of the data. We ran an SVM in a feature space that has full dimension.
The optimal value of C that showed best results was selected. Note that we did not
use that split for further experiments. This choice may not be ideal, but on the
basis of our experimental observations on Reuters, we conclude that this method
gives a near-optimal value of C.
The results of our experiments on Reuters are shown in Figures 1 through 4. Note
that these results are averaged over 10 runs of the algorithm. We started with a
small dimensional feature space. We increased the dimensionality of the feature
space in intervals by extracting more features.
These figures demonstrate that the performance of the LSK method is comparable
to the baseline method. The generalisation performance of an SVM classifier varies
by varying the dimensionality of the semantic space. By increasing the value of
k, the F1 numbers rise, reach a maximum, and then fall to a value equivalent to
the baseline method. However this maximum is not substantially different from
the baseline method. In other words, sometimes we obtain only a modest gain by
incorporating more information into the kernel matrix.
Figure 6 and Figure 7 illustrate the results of experiments conducted on the two
Medline1033 queries. These results are averaged over 100 random runs of the
algorithm. For these experiments we start with a small number of dimensions. The
dimensionality was increased in intervals by extracting more features. The results
Figure 2. Generalisation performance of SVM with GSK, LSK and linear kernel for acq.
Figure 3. Generalisation performance of SVM with GSK, LSK and linear kernel for money-fx.
Figure 4. Generalisation performance of SVM with GSK, LSK and linear kernel for grain.
Figure 5. Generalisation performance of SVM with GSK, LSK and linear kernel for crude.
Figure 6. Generalisation performance of SVM with GSK, LSK and linear kernel for query23.
Figure 7. Generalisation performance of SVM with GSK, LSK and linear kernel for query20.
Table 2. F1 numbers for varying dimensions of the feature space for an SVM classifier with LSK
and an SVM classifier with linear kernel (baseline) for ten Reuters categories

Category    k=100   k=200   k=300   baseline
money-fx    0.62    0.673   0.635   0.6
grain       0.664   0.661   0.67    0.727
crude       0.431   0.558   0.576   0.575
trade       0.568   0.683   0.66    0.657
interest    0.478   0.497   0.5     0.517
ship        0.422   0.544   0.565   0.565
wheat       0.514   0.51    0.556   0.624
micro-avg   0.786   0.815   0.815   0.819
for query23 are very encouraging, showing that the LSK has the potential to give a
substantial improvement over the baseline method. Thus the results (Reuters and
Medline1033) show that in some cases there can be improvements in performance,
while for others there is no significant improvement.
Our results on the Reuters and Medline1033 datasets demonstrate that GSK is a
very effective approximation strategy for LSK. In most cases the results
are approximately the same as for LSK. However it is worth noting that in some cases,
such as in Figure 6, GSK may show a substantial improvement not only over the baseline
method but also over LSK.
Hence the results demonstrate that GSK is a good approximation strategy for
LSK. It can improve the generalisation performance over LSK as is evident from
the results on the Medline data. It can extract informative features that can be
very useful for classification. In some situations, however, GSK only reaches its maximum
performance at a high dimension. This phenomenon may cause practical limitations for large data
sets. We have addressed this issue and developed a generalised GSK algorithm for
text classification.
Furthermore, we conducted another set of experiments to study the behaviour of
an SVM classifier with a semantic kernel and an SVM classifier with a linear kernel
in a scenario where a classifier is learnt using a small training set. We selected
randomly 5% of the training data (9603 documents). We focused on the top 10
categories (earn, 144), (acq, 85), (money-fx, 29), (grain, 18), (crude, 16), (trade,
28), (interest, 19), (ship, 12), (wheat, 8), (corn, 6). Note that the number of relevant
documents is shown next to the name of each category. A binary classifier was learnt
for each category and was evaluated on the full test set of (3299) documents. C
was tuned on one category.
F1 numbers obtained as a result of these experiments are reported in Table 2.
Micro-averaged F1 numbers are also given. We set the value of k to 100, 200 and 300.
It is to be noted that there is a gain for some categories, but a loss in
performance for others. It is worth noting that an SVM classifier trained with
a semantic kernel can perform approximately the same as the baseline method
even with 200 dimensions. These results demonstrate that the proposed method
is capable of performing reasonably well in environments with very few labelled
documents.
6.2. Experiments on Non-text Data
Figure 8. Generalization error for polynomial kernels of degrees 2, 3, 4 on Ionosphere data (averaged over 100 random splits) as a function of the dimension of the feature space.
Now we present the experiments conducted on the non-text Ionosphere data set
from the UCI repository. Ionosphere contains 34 features and 351 points. We
measured the gain of the LSK by comparing its performance with an SVM
with a polynomial kernel.
The parameter C was set by conducting preliminary experiments on one split of the
data, keeping the dimensionality of the space full. We tried a range of values, and
the value that demonstrated minimum error was chosen. This value was
used for all splits and for the reduced feature space. Note that the split of the data
used for tuning the parameter C was not used for further experiments.
The results are shown in Figure 8. These results are averaged over 100 runs. We
begin experiments by setting k to a small value. We increased the dimensionality of
the space in intervals. The results show that test error was greatly reduced when the
dimension of the feature space was reduced. The curves also demonstrate that the
classification error of an SVM classifier with the semantic kernel reaches a minimum,
then goes through some peaks and valleys before settling at a level equivalent to the baseline
method. These results demonstrate that the method is general enough that
it can be applied to domains other than text. It has a potential to improve the
performance of a SVM classifier by reducing the dimension. However in some cases
it can show no gain and may not be successful in reducing the dimension.
7. A Generalised Version of GSK algorithm for Text Classification
In this section we present a generalised version of the GSK algorithm. This algorithm
arose as a result of experiments reported in Section 6. Some other preliminary
experiments also contributed to the development of the algorithm.
The GSK algorithm presented in the previous section extracts features relative to
the documents but irrespective of their relevance to the category. In other words,
features are not computed with respect to the label of a document. Generally the
category distribution is very skewed for text corpora. This establishes a need to
bias the feature computation towards the relevant documents. In other words, if we
can introduce some bias in this feature extraction process, the computed features
can be more useful and informative for text classification.
The main goal of developing the generalised version of the GSK algorithm is to
extract fewer but more informative features, so that a classifier fed with them can
achieve high effectiveness with a low number of dimensions.
To achieve the goal described in the preceding paragraph we propose the algorithm
shown in Figure 9. GSK is an iterative procedure that greedily selects a document
at each iteration and extracts features. At each iteration the criterion for selecting
a document is the maximum residual norm. The generalised version of GSK
algorithm focuses on relevant documents by placing more weight on the norm of
relevant documents.
The algorithm takes a set of documents and transforms them into a new (reduced)
feature space. An underlying kernel function, a number T and a
bias B are also fed to the algorithm as input. The number T specifies the dimension of the
reduced feature space, while B gives the degree to which the feature extraction is
biased towards relevant documents.
The algorithm starts by measuring the norm of each document. It concentrates on
relevant documents by placing more weight on the norm of these documents. As a
next step a document with a maximum norm is chosen and features are extracted
relative to this document. This process is repeated T times. Finally the documents
are transformed into a new T dimensional space. The dimension of the new space
is much smaller than the original feature space. Note that when there is enough
positive data available for training, equal weights can be given both to relevant and
irrelevant documents.
The generalised version of the GSK algorithm provides a practical solution to the
problem that may occur with the GSK algorithm. The latter may only show good
Require: A kernel k, training set {(d_1, y_1), ..., (d_n, y_n)}, number of dimensions T, bias B
  for i = 1 to n do
    norm2[i] = k(d_i, d_i);
  end for
  for j = 1 to T do
    for i = 1 to n do
      if (y_i == +1) then wnorm2[i] = B * norm2[i];
      else wnorm2[i] = norm2[i];
    end for
    i_j = argmax_i wnorm2[i]; size[j] = sqrt(norm2[i_j]);
    for i = 1 to n do
      feat[i, j] = ( k(d_i, d_{i_j}) - sum_{t<j} feat[i, t] * feat[i_j, t] ) / size[j];
      norm2[i] = norm2[i] - feat[i, j]^2;
    end for
  end for
  return feat[i, j] as the j-th feature of input d_i;
To classify a new example d:
  for j = 1 to T do
    newfeat[j] = ( k(d, d_{i_j}) - sum_{t<j} newfeat[t] * feat[i_j, t] ) / size[j];
  end for
  return newfeat[j] as the j-th feature of the example d;

Figure 9. A Generalised Version of the GSK Algorithm
generalisation at high dimension when there is not enough training data. In that
scenario the generalised version of the GSK-algorithm shows similar performance at
lower dimensions. The complete pseudo-code of the algorithm is given in Figure 9.
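In terms of the kernel Gram-Schmidt sketch given after Table 1, the only change required is in the selection step, where the residual norms of relevant documents are weighted by the bias B before choosing the next pivot (again an illustrative sketch, not the authors' code):

    import numpy as np

    def biased_pivot(norm2, y, B):
        # Weight the residual norms of relevant documents (y == +1) by B >= 1
        # and return the index of the document to select next.
        weighted = np.where(y == 1, B * norm2, norm2)
        return int(np.argmax(weighted))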
8. Experiments with Generalised GSK-algorithm
We employed the generalised GSK algorithm to transform the Reuters documents
into a new reduced feature space. We evaluated the proposed method by
conducting experiments on the full Reuters data set. We used the ModeApte version
and performed experiments on 90 categories that contain at least one relevant
document both in the training set and test set. In order to transform documents
into a new space, two free parameters T (dimension of reduced space) and B (bias)
need to be tuned. We analysed the generalisation performance of an SVM classifier
with respect to B by conducting a set of experiments on 3 Reuters categories. The
results of these experiments are shown in Table 3. For this set of experiments we
set the dimensionality of space (T ) to 500 and varied B. The results demonstrate
that the extraction of features in a biased environment can be more informative
and useful when there is insufficient training data. On the basis of these experiments
we selected an optimal value of B for our next set of experiments. Note that
we selected the optimal value of C by conducting preliminary experiments on one
Reuters category.
We set the value of T to 500 and 1000. The results of this set of experiments are given
in Table 4. We have given F1 values for the 500 and 1000 dimensional spaces. Micro-averaged
F1 values are also shown in the table. In order to learn an SVM classifier
we used SVMlight [8] for the experiments described in this section.
These results show that the generalised GSK algorithm can be viewed as a substantial
dimensionality reduction technique. Our observation is that the proposed
method shows results that are comparable to the baseline method at a dimensionality
of 500. Note that for the baseline method we employed an SVM with a linear
kernel. It is to be noted that beyond 500 dimensions there is only a slow improvement in
the generalisation performance of the SVM. The micro-averaged F1 value for an SVM
with generalised GSK is 0.822 (at 500 dimensions), whereas the micro-averaged
F1 value for an SVM with linear kernel is 0.854. These results show that the
performance of the proposed technique is comparable to the baseline method.
These results show that the generalised GSK algorithm is a practical approximation
of LSK. If the learning algorithm is provided with enough positive training data,
there is no need to bias the feature extraction process. However, when the learning
algorithm does not have enough positive training data, an SVM may only show
good performance at high dimensionality, leading to practical limitations. The
introduction of bias towards relevant documents overcomes this problem,
hence making the technique applicable to large data sets.
Table 3. F1 numbers for acq, money-fx and wheat for different values of B

B      acq     money-fx   wheat
1.0    0.922   0.569      0.707
1.1    0.864   0.695      0.855
1.2    0.864   0.756      0.846
2.0    0.864   0.748      0.846
2.2    0.864   0.752      0.855
2.4    0.864   0.756      0.846
2.6    0.864   0.748      0.846
2.8    0.864   0.752      0.846
6.0    0.864   0.752      0.857
10.0   -       -          -
Table 4. F1 numbers for top-ten Reuters categories

Category    T=500   T=1000   baseline
acq         0.923   0.934    0.948
money-fx    0.755   0.754    0.775
grain       0.894   0.902    0.93
crude       0.872   0.883    0.880
trade       0.733   0.763    0.761
interest    0.627   0.654    0.691
ship        0.743   0.747    0.797
wheat       0.864   0.851    0.87
corn        0.857   0.869    0.895
micro-avg   0.822   -        0.854
9. Conclusion
The paper has studied the problem of introducing semantic information into a
kernel based learning method. The technique was inspired by an approach known
as Latent Semantic Indexing borrowed from Information Retrieval. LSI projects the
data into a subspace determined by choosing the first k singular vectors of a singular
value decomposition. We have shown that we can obtain the same inner products
as those derived from this projection by performing an equivalent projection onto
the first k eigenvectors of the kernel matrix. Hence, it is possible to apply the same
technique to any kernel defined feature space whatever its original dimensionality.
We refer to the derived kernel as the Latent Semantic Kernel (LSK).
We have experimentally demonstrated the efficacy of the approach on both text and
non-text data. For some datasets substantial improvements in performance were
obtained using the method, while for others little or no effect was observed. As the
eigenvalue decomposition of a matrix is relatively expensive to compute, we have
also considered an iterative approximation method that is equivalent to projecting
onto the first dimensions derived from a Gram-Schmidt orthogonalisation of the data.
Again we can perform this projection efficiently in any kernel defined feature space
and experiments show that for some datasets the so-called Gram-Schmidt Kernel
(GSK) is more effective than the LSK method.
Despite this success, for large imbalanced datasets such as those encountered in text
classification tasks the number of dimensions required to obtain good performance
grows quite large before relevant features are drawn from the small number of
positive documents. This problem is addressed by biasing the GSK feature selection
procedure in favour of positive documents hence greatly reducing the number of
dimensions required to create an effective feature space.
The methods described in the paper all have a similar flavour and have all demonstrated
impressive performance on some datasets. The question of what it is about
a dataset that makes the different semantic focusing methods effective is not fully
understood and remains the subject of ongoing research.
Acknowledgements
The authors would like to thank Thorsten Joachims and Chris Watkins for useful
discussions. Our work was supported by EPSRC grant number GR/N08575 and
by the European Commission through the ESPRIT Working Group in Neural and
Computational Learning, NeuroCOLT2, Nr. 27150, and the IST Project
'Kernel methods for images and text', KerMIT, Nr. IST-2000-25431.
--R
Theoretical foundations of the potential function method in pattern recognition learning.
A training algorithm for optimal margin classifiers.
An Introduction to Support Vector Machines.
Indexing by latent semantic analysis.
Inductive learning algorithms and representations for text categorization.
Automatic cross-language retrieval using latent semantic indexing
Large margin rank boundaries for ordinal regression.
Making large-scale SVM learning practical
Approximate dimension equalization in vector-based information retrieval
Text categorization with support vector machines.
Five papers on wordnet.
Gaussian processes and SVM: Mean field and leave-one-out
Numerical recipes in C: the art of scientific computing.
A vector space model for information retrieval.
Ridge regression learning algorithm in dual variables.
Kernel PCA pattern reconstruction via approximate pre-images
Kernel principal component analysis.
SV estimation of a distri- bution's support
Structural risk minimization over data-dependent hierarchies
Margin distribution and soft margin.
Experiments in multilingual information retrieval using the spi-der system
Support vectors machines based on a semantic kernel for text categorization.
A tutorial on support vector regression.
Sparse kernel feature analysis.
Statistical Learning Theory.
Generalized vector space model in information retrieval.
--TR
A training algorithm for optimal margin classifiers
The nature of statistical learning theory
Experiments in multilingual information retrieval using the SPIDER system
Generalized vector spaces model in information retrieval
Inductive learning algorithms and representations for text categorization
Making large-scale support vector machine learning practical
Kernel principal component analysis
An introduction to support Vector Machines
A vector space model for automatic indexing
Text Categorization with Support Vector Machines. How to Represent Texts in Input Space?
Text Categorization with Suport Vector Machines
Ridge Regression Learning Algorithm in Dual Variables
Approximate Dimension Equalization in Vector-based Information Retrieval
Support Vector Machines Based on a Semantic Kernel for Text Categorization
--CTR
Qiang Sun , Gerald DeJong, Explanation-Augmented SVM: an approach to incorporating domain knowledge into SVM learning, Proceedings of the 22nd international conference on Machine learning, p.864-871, August 07-11, 2005, Bonn, Germany
Yaoyong Li , John Shawe-Taylor, Using KCCA for Japanese---English cross-language information retrieval and document classification, Journal of Intelligent Information Systems, v.27 n.2, p.117-133, September 2006
Yaoyong Li , John Shawe-Taylor, Advanced learning algorithms for cross-language patent retrieval and classification, Information Processing and Management: an International Journal, v.43 n.5, p.1183-1199, September, 2007
Yonghong Tian , Tiejun Huang , Wen Gao, Latent linkage semantic kernels for collective classification of link data, Journal of Intelligent Information Systems, v.26 n.3, p.269-301, May 2006
Mehran Sahami , Timothy D. Heilman, A web-based kernel function for measuring the similarity of short text snippets, Proceedings of the 15th international conference on World Wide Web, May 23-26, 2006, Edinburgh, Scotland
Haixian Wang , Zilan Hu , Yu'e Zhao, An efficient algorithm for generalized discriminant analysis using incomplete Cholesky decomposition, Pattern Recognition Letters, v.28 n.2, p.254-259, January, 2007
Kevyn Collins-Thompson , Jamie Callan, Query expansion using random walk models, Proceedings of the 14th ACM international conference on Information and knowledge management, October 31-November 05, 2005, Bremen, Germany
Serhiy Kosinov , Stephane Marchand-Maillet , Igor Kozintsev , Carole Dulong , Thierry Pun, Dual diffusion model of spreading activation for content-based image retrieval, Proceedings of the 8th ACM international workshop on Multimedia information retrieval, October 26-27, 2006, Santa Barbara, California, USA
Kristen Grauman , Trevor Darrell, The Pyramid Match Kernel: Efficient Learning with Sets of Features, The Journal of Machine Learning Research, 8, p.725-760, 5/1/2007
Vikramjit Mitra , Chia-Jiu Wang , Satarupa Banerjee, Text classification: A least square support vector machine approach, Applied Soft Computing, v.7 n.3, p.908-914, June, 2007
Francis R. Bach , Michael I. Jordan, Kernel independent component analysis, The Journal of Machine Learning Research, 3, p.1-48, 3/1/2003 | gram-schmidt kernels;text categorization;latent semantic indexing;kernel methods;latent semantic kernels |
607623 | Hidden Markov Models for Text Categorization in Multi-Page Documents. | In the traditional setting, text categorization is formulated as a concept learning problem where each instance is a single isolated document. However, this perspective is not appropriate in the case of many digital libraries that offer as contents scanned and optically read books or magazines. In this paper, we propose a more general formulation of text categorization, allowing documents to be organized as sequences of pages. We introduce a novel hybrid system specifically designed for multi-page text documents. The architecture relies on hidden Markov models whose emissions are bag-of-words resulting from a multinomial word event model, as in the generative portion of the Naive Bayes classifier. The rationale behind our proposal is that taking into account contextual information provided by the whole page sequence can help disambiguation and improves single page classification accuracy. Our results on two datasets of scanned journals from the Making of America collection confirm the importance of using whole page sequences. The empirical evaluation indicates that the error rate (as obtained by running the Naive Bayes classifier on isolated pages) can be significantly reduced if contextual information is incorporated. | Figure
1. Bayesian network for the Naive Bayes classifier.
2.2. Hidden Markov models
HMMs have been introduced several years ago as a tool for probabilistic sequence modeling.
The interest in this area developed particularly in the Seventies, within the speech recognition
research community (Rabiner, 1989). During the last years a large number of variants
and improvements over the standard HMM have been proposed and applied. Undoubt-
edly, Markovian models are now regarded as one of the most significant state-of-the-art
approaches for sequence learning. Besides several applications in pattern recognition and
molecular biology, HMMs have been also applied to text related tasks, including natural
language modeling (Charniak, 1993) and, more recently, information retrieval and extraction
(Freitag and McCallum, 2000; McCallum et al., 2000). The recent view of the HMM as
a particular case of Bayesian networks (Bengio and Frasconi, 1995; Lucke, 1995; Smyth et
al., 1997) has helped their theoretical understanding and the ability to conceive extensions
to the standard model in a sound and formally elegant framework.
An HMM describes two related discrete-time stochastic processes. The first process pertains
to hidden discrete state variables, denoted Xt , forming a first-order Markov chain
and taking realizations on a finite set {x1,.,xN }. The second process pertains to observed
variables or emissions, denoted Dt . Starting from a given state at time 0 (or given
an initial state distribution P(X0)) the model probabilistically transitions to a new state
X1 and correspondingly emits observation D1. The process is repeated recursively until
an end state is reached. Note that, as this form of computation may suggest, HMMs are
closely related to stochastic regular grammars (Charniak, 1993). The Markov property
prescribes that X_{t+1} is conditionally independent of X_1, . . . , X_{t−1} given X_t. Furthermore,
it is assumed that Dt is independent of the rest given Xt . These two conditional
independence assumptions are graphically depicted using the Bayesian network of
figure 2. As a result, an HMM is fully specified by the following conditional probability
distributions:

    P(X_t | X_{t−1})   (transition distribution)                          (2)
    P(D_t | X_t)       (emission distribution)

Since the process is stationary, the transition distribution can be represented as a square
stochastic matrix whose entries are the transition probabilities a_ij = P(X_t = x_i | X_{t−1} = x_j),
abbreviated as P(x_i | x_j) in the following. In the classic literature, emissions are restricted
Figure 2. Bayesian networks for standard HMMs.
to be symbols in a finite alphabet or multivariate continuous variables (Rabiner, 1989). As
explained in the next section, our model allows emissions to be bag-of-words.
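As a purely illustrative sketch of these two stochastic processes with bag-of-words emissions (the states, vocabulary and probabilities below are invented for illustration only), a page sequence can be sampled as follows:

    import numpy as np

    rng = np.random.default_rng(1)
    states = ["title", "regular", "index"]
    A = np.array([[0.0, 1.0, 0.0],          # transition matrix P(x_t | x_{t-1})
                  [0.0, 0.8, 0.2],
                  [0.0, 0.0, 1.0]])
    vocab = ["preface", "kernel", "model", "page", "see"]
    E = np.array([[0.60, 0.10, 0.10, 0.10, 0.10],   # word probabilities per state
                  [0.05, 0.40, 0.40, 0.10, 0.05],
                  [0.05, 0.05, 0.05, 0.25, 0.60]])

    def sample_document(n_pages=4, words_per_page=6):
        x = 0                                    # start from the "title" state
        pages = []
        for _ in range(n_pages):
            counts = rng.multinomial(words_per_page, E[x])
            pages.append((states[x], {w: c for w, c in zip(vocab, counts) if c}))
            x = rng.choice(len(states), p=A[x])  # move to the next hidden state
        return pages

    print(sample_document())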
3. The multi-page classifier
We now turn to the description of our classifier for multi-page documents. In this case the
categorization task consists of learning from examples a function that maps the whole document
sequence d_1, . . . , d_T into a corresponding sequence of page categories, c_1, . . . , c_T.
This section presents the architecture and the associated algorithms for grammar extraction,
training, and classification.
3.1. Architecture
The system is based on an HMM whose emissions are associated with entire pages of the
document. Thus, the realizations of the observation Dt are bag-of-words representing the
text in the t-th page of the document. HMM states are related to page categories by a
deterministic function, here denoted τ, that maps state realizations into page categories.
We assume that τ
This enriches the expressive power of the model, allowing different transition behaviors for
pages of the same class, depending on where the page is actually encountered within the
sequence. However, if the page contents depends on the category but not on the context of
the category within the sequence, multiple states may introduce too many parameters and
it may be convenient to assume that

    P(D_t | x_i) = P(D_t | x_j)   whenever τ(x_i) = τ(x_j).                 (3)
This constrains emission parameters to be the same for a given page category, a form of
parameters sharing that may help to reduce overfitting. The emission distribution is modeled
by assuming conditional word independence given the class, like in Eq. (1):

    P(d_t | c_k) = ∏_{j=1}^{|d_t|} P(w_j | c_k),                            (4)

where w_j denotes the j-th word occurring in page d_t.
Therefore, the architecture can be graphically described as the merging of the Bayesian
networks for HMMs and Naive Bayes, as shown in figure 3. We remark that the state (and
Figure 3. Bayesian network describing the architecture of the sequential classifier.
hence the category) at page t depends not only on the contents of that page, but also on
the contents of all other pages in the document, summarized into the HMM states. This
probabilistic dependency implements the mechanism for taking contextual information into
account.
The algorithms used in this paper are derived from the literature on Markov models
(Rabiner, 1989), inference and learning in Bayesian networks (Pearl, 1988; Heckerman,
1997; Jensen, 1996) and classification with Naive Bayes (Lewis and Gale, 1994; Kalt,
1996). In the following we give details about the integration of all these methods.
3.2. Induction of HMM topology
The structure or topology of an HMM is a representation of the allowable transitions
between hidden states. More precisely, the topology is described by a directed graph whose
vertices are state realizations {x_1, . . . , x_N}, and whose edges are the pairs (x_i, x_j) corresponding to allowed (nonzero probability) transitions.
An HMM is said to be ergodic if its transition graph is fully-connected.
However, in almost all interesting application domains, less connected structures are better
suited for capturing the observed properties of the sequences being modeled, since they
convey domain prior knowledge. Thus, starting from the right structure is an important
problem in practical Hidden Markov modeling. As an example, consider figure 4, showing
a (very simplified) graph that describes transitions between the parts of a hypothetical set
of books. Possible state realizations are {start, title, dedication, preface, toc, regular, index,
end} (note that in this simplified example τ is a one-to-one mapping).
Figure 4. Example of HMM transition graph.
While a structure of this kind could be hand-crafted by a domain expert, it may be more
advantageous to learn it automatically from data. We now briefly describe the solution
adopted to automatically infer HMM transition graphs from sample multi-page documents.
Let us assume that all the pages of the available training documents are labeled with the
class they belong to. One can then imagine to take advantage of the observed labels to
search for an effective structure in the space of HMMs topologies. Our approach is based
on the application of an algorithm for data-driven model induction adapted from previous
works on construction of HMMs of text phrases for information extraction (McCallum et al.,
2000). The algorithm starts by building a structure that can only explain the available
training sequences (a maximally specific model). This initial structure has as many paths
(from the initial to the final state) as there are training sequences. Every path is associated
with one sequence of pages, i.e. a distinct state is created for every page in the training set.
Each state x is labeled by τ(x), the category of the corresponding page in the document.
Note that, unlike the example shown in figure 4, several states are generated for the same
category. The algorithm then iteratively applies merging heuristics that collapse states so
as to augment generalization capabilities over unseen sequences. The first heuristic, called
neighbor-merging, collapses two states x and x' if they are neighbors in the graph and
have the same label. The second heuristic, called V-merging, collapses two states x and x' if
they share a transition from or to a common state, thus reducing the
branching factor of the structure.
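A rough sketch of the induction procedure is given below: it builds the maximally specific model from labeled page sequences and then applies the neighbor-merging heuristic. The data layout and function names are assumptions; the V-merging heuristic and the transition-count bookkeeping are omitted for brevity.

```python
def induce_topology(training_label_sequences):
    """Build a maximally specific HMM topology, then apply neighbor-merging.

    training_label_sequences: list of per-document category-label sequences.
    Returns the surviving states (dict state -> label) and the transition set.
    """
    label_of = {}
    transitions = set()
    START, END = "start", "end"
    next_id = 0
    for seq in training_label_sequences:
        prev = START
        for label in seq:                 # one fresh state per training page
            s = next_id
            next_id += 1
            label_of[s] = label
            transitions.add((prev, s))
            prev = s
        transitions.add((prev, END))

    # neighbor-merging: collapse linked states that carry the same label
    merged = True
    while merged:
        merged = False
        for (a, b) in list(transitions):
            if a in label_of and b in label_of and a != b and label_of[a] == label_of[b]:
                # redirect every occurrence of b to a (this may create a self-loop on a)
                transitions = {(a if x == b else x, a if y == b else y)
                               for (x, y) in transitions}
                del label_of[b]
                merged = True
                break
    return label_of, transitions
```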
3.3. Inference and learning
Given the HMM topology extracted by the algorithm described above, the learning problem
consists of determining transition and emission parameters. One important distinction that
needs to be made when training Bayesian networks is whether or not all the variables are
observed. Assuming complete data (all variables observed), maximum likelihood estimation
of the parameters could be solved using a one-step algorithm that collects sufficient statistics
for each parameter (Heckerman, 1997). In our case, data are complete if and only if the
following two conditions are met:
1. there is a one-to-one mapping between HMM states and page categories (i.e. each state is
associated with exactly one category and each category with exactly one state); and
2. the category is known for each page in the training documents, i.e. the dataset consists
of sequences of pairs ({d1, c1}, ..., {dT, cT}), where ct is the (known) category of page t
and T is the number of pages in the document.
Under these assumptions, estimation of transition parameters is straightforward and can be
accomplished as follows:
P(ci | cj) = N(ci, cj) / Σ_l N(cl, cj),    (5)
where N(ci, cj) is the number of times a page of class ci follows a page of class cj in the
training set. Similarly, estimation of emission parameters in this case would be accomplished
exactly like in the case of the Naive Bayes classifier (see, e.g. Mitchell (1997)):
P(w | ck) = (1/|V| + N(w, ck)) / (1 + Σ_{w'} N(w', ck)),    (6)
where N(w, ck) is the number of occurrences of word w in pages of class ck and |V| is the
vocabulary size (1/|V| corresponds to a Dirichlet prior over the parameters (Heckerman,
1997) and plays a regularization role for words which are very rare within a class).
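As an illustration, the complete-data estimates in the spirit of Eqs. (5) and (6) can be obtained by simple counting. The sketch below assumes documents are given as lists of (bag-of-words, category) pairs; all names and the data layout are illustrative, not the authors' code.

```python
from collections import defaultdict

def estimate_parameters(labeled_docs, vocabulary):
    """Complete-data estimates of transition and emission parameters.

    labeled_docs: list of documents, each a list of (bag_of_words, category) pairs,
                  where bag_of_words maps word -> count for one page.
    Returns P(c_i | c_j) and the Dirichlet-smoothed P(w | c_k) of Eq. (6).
    """
    trans = defaultdict(lambda: defaultdict(int))   # trans[c_j][c_i] = N(c_i, c_j)
    emit = defaultdict(lambda: defaultdict(int))    # emit[c_k][w]   = N(w, c_k)
    for doc in labeled_docs:
        for t, (bag, c) in enumerate(doc):
            if t > 0:
                trans[doc[t - 1][1]][c] += 1        # page of class c follows previous class
            for w, n in bag.items():
                emit[c][w] += n

    V = len(vocabulary)
    p_trans = {cj: {ci: n / sum(nxt.values()) for ci, n in nxt.items()}
               for cj, nxt in trans.items()}
    p_emit = {ck: {w: (1.0 / V + counts.get(w, 0)) / (1 + sum(counts.values()))
                   for w in vocabulary}
              for ck, counts in emit.items()}
    return p_trans, p_emit
```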
Conditions 1 and 2 above, however, are normally not satisfied. First, in order to model
more accurately different contexts in which a category may occur, it may be convenient to
have multiple distinct HMM states for the same page category. This implies that page labels
do not determine a unique state path. Second, labeling pages in the training set is a time
consuming process that needs to be performed by hand and it may be important to use also
unlabeled documents for training (Joachims, 1999; Nigam et al., 2000). This means that
label ct may not be available for some t. If assumption 2 is satisfied but assumption 1 is not,
we can derive the following approximated estimation formula for the transition parameters:
P(xi | xj) = N(xi, xj) / Σ_l N(xl, xj),    (7)
where N(xi, xj) counts how many times state xi follows xj during the state merge procedure
described in Section 3.2. However, in general, the presence of hidden variables requires an
iterative maximum likelihood estimation algorithm, such as gradient ascent or expectation-maximization
(EM). Our implementation uses the EM algorithm, originally formulated
in Dempster et al. (1977) and usable for any Bayesian network with local conditional
probability distributions belonging to the exponential family (Heckerman, 1997). Here the
EM algorithm essentially reduces to the Baum-Welch form (Rabiner, 1989) with the only
modification that some evidence is entered into state variables. Since multiple states are
associated with a category and even for labeled documents only the page category is known,
state evidence takes the form of findings (Jensen, 1996). State evidence is taken into account
in the E-step by changing forward propagation so that, at page t, only the states whose label
matches the observed category ct retain nonzero forward probability; the remaining entries of
the forward variable (the alpha table of the Baum-Welch algorithm) are clamped to zero. The
emission probability P(dt | xi) is obtained from Eq. (4), using the class ck associated with state xi.
The M-step is performed in the standard way for transition parameters, by replacing counts
in Eq. (5) with their expectations given all the observed variables. Emission probabilities
are also estimated using expected word counts. If parameters are shared as indicated in
Eq. (3), these counts should be summed over states having the same label. Thus, in the case
of incomplete data, Eq. (6) is replaced by its expected-count analogue: the counts N(w, ck) of
word w in pages of class ck are replaced by expected counts in which each occurrence of w at
page t is weighted by P(Xt = xi | d1, ..., dT), the probability of being in a state xi labeled ck at
page t given the observed sequence of pages d1 ... dT, summed over the states labeled ck, over
the pages t of the p-th document, and over the S training sequences. Readers familiar with HMMs
should recognize that P(Xt = xi | d1, ..., dT) can be computed by the Baum-Welch procedure
during the E-step. The E- and M-steps are iterated until a
local maximum of the (incomplete) data likelihood is reached.
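As an illustration of how state evidence can be clamped during the E-step, the sketch below modifies a scaled forward pass so that states whose label disagrees with an observed page category receive zero mass. It is a simplified stand-in for the Baum-Welch E-step, not the paper's exact implementation; states are assumed to be indexed 0..n-1.

```python
import math

def constrained_forward(page_logliks, observed_labels, label_of, p_trans, p_init):
    """Scaled forward pass with category evidence clamped on the state variables.

    page_logliks[t][i] : log P(d_t | state i), e.g. computed as in Eq. (4)
    observed_labels[t] : known category of page t, or None if the page is unlabeled
    label_of[i]        : category associated with state i (states indexed 0..n-1)
    p_trans[j][i]      : P(state i | state j);  p_init[i] : P(state i) at t = 0
    """
    n = len(label_of)
    T = len(page_logliks)
    alpha = [[0.0] * n for _ in range(T)]
    for t in range(T):
        for i in range(n):
            # clamp evidence: states whose label disagrees with c_t get zero mass
            if observed_labels[t] is not None and label_of[i] != observed_labels[t]:
                continue
            if t == 0:
                prior = p_init.get(i, 0.0)
            else:
                prior = sum(alpha[t - 1][j] * p_trans.get(j, {}).get(i, 0.0)
                            for j in range(n))
            alpha[t][i] = prior * math.exp(page_logliks[t][i])
        z = sum(alpha[t]) or 1.0          # rescale to limit numerical underflow
        alpha[t] = [a / z for a in alpha[t]]
    return alpha
```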
Note that if page categories are observed, it is convenient to use the estimates computed
with Eq. (7) as a starting point, rather than using random initial parameters. Similarly, an
initial estimate of the emission parameters can be obtained from Eq. (6).
It is interesting to point out a related application of the EM algorithm for learning from
labeled and unlabeled documents (Nigam et al., 2000). In that paper, the only concern was
to allow the learner to take advantage of unlabeled documents in the training set. As a major
difference, the method in Nigam et al. (2000) assumes flat single-page documents and, if
applied to multi-page documents, would be equivalent to a zero-order Markov model that
cannot take contextual information into account.
3.4. Page classification
Given a document of T pages, classification is performed by first computing the sequence
of states x1, ..., xT that was most likely to have generated the observed sequence of pages,
and then mapping each state to the corresponding category. The most likely state
sequence can be obtained by running an adapted version of Viterbi's algorithm, whose
more general form is the max-propagation algorithm for Bayesian networks described in
Jensen (1996). Briefly, the algorithm computes, for each page t and state xi, the probability
of the most likely state path that ends in xi and explains the first t pages; this quantity is
obtained by a dynamic programming recursion, and the optimal state sequence is then
retrieved by backtracking (Eqs. (11)-(15)). Finally, categories are obtained by mapping each
state of the optimal path to its label. By contrast, note that the Naive Bayes classifier would
compute the most likely category of each page in isolation as
ct = argmax_j P(cj) P(dt | cj).    (16)
Comparing Eqs. (11)-(15) to Eq. (16) we see that both classifiers rely on the same emission
model P(dt | cj ) but while Naive Bayes employs the prior class probability to compute its
final prediction, the HMM classifier takes advantage of a dynamic term (in square brackets
in Eq. (12)) that incorporates grammatical constraints.
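A compact sketch of the decoding step is shown below: a standard Viterbi recursion in log space followed by backtracking, with each state of the optimal path mapped to its category label. The interface (page_logliks, label_of, p_trans, p_init) is assumed for illustration and mirrors the forward-pass sketch above.

```python
import math

def viterbi_classify(page_logliks, label_of, p_trans, p_init):
    """Most likely state path for a document, mapped to page categories."""
    NEG_INF = float("-inf")
    n = len(label_of)          # states assumed indexed 0..n-1
    T = len(page_logliks)
    log = lambda p: math.log(p) if p > 0 else NEG_INF

    delta = [[NEG_INF] * n for _ in range(T)]
    back = [[0] * n for _ in range(T)]
    for i in range(n):
        delta[0][i] = log(p_init.get(i, 0.0)) + page_logliks[0][i]
    for t in range(1, T):
        for i in range(n):
            best_j = max(range(n),
                         key=lambda j: delta[t - 1][j] + log(p_trans.get(j, {}).get(i, 0.0)))
            delta[t][i] = (delta[t - 1][best_j]
                           + log(p_trans.get(best_j, {}).get(i, 0.0))
                           + page_logliks[t][i])
            back[t][i] = best_j
    # backtracking from the best final state
    path = [max(range(n), key=lambda i: delta[T - 1][i])]
    for t in range(T - 1, 0, -1):
        path.append(back[t][path[-1]])
    path.reverse()
    return [label_of[i] for i in path]   # map each state to its category
```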
4. Experimental results
In this section, we describe a set of experiments that give empirical evidence of the effectiveness
of the proposed model. The main purpose of our experiments was to make a
comparison between our multi-page classification approach and a traditional isolated page
classification system, like the well known Naive Bayes text classifier. The evaluation has
been conducted over real-world documents that are naturally organized in the form of page
sequences. We used two different datasets associated with two journals in the Making of
America (MOA) collection. MOA is a joint project between the University of Michigan
and Cornell University (see http:/moa.umdl.umich.edu/about.html and Shaw and
Blumson (1997)) for collecting and making available digitized information about the history
and evolution of American society between the XIX and the XX century.
4.1. Datasets
The first dataset is a subset of the journal American Missionary, a sociological magazine
with strong Christian guidelines. The task consists of correctly classifying pages of previously
unseen documents into one of the ten categories described in Table 1. Most of these
categories are related to the topic of the articles, but some are related to the parts of the
journal (i.e. Contents, Receipts, and Advertisements). The dataset we selected contains 95
issues from 1884 to 1893, for a total of 3222 OCR text pages. Special issues and final report
issues (typically November and December issues) have been removed from the dataset as
they contain categories not found in the rest. The ten categories are temporally stable over
the period covered by the dataset.
The second dataset is a subset of Scribners Monthly, a recreational and cultural magazine
printed in the second half of the XIX century. Table 2 describes the categories we have
selected for this classification task. The filtered dataset contains a total of 6035 OCR text
pages, organized into issues ranging from year 1870 to 1875. Although spanning a shorter
temporal interval, the number of pages in this second dataset is larger than in the first one
because issues are about 3-4 times longer.
Table 1. Categories in the American Missionary domain. The ten category descriptions include:
cover and index of surveys; editorial articles; Afro-Americans' surveys; American Indians' surveys;
reports from China missions; articles about female condition; education and childhood; magazine
information; lists of founders; and a category whose contents is mostly graphic, with little text
description.
Table 2. Categories in the Scribners Monthly domain. The categories include: Article (generic
articles); Books and Authors at Home and Abroad (book reviews); and further categories
covering broad cultural news, poems or tales, articles on home living, scientific articles,
articles on fine arts, and news reports.
Category labels for the two datasets were obtained semi-automatically, starting from the
MOA XML files supplied with the documents collections. The assigned categories were then
manually checked. In the case of a page containing the end and the beginning of two articles
belonging to different categories, the page was assigned the category of the ending article.
Each page within a document is represented as a bag-of-words, counting the number of
word occurrences within the page. It is worth remarking that in both datasets, instances
are text documents output by an OCR system. Imperfections of the recognition algorithm
and the presence of images in some pages yield noisy text, containing misspelled or
nonexistent words, and trash characters (see Bicknese (1998) for a report of OCR accuracy
in the MOA digital library). Although these errors may negatively affect the learning process
and subsequent results in the evaluation phase, we made no attempts to correct and filter out
misspelled words, except for the feature selection process described in Section 4.3. However,
since OCR extracted documents preserve the text layout found in the original image, it was
necessary to rejoin word fragments that had been hyphenated due to line breaking.
4.2. Grammar induction
In the case of completely labeled documents, it is possible to run the structure learning
algorithm presented in Section 3.2. In figure 5 we show an example of induced HMM
topology for the journal The American Missionary. This structure was extracted using 10
issues (year 1884) as a training set. Each vertex in the transition graph is associated with one
HMM state and is labeled with the corresponding category index (see Table 1). Edges are
labeled with the transition probability from source to target state, estimated in this case by
counting state transitions during the state merging procedure (see Eq. (7)). These values are
also used as initial estimates of P(xi | xj) and subsequently refined by the EM algorithm. The
associated stochastic grammar implies that valid sequences must start with the index page
(class 1), followed by a page of general communications (class 8). Next state is associated
with a page of an editorial article (2). Self transition here has a value of 0.91, meaning that
with high probability the next page will belong to the editorial too. With lower probability
(0.07) the next page is one of The South survey (3) or, with probability 0.008, The Indians (4)
or Bureau of Women's work (6).
In figure 6 we show one example of induced HMM topology for journal Scribners
Monthly, obtained from 12 training issues (year 1871). Although issues of Scribners Monthly
are longer and the number of categories is comparable to those in the American Missionary,
the extracted transition diagram in figure 6 is simpler than the one in figure 5. This reflects
less variability in the sequential organization of articles in Scribners Monthly. Note that
category 7 (Home and Society) is rare and never occurs in 1871.
4.3. Feature selection
Text pages were first preprocessed with common filtering algorithms including stemming
and stop words removal. Still, the bag-of-words representation of pages leads to a very
high-dimensional feature space that can be responsible for overfitting in conjunction with algorithms
based on generative probabilistic models. Feature selection is a technique for
limiting overfitting by removing non-informative words from documents. In our experi-
ments, we performed feature selection using information gain (Yang and Pedersen, 1997).
This criterion is often employed in different machine learning contexts. It measures the
average number of bits of information about the category that are gained by including a
word in a document. For each dictionary term w, the gain is defined as
G(w) = - Σ_{k=1}^{K} P(ck) log P(ck) + P(w) Σ_{k=1}^{K} P(ck | w) log P(ck | w) + P(w̄) Σ_{k=1}^{K} P(ck | w̄) log P(ck | w̄),
where w̄ denotes the absence of word w. Feature selection is performed by retaining only the
words having the highest average mutual information with the class variable. OCR errors,
however, can produce very noisy features which may be responsible for poor performance
Figure 5. Data induced HMM topology for American Missionary, year 1884. Numbers in each node correspond to category indices (see Table 1).
Figure 6. Data induced HMM topology for Scribners Monthly, year 1871. Numbers in each node correspond to category indices (see Table 2).
even if feature selection is performed. For this reason, it may be convenient to prune from
the dictionary (before applying the information gain criterion) all the words occurring fewer
than a given threshold h times in the training set. Preliminary experiments showed that the best
performance is achieved by pruning words with fewer than 10 occurrences (h = 10).
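The information gain criterion with frequency-based pruning can be sketched as follows. This version uses document-level presence/absence counts and prunes rare words first; the data structures are illustrative, not the authors' code.

```python
import math
from collections import Counter, defaultdict

def information_gain(pages, labels, min_count=10):
    """Average mutual information between each word and the class variable.

    pages  : list of per-page bags of words (dict word -> count)
    labels : list of category labels aligned with pages
    Words occurring fewer than min_count times overall are pruned first.
    """
    total_count = Counter()
    for bag in pages:
        total_count.update(bag)
    vocab = {w for w, n in total_count.items() if n >= min_count}

    N = len(pages)
    p_c = Counter(labels)
    doc_with = defaultdict(Counter)     # doc_with[w][c] = #pages of class c containing w
    for bag, c in zip(pages, labels):
        for w in bag:
            if w in vocab:
                doc_with[w][c] += 1

    def H(dist):                        # entropy of a {key: count} table
        n = sum(dist.values())
        return -sum((v / n) * math.log(v / n) for v in dist.values() if v)

    class_entropy = H(p_c)
    gains = {}
    for w in vocab:
        with_w = doc_with[w]
        without_w = Counter({c: p_c[c] - with_w.get(c, 0) for c in p_c})
        pw = sum(with_w.values()) / N
        gain = class_entropy - pw * H(with_w)
        if pw < 1.0:
            gain -= (1 - pw) * H(without_w)
        gains[w] = gain
    return gains                        # keep the top-scoring words as features
```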
4.4. Accuracy comparisons
In the following we compare isolated page classification (using standard Naive Bayes) to
sequential classification (using the proposed HMM architecture). Although classification
accuracy could be estimated by fixing a split of the available data into a training and a
test set, here we suggest a method that attempts to incorporate some peculiarities of digital
libraries domain. In particular, hand-labeling of documents for the purpose of training is
a very expensive activity and working with large training sets is likely to be unrealistic
in practical applications. For this reason, in most experiments we deliberately used small
fractions of the available data for training.
Moreover, there is a problem of temporal stability as the journal organization may change
over time. In our test we attempted to address this aspect by assuming that training data is
available for a given year and we decided to test generalization over journal issues published
in different years. Splitting according to publication year can be an advantage for the training
algorithm since it increases the likelihood that different issue organizations are represented
in the training set.
The resulting method is related to k-fold cross-validation, a common approach for accuracy
estimation that partitions the dataset into k subsets and iteratively uses one subset for
testing and the other k - 1 for training. In our experiments we reversed the proportions of
data in the training and test sets, using all the journal issues in one year for training, and
the remaining issues for testing. We believe that this setting is more realistic in the case of
digital libraries.
In the following experiments, the HMM classifiers were trained by first extracting the
transition structure, then initializing the parameters using Eqs. (6) and (7), and finally tuning
the parameters using the EM algorithm. We found that the initial parameter estimates are
very close to the final solution found by the EM algorithm. Typically, 2 or 3 iterations are
sufficient for EM to converge.
4.4.1. American Missionary dataset. The results of the ten resulting experiments are
shown in figure 7. The hybrid HMM classifier (performing sequential classification) consistently
outperforms the plain Naive Bayes classifier working on isolated pages. The graph
on the top summarizes results obtained without feature selection. Averaging the results over
all the ten experiments, NB achieves 61.9% accuracy, while the HMM achieves 80.4%. This
corresponds to a 48.4% error rate reduction. The graph on the bottom refers to results obtained
by selecting the best 300 words according to the information gain criterion. The
average accuracy in this case is 69.8% for NB and 80.6% for the HMM (a 35.7% error
rate reduction). In both cases, words occurring less than 10 times in their training sets were
pruned. When using feature selection, NB improves while the HMM performance is essentially
the same. Moreover, the standard deviation of the accuracy is smaller for NB (2.8%,
compared to 4.2% for the HMM). The larger variability in the case of the HMM is due to
Figure 7. Isolated vs. sequential page classification on the American Missionary dataset. For each column,
classifiers are trained on documents of the corresponding year and tested on all remaining issues.
the structure induction algorithm. In fact, the sequential organization of journal issues is
temporally less stable than article contents.
4.4.2. Scribners Monthly dataset. Similar experiments have been carried out on the Scrib-
ners Monthly journal. Results using no feature selection are shown on the top of figure 8. The
average accuracy is 81.0% for isolated page classification and 89.6% for sequential classi-
fication (the error reduction is 42.5%). After feature selection, the average accuracy drops
to 75.3% for the isolated page classifier, while it remains similar for the sequential classifier.
Figure 8. Isolated vs. sequential page classification on the Scribners Monthly dataset.
Noticeably, feature selection has different effects on the two datasets when coupled with
the Naive Bayes classifier: it tends to improve accuracy for the American Missionary and
tends to worsen it for the Scribners Monthly. On the other hand, the HMM is almost insensitive
to feature selection, in both datasets. This is apparently counterintuitive since the emission
model is almost the same for the two classifiers (except for the EM tuning of emission
parameters in the case of the HMM). However, it should be remarked that the Naive Bayes'
final prediction is biased by the class prior (Eq. (16)) while the HMM's prediction is biased
by the extracted grammar (Eqs. (11)-(15)). The latter provides more robust information that
effectively compensates for the crude approximation in the emission model, prescribing
conditional word independence. This robustness also positively affects performance when a
suboptimal set of features is selected for representing document pages.
4.5. Learning using ergodic HMMs
The following experiments provide a basis for evaluating the effects of the structure learning
algorithm presented in Section 3.2. In the present setting, we trained an ergodic HMM with
ten states (each state mapped to exactly one class). Emission parameters were initialized
using Eq. (6) while transition probabilities were initialized with random values. In this case
the EM algorithm takes the full responsibility for extracting sequential structure from data.
After training, arcs with associated probability less than 0.001 were pruned away.
The evaluation was performed using the American Missionary dataset, training on single
years as in the previous set of experiments. As expected (see figure 9), results are worse
than those obtained in conjunction with the grammar extraction algorithm. However, the
trained HMM outperforms the Naive Bayes classifier also in this case.
4.6. Effects of the training set size
To investigate the effects of the size of the training set we propose a set of experiments
alternative to those reported in Section 4.4. In these experiments we selected a variable
number of sequences (journal issues) n for training (randomly chosen in the dataset) and
tested generalization on all the remaining sequences. The accuracy is then reported as a
function of n, after averaging over 20 trials (each trial with the same proportion of training
and test sequences). All these experiments were performed on the American Missionary
dataset. As shown in figure 10, generalization for both the isolated and the sequential
classifier tends to saturate after about 15 sequences in the training set. This is slightly more
Figure 9. Comparison between the ergodic HMM and the HMM based on the extracted grammar.
Figure 10. Learning curve for the sequential and the isolated classifiers.
than the average number of issues in a single year. The sequential classifier consistently
outperforms the isolated page classifier.
4.7. Learning with partially labeled documents
Since labeling is an expensive human activity, we also evaluated our system in the case where
only a fraction of the pages in the training documents are labeled. In particular, we are interested in
measuring the loss of accuracy due to missing page labels. Since structure learning is not
feasible with partially labeled documents, we used in this case an ergodic (fully connected)
HMM with ten states (one per class).
We have performed six different experiments on the American Missionary dataset, using
different percentages of labeled pages. In all the experiments, all issues of year 1884 form
the training set and the remaining issues form the test set. Table 3 shows detailed results of
the experiment. Classification accuracy is reported for single classes and for the entire test
set. Using 30% of labeled pages the HMM fails to learn a reliable transition structure and the
Naive Bayes classifier (trained with EM as in Nigam et al. (2000)) obtains higher accuracy
(see Table 4). However, with higher percentages of known page labels the comparison favors
again the sequential classifier. Using only 50% of labeled pages, the HMM outperforms the
isolated page classifier that was trained on completely labeled data. With greater percentages
of labeled documents, performances begin to saturate reaching a maximum of 80.24% when
all the labels are known (this corresponds to the result obtained in Section 4.5).
5. Conclusions
We have presented a text categorization system for multi-page documents which is capable
of effectively taking into account contextual information to improve accuracy with
respect to traditional isolated page classifiers. Our method can smoothly deal with unlabeled
pages within a document, although we have found that learning the HMM structure from fully labeled data further improves accuracy.
Table 3. Results achieved by the model trained by Expectation-Maximization, varying the percentage of labeled documents.
Another aspect is the granularity of document structure being exploited. Working at
the level of pages is straightforward since page boundaries are readily available. However,
actual category boundaries may not coincide with page boundaries. Some pages may contain
portions of text belonging to different articles (in this case, the page would belong to
multiple categories). Although this is not very critical for single-column journals such as
the American Missionary, the case of documents typeset in two or three columns certainly
deserves attention. A further direction of investigation is therefore related to the development
of algorithms capable of performing automatic segmentation of a continuous stream of text,
without necessarily relying on page boundaries.
Finally, text categorization methods that take document structure into account may be extremely
useful for other types of documents natively available in electronic form, including
web pages and documents produced with other typesetting systems. In particular, hypertexts
(like most documents in the Internet) are organized as directed graphs, a structure that can be
seen as a generalization of sequences. However, devising a classifier that can capture context
in hypertexts by extending the architecture described in this paper is still an open problem:
although the extension of HMMs from sequences to trees is straightforward (see e.g. Diligenti
et al. (2001)), the general case of directed graphs is difficult because of the presence of cycles.
Preliminary research in this direction (based on simplified models incorporating graphical
transition structure) is presented in Diligenti et al. (2000) and Passerini et al. (2001).
Acknowledgments
We thank the Cornell University Library for providing us data collected within the Making
of America project. This research was partially supported by EC grant # IST-1999-20021
under METAe project.
Notes
1. A related formulation would consist of assigning a global category to a whole multi-page document, but this
formulation is not considered in this paper.
2. After observing the text.
3. A Bayesian network is an annotated graph in which nodes represent random variables and missing edges
encode conditional independence statements amongst these variables. Given a particular state of knowledge,
the semantics of belief networks determine whether collecting evidence about a set of variables does modify
one's belief about some other set of variables (Jensen, 1996; Pearl, 1988).
4. We adopt the standard convention of denoting variables by uppercase letters and realizations by the corresponding
lowercase letters. Moreover, we use the table notation for probabilities as in Jensen (1996); for example,
P(X) is a shorthand for the table of values P(X = x) over the realizations x of X, and P(X, Y) denotes the two-dimensional
table with entries P(X = x, Y = y).
5. Of course this does not mean that the category is independent of the context.
References
An Input Output HMM Architecture.
Bayesian Networks for Data Mining.
An Introduction to Bayesian Networks.
Text Categorization with Support Vector Machines: Learning with Many Relevant Features.
Transductive Inference for Text Classification using Support Vector Machines.
An Experimental Evaluation of OCR Text Representations for Learning Document Classifiers.
A New Probabilistic Model of Text Classification and Retrieval.
Hierarchically Classifying Documents using Very Few Words.
A Sequential Algorithm for Training Text Classifiers.
Comparison of Two Learning Algorithms for Text Categorization.
Bayesian Belief Networks as a Tool for Stochastic Parsing.
Automating the Construction of Internet Portals with Machine Learning.
Machine Learning.
Feature Selection
Text Classification from Labeled and Unlabeled Documents using EM.
Evaluation Methods for Focused Crawling.
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition.
Online Searching and Page Presentation at the University of Michigan.
Probabilistic Independence Networks for Hidden Markov Probability Models.
Hidden Markov Model Induction by Bayesian Model Merging.
An Example-Based Mapping Method for Text Classification and Retrieval
A Comparative Study on Feature Selection in Text Categorization.
--TR
Probabilistic reasoning in intelligent systems: networks of plausible inference
An example-based mapping method for text categorization and retrieval
A sequential algorithm for training text classifiers
Bayesian Belief Networks as a tool for stochastic parsing
Probabilistic independence networks for hidden Markov probability models
Feature selection, perception learning, and a usability case study for text categorization
Text Classification from Labeled and Unlabeled Documents using EM
Statistical Language Learning
Machine Learning
Introduction to Bayesian Networks
Bayesian Networks for Data Mining
Automating the Construction of Internet Portals with Machine Learning
Text Categorization with Suport Vector Machines
Hierarchically Classifying Documents Using Very Few Words
A Comparative Study on Feature Selection in Text Categorization
Transductive Inference for Text Classification using Support Vector Machines
Hidden Markov Model} Induction by Bayesian Model Merging
Focused Crawling Using Context Graphs
Image Document Categorization Using Hidden Tree Markov Models and Structured Representations
Information Extraction with HMM Structures Learned by Stochastic Optimization
Evaluation Methods for Focused Crawling
A New Probabilistic Model of Text Classification and Retrieval TITLE2:
text categorization;hidden Markov models;multi-page documents;naive bayes;digital libraries
607624 | A Study of Approaches to Hypertext Categorization. | Hypertext poses new research challenges for text classification. Hyperlinks, HTML tags, category labels distributed over linked documents, and meta data extracted from related Web sites all provide rich information for classifying hypertext documents. How to appropriately represent that information and automatically learn statistical patterns for solving hypertext classification problems is an open question. This paper seeks a principled approach to providing the answers. Specifically, we define five hypertext regularities which may (or may not) hold in a particular application domain, and whose presence (or absence) may significantly influence the optimal design of a classifier. Using three hypertext datasets and three well-known learning algorithms (Naive Bayes, Nearest Neighbor, and First Order Inductive Learner), we examine these regularities in different domains, and compare alternative ways to exploit them. Our results show that the identification of hypertext regularities in the data and the selection of appropriate representations for hypertext in particular domains are crucial, but seldom obvious, in real-world problems. We find that adding the words in the linked neighborhood to the page having those links (both inlinks and outlinks) were helpful for all our classifiers on one data set, but more harmful than helpful for two out of the three classifiers on the remaining datasets. We also observed that extracting meta data from related Web sites was extremely useful for improving classification accuracy in some of those domains. Finally, the relative performance of the classifiers being tested provided insights into their strengths and limitations for solving classification problems involving diverse and often noisy Web pages. | Introduction
As the size of the Web expands rapidly, the need for good automated hypertext
classification techniques is becoming more apparent. The Web contains over two
billion pages connected by hyperlinks, making the task of locating specific information
on the Web increasingly difficult. A recent user study [2] showed that users
often prefer navigating through directories of pre-classified content, and that providing
a category-based view of retrieved documents enables them to find more
relevant information in a shorter time. The common use of category hierarchies for
navigation support in Yahoo! and other major Web portals has also demonstrated
the practical utility of hypertext categorization.
Automated hypertext classification poses new research challenges because of the
rich information in a hypertext document and the connectivity among documents.
Hyperlinks, HTML tags, category distributions over a linked neighborhood, and
meta data extracted from related Web sites all provide rich information for hypertext
classification, which is not normally available in traditional text classification.
Researchers have only recently begun to explore the issues of exploiting rich hypertext
information for automated classification.
Chakrabarti et al. [1] studied the use of citations in the classification of IBM
patents where the citations between documents (patents) were considered as "hyperlinks",
and the categories were defined in a topical hierarchy. Similar experiments
on a small set of Web pages (only 900 pages from Yahoo!) with real hyperlinks
were also conducted. By using the system-predicted category labels for the linked
neighbors of a test document to reinforce the category decision(s) on that document,
they obtained a 31% error reduction, compared to the baseline performance
when using the local text in the document alone. They also tested a more naive
way of using the linked documents, treating the words in the linked documents as
if they were local. This approach increased the error rate of their system by 6%
over the baseline performance.
Oh et al. [18] reported similar observations on a collection of online Korean encyclopedia
articles. By using the system-predicted categories of the linked neighbors
of a test document to reinforce the classification decision(s) on that document,
they obtained a 13% improvement in F1 (defined in Section 4.2) over the baseline
performance (when using local text only). On the other hand, when treating
words in the linked neighborhood of a document as if they were local words in that
document, the performance of their classifier (Naive Bayes) decreased by 24% in
micro-averaged F1. Instead of using all the links from a document, they decided to
use only a subset of the linked documents based on the cosine similarity between
the "bags of words" of pairwise linked documents; the links with low similarity
scores were ignored. This filtering process yielded a 7% improvement in F1 over
naively using all the links.
Furnkranz [10] used a set of Web pages from the WebKB University corpus (Section 3.2)
to study the use of anchor text (words on a link) and the words "near" the
anchor text in a Web page to predict the class of the target page pointed to by the
links. By representing the target page using the anchor words on all the links that
point to it, plus the headlines that structurally precede the sections where links
occur, the classification accuracy of a rule-learning system (Ripper [5]) improved
by 20%, compared to the baseline performance of the same system when using the
local words in the target page instead.
Slattery and Mitchell [23] also used the WebKB University corpus, but studied
alternative learning paradigms, namely, a First Order Inductive Learner
which exploits the relational structure among Web pages, and a Hubs & Authorities
style algorithm [15] exploiting the hyperlink topology. They found that a combined
use of these two algorithms performed better than using each alone.
Joachims et al. [14] also reported a study using the WebKB University corpus,
focusing on Support Vector Machines (SVMs) with different kernel functions. Using
one kernel to represent a document based on its local words, and another kernel
to represent hyperlinks, they give evidence that combining the two kernels leads to
better performance in two out of three classification problems. Their experiments
suggest that the kernels can make more use of the training-set category labels in the
linked neighborhood of a document compared to the local words in that document.
Whereas the work summarized above provides initial insights into exploiting information
in hypertext documents for automated classification, many questions still
remain unanswered. For example, it is not entirely clear why the use of anchor
words improved classification accuracy in Furnkranz's experiments on the WebKB
pages, but the inclusion of all linked words decreased performance in Chakrabarti's
experiments on the IBM patents. Recall that anchor words are a subset of the
linked words (from the in-links). What would happen if Furnkranz expanded the
subset to the full set of linked words in the WebKB pages, or if Chakrabarti et al.
selected a subset (the words from the anchor fields of the in-link pages only, for
example) instead of the full set of words in the IBM patents? How much did the
difference between the data contribute to the reported performance variance? How
much did the particular algorithms used in those experiments influence the observations?
Since most of the experimental results are not directly comparable (even
true for the results on the WebKB corpus because of different subsets of documents
and categories used in those experiments), the answers to these questions are not
clear.
In order to draw general conclusions about hypertext classification, we need more
systematic experiments and better analysis about the potential reasons behind the
observed performance variances. As a step in that direction, we begin with hypotheses
about hypertext regularities (Section 2.1), then report systematic examinations
of these hypotheses on the cross product of three data collections (Section 3), three
classification algorithms (Section 2.3), and various representations for hypertext
data (Section 2.2). We also provide direct data analysis in support of our empirical
results with our classifiers for the hypertext regularities being tested (Section 4),
leading toward generalizable observations and conclusions (Section 5).
2. Methodology
The purpose of the experiments presented in this paper is to explore various hypotheses
about the structure of hypertext especially as it relates to hypertext classification.
While the scope of the experimental results presented is necessarily confined
to the three classification problems described in Section 3, we hope that the
analysis that follows will help future research into hypertext classification by providing
some ideas about various types of regularities that may be present in other
Table 1. Definitions of five possible regularities we can use when classifying documents of class A.
Regularity: Definition
None: Documents neighboring class A documents exhibit no pattern.
Encyclopedia: Documents neighboring class A documents are all of class A.
Co-referencing: Documents neighboring class A documents all share the same class, but are not of class A.
Preclassified: A single document points only to all documents of class A.
Meta data: Relevant text extracted from sources external to the Web document, or internal but not visible on that document.
hypertext corpora and how one should construct a classifier to take advantage of
them.
2.1. Regularities
Before we can exploit patterns in a hypertext corpus, we need to understand what
kind of regularities to expect. This section presents a list of what we believe are
the simplest kinds of hypertext regularities we might consider searching for.
Of course we still expect the content of the document being classified to be a primary
source of information, and this list is meant to explore where we might look for more
information in a hypertext classification problem. Succinct definitions of these
regularities are given in Table 1.
2.1.1. No Hypertext Regularity It is important to be aware that for some hypertext
corpora and classification tasks on them, the only useful place to look for
information about the class label of a document is the document itself. In cases like
this, looking outside the document for information about the label is not going to
help and may in some cases hurt classification performance. However, we believe
that for many real-world hypertext classification tasks, extra information is available
to improve upon the performance of a classifier which only uses the content of
each document.
2.1.2. Encyclopedia Regularity Perhaps the simplest regularity we might hope
to find is one where documents with a given class label only link to documents with
the same class label. We might expect to find approximately this regularity in a
corpus of encyclopedia articles, such as the ETRI-Kyemong encyclopedia corpus
used in [18], since encyclopedia articles generally reference other articles which are
topically similar.
2.1.3. Co-Referencing Regularity Instead of having the same class, neighboring
documents can have some other topic in common with each other. For example,
news articles about a particular current event may link to many articles about
the background for that event. As another example, previous work [22] found
that when learning to classify course home pages, a group of the neighboring pages
about homework assignments were found to be useful, even though those homework
assignment pages were not part of the learning task and not labelled in the data.
It is important to realize that, in general, the topic of these linked documents may
not correspond to any class in the classification problem.
A variant of this regularity relaxes the requirement that all of the neighboring
documents share the same topic. Instead, it may be the case that only some
neighboring documents of a class share the same topic. For example, all faculty
home pages may contain a link to a page describing research interests. If we can
find this regularity, it can help us with classification. In previous work [12] we
described this type of regularity as a partial co-referencing regularity. This type of
regularity is particularly difficult to exploit because it requires searching for subsets
of the neighboring documents that have some unseen topic in common.
2.1.4. Preclassified Regularity 1 While the encyclopedia and co-referencing regularities
consider the topic of neighboring documents, there can also be regularities
in the hyperlink structure itself. One such regularity prevalent on the Web consists
of a single document which contains hyperlinks to documents which share the same
topic. Finding this "hub" document would help us in classifying all the documents
that are linked from it. Categories on the Yahoo! topic hierarchy are a perfect
example of this regularity. If a category on the Yahoo! hierarchy happens to correspond
to a class in our classification problem, then we say that the pages linked to
that category exhibit a preclassified regularity, since in effect the creator of those
hyperlinks has preclassified all the documents of some class for us.
2.1.5. Meta Data Regularity For many classification tasks that are of practical
and commercial importance, meta data are often available from external sources
on the Web that can be exploited in the form of additional features. Examples
of these types of meta data include movie reviews for movie classification, online
discussion boards for various other topic classification tasks (such as stock market
predictions or competitive analysis). Meta data is also implicitly present in many
Web documents in the form of text within META tags and within ALT and TITLE tags
(which are not visible when viewing the Web page through most browsers). If we can
extract rich and predictive features from such sources, we can build classifiers that
can use them alone or combine them with the hyperlinks and textual information.
2.2. Hypertext Classification Approaches
Depending on which of the above regularities holds for the hypertext classification
task under consideration, different classifier designs should be considered. Likewise,
for given applications, using different classifier designs, we can search for various
hypertext regularities in the dataset.
In our experiments for this paper, for simplicity we did not distinguish the links
to and from a page. Our examination consists of the following components:
2.2.1. No Hyperlink Regularity With no regularity, we expect no benefit from
using hyperlinks and would use standard text classifiers on the text of the document
itself. Using such classifiers as performance baselines and comparing them to
classifiers which take hyperlinks into account will allow us to test for the existence
of various hyperlink regularities.
It is quite possible that for a substantial fraction of hypertext classification tasks
there is no hyperlink regularity that can be exploited and the best performance
we can hope for is with a standard text classifier. Indeed in such cases, learners
looking for hyperlink regularities may be led astray and end up performing worse
than a simple text classifier.
2.2.2. Encyclopedia Regularity If the encyclopedia regularity holds in a given
data set, then augmenting the text of each document with the text of its neighbors
should produce better classification results because more topic-related words would
be present in the document representation. Chakrabarti et al. applied this approach
to a database of patents and found that classification performance suffered,
suggesting that the patent database is unlikely to have this structure [1].
2.2.3. Co-Referencing Regularity If the co-referencing regularity holds, then augmenting
each document as above except treating the additional words as if they
come from a separate vocabulary should help classification. A simple way to do
this is to prefix the words in the linked documents with a tag. Chakrabarti et al.
also tried this approach on the patent database and again found that performance
suffered, suggesting that the patent database is unlikely to have this structure [1].
If instead we have a partial co-referencing regularity, we need to identify the linked
pages which are topically similar to each other (the "research interests" pages from
the faculty home page example in Section 2.1.3). This can be achieved by computing
the text-based similarity among all the documents linked to documents in the same
class and clustering them accordingly. In contrast to the previous approaches which
can be characterized as using various "bag-of-words" representations and standard
text classification algorithms, this approach requires a more elaborate algorithm.
One such algorithm is the Foil algorithm described in the next section. Craven
et al. [7] applied Foil to the WebKB University corpus and found that it did
improve classification performance, indicating that this corpus does have this kind
of regularity. The cosine-similarity based filtering of linked neighbors by Oh et al.
(Section 1) is another example of utilizing this regularity.
2.2.4. Preclassified Regularity If the classification scheme of our corpus is already
embedded in the hyperlink structure, we have no need to look at the text of
any document. We just need to find those pages within the hypertext "graph" that
have this property. We can search for those pages by representing each page with
only the names of the pages it links with. If any of these linked pages are correlated
with a class label, a reasonable learning algorithm should be able to recognize it as
a predictive feature and use it (often together with other features) to make classification
decisions. The successful use of the SVM kernel function for hyperlinks by
Joachims et al. (Section 1) illustrated one way of exploiting this regularity. In this
paper we show alternative approaches and use of this regularity with the kNN, NB
and Foil algorithms.
2.2.5. Meta Data Regularity When external sources of information are available
that can be used as meta data, we can collect them, possibly using information
extraction techniques. In particular, we look for features that relate two or more
entities/documents being classified. Following the approaches outlined above for
hyperlinks, these extracted features can then be used in a similar fashion by using
the identity of the related documents and by using the text of related documents
in various ways. Any information source from the Web about the entity being
classified can be used as a meta data resource and the availability and quality of
such resources will certainly depend on the classification task. Cohen [4] described
some experiments where he automatically located and extracted such features for
several (non-hypertext) classification tasks.
We also look for "meta data" contained within Web pages such as META and TITLE
tags. The information contained within these HTML tags in a page is technically
not meta data because it is internal rather than external to the page. Nevertheless,
these tagged fields can be treated differently from other parts of Web pages and
can be a useful source for classification.
2.3. Learning Algorithms Used
Our experiments used three existing classifiers: Naive Bayes, kNN and Foil. Naive
Bayes and kNN have been thoroughly evaluated for text classification on benchmark
collections and offer a strong baseline for comparison. Foil is a relational learner
which has shown promise for hypertext classification.
The following notation is used for the descriptions of Naive Bayes and kNN:
D: the set of training documents
Dj: the set of training documents in class cj
n(t): the number of training documents containing t
N(t): the number of occurrences of t
N(t, d): the number of occurrences of t in document d
2.3.1. Naive Bayes Naive Bayes is a simple but effective text classification algorithm
[16, 17]. The parameterization given by Naive Bayes defines an underlying
generative model assumed by the classifier. In this model, first a class is selected
according to class prior probabilities. Then, the generator creates each word in a
document by drawing from a multinomial distribution over words specific to the
class. Thus, this model assumes each word in a document is generated independently
of the others given the class.
Naive Bayes forms maximum a posteriori estimates for the class-conditional probabilities
for each term in the vocabulary, V, from labeled training data D. This is
done by calculating the frequency of each term t over all the documents in a class,
supplemented with Laplace smoothing to avoid zero probabilities:
Pr(t | cj) = (1 + Σ_{d∈Dj} N(t, d)) / (|V| + Σ_{t'∈V} Σ_{d∈Dj} N(t', d)).
We calculate the prior probability of each class (Pr(cj)) from the frequency of
each document label in the training set.
At classification time we use these estimated parameters by applying Bayes' rule
(using a word independence assumption) to calculate the posterior probability of
each class label (Pr(cj | d)) for a test document d, and taking the most probable
class as the prediction (since all the documents in our datasets used for this study
belong to one and only one class; see Section 3 for details):
Pr(cj | d) ∝ Pr(cj) Π_{t∈d} Pr(t | cj).
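For concreteness, a minimal multinomial Naive Bayes along these lines might look like the sketch below. Class and method names are illustrative, and the handling of words unseen at test time is an assumption.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesText:
    """Multinomial Naive Bayes with Laplace smoothing (documents are bags of words)."""

    def fit(self, docs, labels):
        self.vocab = {w for d in docs for w in d}
        priors = Counter(labels)
        n_docs = len(docs)
        term_counts = defaultdict(Counter)
        for d, c in zip(docs, labels):
            term_counts[c].update(d)
        self.log_prior = {c: math.log(n / n_docs) for c, n in priors.items()}
        V = len(self.vocab)
        self.log_cond = {}
        for c, counts in term_counts.items():
            total = sum(counts.values())
            self.log_cond[c] = {t: math.log((1 + counts[t]) / (V + total))
                                for t in self.vocab}
        return self

    def predict(self, doc):
        unseen = math.log(1.0 / (len(self.vocab) + 1))   # assumed fallback for new words
        def score(c):
            return self.log_prior[c] + sum(
                n * self.log_cond[c].get(t, unseen) for t, n in doc.items())
        return max(self.log_prior, key=score)
```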
2.3.2. K-Nearest Neighbor (kNN) kNN, an instance-based classification method,
has been an effective approach to a broad range of pattern recognition and text
classification problems [8, 25, 26, 28]. In contrast to "eager learning" algorithms
(including Naive Bayes) which have an explicit training phase before seeing any
test document, kNN uses the training documents "local" to each test document to
make its classification decision on that document. Our kNN uses the conventional
vector space model, which represents each document as a vector of term weights,
and the similarity between two documents is measured using the cosine value of the
angle between the corresponding vectors. We compute the weight vectors for each
document using one of the conventional TF-IDF schemes [20], in which the weight
of term t in document d is defined as the product of a term-frequency component
based on N(t, d) and an inverse document frequency component based on |D| and n(t).
Given an arbitrary test document d, the kNN classifier assigns a relevance score to
each candidate category by summing the cosine similarities between d and the members
of R_k(d) that belong to that category, where R_k(d) is the set of k nearest neighbors
(training documents) of document d. By
sorting the scores of all candidate categories, we obtain a ranked list of categories
for each test document; by further thresholding on the ranks or the scores, we
obtain binary decisions, i.e. the categories above the threshold will be assigned to
the document. There are advantages and disadvantages of different thresholding
strategies [26, 27]. In this paper, we use the simplest strategy of assigning only the
top-ranking category to each document as a baseline; for a more flexible trade-off
between recall and precision, we further threshold on the scores of the top-ranking
candidates, as described in Section 4.4.
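A sketch of such a kNN text classifier is given below, assuming one common TF-IDF weighting variant and cosine similarity over unit-length vectors; the exact weighting scheme of [20] is not reproduced, and all names are illustrative.

```python
import math
from collections import Counter

def tfidf_vector(bag, doc_freq, n_docs):
    """Unit-length TF-IDF vector for one document (one common weighting variant)."""
    vec = {t: (1 + math.log(n)) * math.log(n_docs / doc_freq[t])
           for t, n in bag.items() if doc_freq.get(t, 0) > 0}
    norm = math.sqrt(sum(w * w for w in vec.values())) or 1.0
    return {t: w / norm for t, w in vec.items()}

def knn_scores(test_bag, train_vecs, train_labels, doc_freq, n_docs, k=30):
    """Category scores: sum of cosine similarities of the k nearest training docs."""
    q = tfidf_vector(test_bag, doc_freq, n_docs)
    sims = []
    for vec, label in zip(train_vecs, train_labels):
        cos = sum(w * vec.get(t, 0.0) for t, w in q.items())  # vectors are unit-length
        sims.append((cos, label))
    sims.sort(reverse=True)
    scores = Counter()
    for cos, label in sims[:k]:
        scores[label] += cos
    return scores    # rank or threshold these scores to make category assignments
```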
2.3.3. FOIL Quinlan's Foil [19] is a greedy covering algorithm for learning
Horn clauses. It induces each clause by beginning with an empty tail
and using a hill-climbing search to add literals until the clause covers as few negative
instances as possible. The evaluation function used to guide the hill-climbing
search is an information-theoretic measure.
Foil has already been used for text classification to exploit word order [3] and
hyperlink information [7]. Here Foil is used as described in [7], using a unary
has_word(page) relation (where word is a variable) for each word and a
link_to(page, page) relation for hyperlinks between pages. The former allows
Foil to distinguish informative words from non-informative ones, and the latter
gives Foil the power to recognize predictive links among pages among all the links.
It is a common practice to apply class-driven feature selection to documents before
training a classifier, for reducing the computational cost and possibly improving
the effectiveness. However, for relational hypertext classification, this is less than
straightforward (how would we know that words relating to assignments in a linked
page would help to classify course home pages without searching for that regularity
first?). The experiments presented here use document frequency feature selection.
All the Foil experiments in this paper were done by running the algorithm on
binary subproblems, one for each class in the problem. For each test example, the
system computed a score for each class by picking the matching rule with highest
confidence score from each of the learned binary classifiers. The confidence scores
were based on the training-set accuracies of the rules. This process results in a
list of scored categories for each test example, allowing further thresholding for
classification decisions, as described in the kNN section above. This is perhaps the
simplest approach to combining the outputs of several Foil classifiers, and more
elaborate strategies would almost certainly do better.
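The relational representation described above can be sketched as a set of ground facts, roughly as follows. Predicate naming follows the has_word/link_to convention in the text, while the data layout is an assumption made for illustration.

```python
def relational_facts(pages):
    """Build the has_word/link_to background relations used by a Foil-style learner.

    pages: dict page_id -> {"words": set of words, "links": set of linked page_ids}.
    Returns two lists of ground facts.
    """
    has_word, link_to = [], []
    for pid, page in pages.items():
        for w in page["words"]:
            has_word.append((f"has_{w}", pid))       # one unary predicate per word
        for target in page["links"]:
            link_to.append(("link_to", pid, target)) # binary relation between pages
    return has_word, link_to
```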
3. Datasets
To test our proposed approaches to hypertext classification, we needed datasets
that would reflect the properties of real-world hypertext classification tasks. We
wanted a variety of problems so we could get a general sense of the usefulness of
each regularity described in the previous section.
We found three hypertext classification problems for this study: two of them are
about classification of company Web sites, and the third one is a classification task
for university Web pages.
3.1. Hoovers-28 and Hoovers-255
The Hoovers corpora of company Web pages were assembled using the Hoovers On-line
Web resource (www.hoovers.com), which contains detailed information about a
large number of companies and is a reliable source of corporate information. Ghani
et al. [11] obtained a list of the names and home-page URLs for 4285 companies
on the Web and used a custom crawler to extract information from company Web
sites. This crawler visited 4285 different company Web sites and searched up to the
first 50 Web pages on each site (in breadth first order), examining just over 108,000
Web pages.
Figure 1. Category distributions for all three problems: (a) Hoovers-28, (b) Hoovers-255, (c) Univ-6, (d) all problems on a logarithmic scale.
Two sets of categories are available from Hoovers Online: a coarse classification
scheme of 28 classes ("Hoovers-28", defining industry sectors such as Oil & Gas,
Sporting Goods, Computer Software & Services) and a more fine-grained classification
scheme consisting of 255 classes ("Hoovers-255").
These categories label companies, not particular Web pages. For this reason, we
constructed one synthetic page per company by concatenating all the pages (up to 50)
crawled for that company and ignoring the inner links between pages of that
company. Therefore our task for this dataset is Web site classification rather than
Web page classification due to the granularity of the categories in this application.
Figures 1(a) and 1(b) show the category distributions for these two problems.
Previous work with this dataset [11] extracted meta data about these Web sites
from Hoovers Online, which provided information about the company names, and
names of their competitors. The authors constructed several kinds of wrappers
(from simple string matchers to statistical information extraction techniques) to
extract additional information about the relationships between companies from the
Web pages in this dataset, such as whether one company name is mentioned by
another in its Web page, whether two companies are located in the same state (in
U.S.) or the same country (outside of U.S.), and so forth. In the results section,
we only report our experiments using the competitor information because of space
limitations.
The resulting corpora (namely, Hoovers-28 and Hoovers-255) consist of 4,285
synthetic pages with a vocabulary of 256,715 unique words (after removing stop
words and stemming), 7,762 links between companies (1.8 links per company) and
6.0 competitors per company. Each Web site is classified into one category only for
each classification scheme.
3.2. Univ-6 Dataset
The second corpus comes from the WebKB project at CMU [6]. This dataset was
assembled for training an intelligent Web crawler which could populate a knowledge
base with facts extracted directly from the Web sites of university computer science
departments.
The dataset consists of 4,165 pages with a vocabulary of 45,979 unique words
(after removing stop words and stemming). There are 10,353 links between pages
in the corpus (2.5 links per page). Figure 1(c) shows the category distribution and
Figure 1(d) shows how the category distributions for all three problems compare on
a logarithmic scale.
The pages were manually labelled into one of 7 classes: student, course, faculty,
project, staff, department and other. The department class was ignored in our
experiments as it had only 4 instances. The most populous class ("other") is a
catch-all class which is assigned to documents (74% of the total) that do not belong
to any of the defined classes of interest.
4. Empirical Validation
We conducted experiments aimed at testing the performance of each of the algorithms
from Section 2.3 using Web page representations based on the discussion
in Section 2.2. Note that each problem may or may not contain any of the regularities
defined previously. Therefore, if a method does not perform well with a
particular representation, it may be interpreted either as a sign that the regularity does
not exist in the task, or as evidence that the method is not well suited for making
effective use of the representation.
4.1. Experiments
To examine the six possible regularities discussed in Section 2.1, we tested NB,
kNN and Foil with the following representations of Web pages for all the three
datasets (the hypertext regularity being considered is given in parentheses; a schematic
sketch of the word-based variants is given after the list):
Page Only (No Regularity) Use only the words on the pages themselves (used
with NB, kNN, Foil)
Linked Words (Encyclopedia Regularity) Add words from linked pages (used
with NB, kNN; not applicable to Foil)
Tagged Words (Co-Referencing Regularity) Add words from linked pages but
distinguish them with a prefix (used with NB, kNN; not needed for Foil)
Tagged Words (Partial Co-Referencing Regularity) Represent Web pages individually
and use a binary relation to indicate links (used with Foil; not applicable
to NB and kNN)
Linked Names (Preclassified Regularity) Represent each Web page by the names
(or identifiers) of the Web pages it links to and ignore the words on the Web
page entirely (used with NB, kNN and Foil)
HTML Title (Meta Data Regularity) Use the HTML title of a Web page (used
with NB, kNN and Foil)
HTML Meta (Meta Data Regularity) Use the text found in META tags on a Web
page (used with NB, kNN and Foil).
In addition to the above representations, we also explored the use of the following
representation for the Hoovers experiments:
Competitors (Meta Data Regularity) Use the competitor identifiers (aka "competitors")
of a company to represent that company instead of the original Web
page (used with NB, kNN, Foil)
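To make the word-based variants above concrete, here is a minimal sketch of how they could be assembled; this is our own illustration, not the authors' code, and the container names (`pages`, `links`) and prefix strings are hypothetical.

```python
# Sketch of the bag-of-words style representations described above.
# `pages` maps a page id to its tokenized text; `links` maps a page id to
# the ids of the pages it links to. Both are assumed to be given.
from collections import Counter

def page_only(pid, pages, links):
    return Counter(pages[pid])

def linked_words(pid, pages, links):
    # Add words from linked pages into the same vocabulary.
    rep = Counter(pages[pid])
    for nid in links.get(pid, []):
        rep.update(pages.get(nid, []))
    return rep

def tagged_words(pid, pages, links):
    # Add words from linked pages, but keep them in a separate vocabulary
    # by marking them with a prefix.
    rep = Counter(pages[pid])
    for nid in links.get(pid, []):
        rep.update("linked:" + w for w in pages.get(nid, []))
    return rep

def linked_names(pid, pages, links):
    # Ignore the page text entirely; use only identifiers of linked pages.
    return Counter("link:" + nid for nid in links.get(pid, []))
```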
All of the results of the experiments are averages of five runs: each dataset was
split into five subsets, and each subset was used once as test data in a particular
run while the remaining subsets were used as training data for that run. The split
into training and test sets for each run was the same for all the classifiers.
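A minimal sketch of this protocol follows, assuming a fixed random seed and an interleaved fold assignment; both choices are our own, not stated in the paper.

```python
# Fixed 5-fold split reused for every classifier, so all methods see
# identical train/test partitions in each run.
import random

def five_fold_splits(doc_ids, seed=0):
    ids = list(doc_ids)
    random.Random(seed).shuffle(ids)
    folds = [ids[i::5] for i in range(5)]      # five roughly equal subsets
    for k in range(5):
        test = folds[k]
        train = [d for j, fold in enumerate(folds) if j != k for d in fold]
        yield train, test
```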
Table 2. Micro-averaged F1 results for each classifier on each representation. Best results for each
dataset with each representation are shown in bold.
                     Hoovers-28            Hoovers-255           Univ-6
                     NB    kNN   FOIL      NB    kNN   FOIL      NB    kNN   FOIL
Page Only            55.1  58.1  31.5      32.5  32.0  11.6      69.6  83.0  82.7
Linked Words         40.1  38.6  N/A       18.9  20.4  N/A       74.1  86.2  N/A
Tagged Words         49.2  49.0  31.8      24.0  26.9  12.1      76.3  88.0  86.0
HTML Title           40.8  43.3  28.7      17.9  22.6  11.5      78.6  81.5  86.3
HTML Meta            48.6  49.8  29.3      23.1  28.3  13.1      73.3  78.6  81.5
Linked Names         14.8  13.3  12.3      5.0   5.9   4.6       81.7  87.2  86.6
Competitor Names     75.4  74.5  33.8      52.0  53.0  12.0      N/A   N/A   N/A
Table 3. Macro-averaged F1 results for each classifier on each representation. Best results for each
dataset with each representation are shown in bold.
                     Hoovers-28            Hoovers-255           Univ-6
                     NB    kNN   FOIL      NB    kNN   FOIL      NB    kNN   FOIL
Page Only            54.3  55.3  31.6      24.6  19.8  8.0       31.4  46.4  51.3
Linked Words         40.3  35.1  N/A       14.8  12.0  N/A       38.3  53.0  N/A
Tagged Words         49.0  46.5  31.9      17.9  15.9  8.3       46.1  59.1  52.9
HTML Title           37.1  39.9  27.5      10.2  14.5  9.6       43.2  41.8  50.1
HTML Meta            45.1  47.4  29.8      13.8  18.7  10.6      14.1  40.7  39.3
Linked Names         11.8  12.8  9.5       2.4   5.2   3.6       47.0  44.3  62.9
Competitor Names     75.2  74.2  33.7      40.8  44.5  8.3       N/A   N/A   N/A
4.2. Overall Results
Tables 2, 3 and Figure 2 summarize the main results, where the performance of each
classifier is measured using the conventional micro-averaged and macro-averaged
recall, precision and F1 values [24, 26]. Recall (r) is the ratio of the number of
categories correctly assigned by the system to the test documents to the actual
number of relevant document/category pairs in the test set; precision (p) is the
ratio of the number of correctly assigned categories to the total number of assigned
categories. The F1 measure is defined to be F1 = 2rp/(r + p), combining recall
and precision in a way that gives them equal weight.
The recall, precision and F1 scores can first be computed for individual categories,
and then averaged over categories as a global measure of the average performance
over all categories; this way of averaging is called macro-averaging. An alternative
way, micro-averaging, is to count the decisions for all the categories in a joint pool
and compute the global recall, precision and F1 values for that global pool. Micro-averaged
scores tend to be dominated by the performance of the system on common
categories, while macro-averaged scores tend to be dominated by the performance
on rare categories if the majority of categories in the task are rare. Given the skewed
category distributions in our tasks (Figure 1(c) in Section 3), providing both types
of evaluation scores gives a clearer picture than considering either type alone.
Since the datasets used in this study are single-label-per-document tasks, micro-averaged
accuracy, precision, recall and F1 are all equal. We therefore report the F1
score in our micro-averaged results, although all of the above measures can be used
interchangeably. However, in general, the macro-averaged recall, precision and F1
values are not the same. Further discussion of this issue can be found in the text
categorization literature [26].
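As an illustration of the two averaging schemes, here is a small sketch assuming that per-category true-positive, false-positive and false-negative counts on the test set are already available (the data layout is our own, not the authors').

```python
# per_cat maps each category to a (tp, fp, fn) tuple of decision counts.
def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

def macro_f1(per_cat):
    # Compute F1 per category first, then average over categories.
    return sum(f1(*c) for c in per_cat.values()) / len(per_cat)

def micro_f1(per_cat):
    # Pool all decisions across categories, then compute a single F1.
    tp = sum(c[0] for c in per_cat.values())
    fp = sum(c[1] for c in per_cat.values())
    fn = sum(c[2] for c in per_cat.values())
    return f1(tp, fp, fn)
```

For a single-label-per-document task, the pooled (micro-averaged) figure reduces to accuracy, as noted above.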
All the results reported are for optimal vocabulary sizes for each algorithm. The
effect of feature selection on the categorization performance of our classifiers is
analyzed in Section 4.3.
Some general observations can be made from the results of these experiments.
The performance of a classifier depends on the characteristics of the problem, the
information encoded in the document representation and the capability of the classifier
in identifying regularities in documents. For the Univ-6 problem, all the three
classifiers performed better when using hyperlink information (including Linked
Names, Tagged Words and Linked Words) compared to using Page Only. For the
two Hoovers problems, on the other hand, both kNN and NB suffered a significant
performance decrease when using hyperlink information, except Competitor
Names, while Foil's performance was not significantly affected when given information
about hyperlinked documents.
As for our specific hypotheses on hypertext regularities, we have the following
observations:
4.2.1. Page Only  The Page Only results tell us something about the overall
difficulty of each task. Unsurprisingly, problems with more classes proved to be
more difficult. On the Hoovers problems, NB and kNN have roughly equal performance,
while Foil's performance is more competitive on Univ-6 than on the
Hoovers datasets.
The big contrast between Foil's performance on Univ-6 and its performance on the
Hoovers datasets is surprising, suggesting that Foil may not be as robust or stable
as NB and kNN for conventional text categorization. In particular, Foil is known
to have a tendency to overfit the training data, since it was designed for learning
logic programs. However, it is interesting that Foil is the only classifier (among
these three) with no performance degradation on any of the three datasets when using
Tagged Words instead of the Page Only setting, while NB and kNN suffered on the
Hoovers datasets due to the highly noisy hyperlinks (Section 4.2.2).
4.2.2. Linked Words  The kNN and Naive Bayes results under the Linked Words
condition on the Hoovers sets show that performance suffers badly when compared
to the baseline. It is quite clear that these datasets do not exhibit this regularity.
A close look at these datasets revealed that 56.8% of the Hoovers pages have links
(1.8 links per page), but, according to the Hoovers-255 labelling, only 6.5% of
the linked pairs of pages belong to the same category. This means that at most
3.7% (i.e., 56.8% × 6.5%) of the total pages could possibly be helped by using
the system-assigned category labels of linked pages to reinforce the classification
of those pages. In other words, a "perfect" hyperlink classifier (making perfect
use of the category labels of linked pages) would show improved performance in
classifying at most 3.7% of the total pages. Since our classifiers are not perfect,
dumping the words from linked documents into the document having these links
adds a tremendous amount of noise to the representation of that document. It is
unsurprising, therefore, that the performance of kNN and NB with Linked Words
suffered significantly on the Hoovers datasets. Even Foil, designed for leveraging
relational information in data, gains no improvement by using Tagged Words instead
of Page Only.
[Figure 2. Performance of classifiers on Hoovers-28, Hoovers-255 and Univ-6: micro-averaged and macro-averaged F1 for each representation on each of the three datasets (panels (a)-(f)).]
In the Univ-6 dataset, on the other hand, 98.9% of pages have links (2.5 links
per page on average), and 22.5% of the linked pairs of pages belong to the same
category. This means that a perfect hyperlink classifier would improve performance
on 22.3% of the total pages on Univ-6, which is much higher than the 3.7% of the
Hoovers datasets. As a result, all the classifiers have improved results on Univ-6
when using Linked Words or Tagged Words instead of Page Only, indicating that
hyperlinks in this dataset are informative for those classifiers.
A potential problem in the algorithm proposed by Oh et al. [18] for exploiting
the Encyclopedia regularity is that it calculates the likelihood for each document
belonging to a certain class by multiplying the system-estimated class probability
(using the words in the Web page) by the fraction of neighbors that are in the same
class. Applying their method to our Hoovers datasets would result in very poor
performance, since 96.3% of the Web pages do not have any linked neighbors in the
same class, and multiplying such a low probability into the likelihood scores based on
page words will result in a zero or near-zero probability for a category candidate
for many documents.
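The 3.7%, 22.3% and 96.3% figures in this subsection follow from a simple corpus statistic; the sketch below (our own illustration with a hypothetical data layout) computes the fraction of linked page pairs sharing a label and the implied upper bound on pages that link-based reinforcement could help.

```python
# `links` maps a page id to the ids of pages it links to (possibly empty);
# `label` maps a page id to its category.
def link_agreement(links, label):
    pairs = [(a, b) for a, nbrs in links.items() for b in nbrs if b in label]
    same = sum(1 for a, b in pairs if label[a] == label[b])
    frac_same = same / len(pairs) if pairs else 0.0
    frac_with_links = sum(1 for nbrs in links.values() if nbrs) / len(links)
    # Upper bound on the fraction of pages that a "perfect" hyperlink
    # classifier could help, as in the analysis above.
    return frac_same, frac_with_links * frac_same
```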
4.2.3. Tagged Words  The Tagged Words experiments examine the impact of
the Co-Referencing regularity. Unlike previous results reported by Chakrabarti et
al. [1], where tagging the words from the neighbors (treating them as if they were
from a separate vocabulary) did not affect results, we find that this document
representation results in significant performance degradation from the baseline for
two of our problems (Hoovers-28 and Hoovers-255) but significant performance
improvement from the baseline on one problem (Univ-6). An interesting observation
is that using Tagged Words instead of Linked Words yielded less severe performance
degradation for NB and kNN on the Hoovers datasets, and better performance for
all the classifiers on Univ-6. One possible explanation for this is that tagging
linked words allows a feature selection procedure to remove irrelevant linked words,
leaving the learner (classifier) with less noise in the data (evidence supporting this
hypothesis is shown in Figure 3(e)), and allows the classifier to weight linked words
differently from within-page words when making classification decisions.
Note that if our classifiers could handle noise "perfectly", then the performance
would be at least as good as the baseline which ignores link information. This
suggests that the feature selection methods and term weighting schemes we used
with NB and kNN may have problems handling noise, which could be overcome
with more training data.
The Foil results under Tagged Words look for partial co-referencing regularities.
Here we see a slight improvement over the result of the baseline Foil for Hoovers-255,
and a small degradation for Hoovers-28 (in contrast to the severe performance
degradations in NB and kNN), indicating that some small partial "co-referencing"
regularity does exist and that Foil was capable of exploiting it with its relational
representation.
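For reference, the relational representation used with Foil amounts to a set of ground facts over pages linked by a binary relation. The following sketch is illustrative only; the predicate names (has_word, linked_to, class_of) are our own and not the exact syntax used in the study.

```python
# Emit Prolog-style ground facts for an ILP learner such as Foil.
def emit_facts(pages, links, labels, out):
    for pid, words in pages.items():
        for w in set(words):
            out.write(f"has_word({pid},{w}).\n")
    for pid, nbrs in links.items():
        for nid in nbrs:
            out.write(f"linked_to({pid},{nid}).\n")
    for pid, c in labels.items():
        out.write(f"class_of({pid},{c}).\n")
```

In this style, the partial co-referencing setting roughly corresponds to letting the learner reach a neighbor's has_word facts through linked_to, rather than merging the two vocabularies inside one document vector.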
4.2.4. Linked Names  The Linked Names experiments examine the impact of the
Pre-classified Regularity. Our results show that this representation works surprisingly
well for Univ-6 (which is consistent with the observations by Joachims et
al. [14] for SVMs on another version of the same corpus), but was the worst choice
(for all three classifiers) for the Hoovers datasets. Figure 2 shows the clear contrast:
this regularity holds strongly in Univ-6 but not nearly so strongly in the
Hoovers datasets. The names or identifiers of linked Web pages in Univ-6 are at
least as informative as words (local, linked or tagged) in those pages for the classification
task; however, they do not provide as much information for the Hoovers
classification tasks.
4.2.5. Meta Data Regularity  The meta data we report results from is the Competitors
data. We treat the competitor information in the same way as we do the
hyperlinks and use the meta data in two ways: as category labels (Preclassified Regularity)
and as links between pages of the same class (Encyclopedia Regularity). In
the case of the Preclassified Regularity, we use only the names of the competitors
and find a sharp boost in performance for both Hoovers-28 and Hoovers-255. For
the Encyclopedia Regularity, we use the words from the competitors in the same
way as we use the words from hyperlinked neighbors. Since the two representations
yielded an almost equal performance boost for our classifiers, which was reported in
a previous paper [12], we only include the results for Competitor Names in Tables 2
and 3 and Figure 2. Evidently, the competitor information is more useful than any other
representation we examined for classifying the Hoovers Web sites, including the
local text in a page or the linkage among pages. A detailed analysis reveals that
70% of the pairs of competitors share the same class label, which is much higher
than the 6.5% of the hyperlinked pairs sharing the same class. Another point to
note is that using the names of competitors as the hypertext representation had a
much smaller vocabulary size than using the words in competitor pages, thus
making it much more efficient to train and test the classifier based on competitor
names.
As for the other types of meta data, including the text in the HTML meta and
title fields in Web pages, our results show that they are quite informative for the
classification tasks, although not as predictive as Competitor Names and Page Only
for all the classifiers on the Hoovers datasets, and not as good as Linked Names
and Tagged Words on the Univ-6 dataset. Nevertheless, using these kinds of meta
information in addition to Web pages (and links) can improve the classification
performance compared to using Web pages alone, as shown in our experiments
reported in a previous paper [12].
4.3. Performance Analysis with respect to Feature Selection
The results discussed so far are for approximately optimal vocabulary sizes we found
in feature selection. For NB we used Information Gain as the feature selection
criterion because our NB system supported this functionality. For kNN we used the χ²
statistic (after removing the words which occur only once or twice in the training
set) because we found this worked slightly better than Information Gain for our
kNN in a previous study [29]. For Foil we used Document Frequency to rank and
select words, for the reason discussed in Section 2.3.3. Since the original vocabulary
sizes of the Hoovers datasets are very large for some representations (over 300,000
terms for Tagged Words, for example), it would take too long to test each classifier
with the full vocabulary sizes for these representations. We therefore only tested each
classifier with subsets of features of increasing sizes until the performance curve
of that classifier approached a stable plateau.
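As an illustration of two of these ranking criteria, here is a simplified sketch: document frequency as used for Foil, and information gain written for a binary labelling only (the study uses multi-class criteria, so this is a deliberately reduced example).

```python
import math
from collections import Counter

def doc_frequency_ranking(docs):
    # docs: list of sets of terms; rank terms by document frequency.
    df = Counter(t for d in docs for t in set(d))
    return [t for t, _ in df.most_common()]

def info_gain(docs, labels, term):
    # Information gain of one term for binary labels in {0, 1}.
    def entropy(pos, n):
        if n == 0 or pos in (0, n):
            return 0.0
        p = pos / n
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))
    n = len(docs)
    with_t = [y for d, y in zip(docs, labels) if term in d]
    without = [y for d, y in zip(docs, labels) if term not in d]
    h = entropy(sum(labels), n)
    h_cond = (len(with_t) / n) * entropy(sum(with_t), len(with_t)) \
           + (len(without) / n) * entropy(sum(without), len(without))
    return h - h_cond
```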
Figure 3 shows the curves of micro-averaged F1 for our classifiers with three
representations: Page Only, Linked Words and Tagged Words. We omit the macro-averaged
curves and the results for other representations because of space limitations;
we also omit the curves on Hoovers-28, which are similar in shape to those
on Hoovers-255. Notice that we do not have curves for Foil with the Linked
Words representation because this method always treats words in linked pages as
Tagged Words by definition.
We find that the observed performance variations over the cross product of the
datasets, representations and classifiers are larger than those reported in feature
selection for conventional text categorization [29, 13]. Not only do the performance
curves of our classifiers peak at very different vocabulary sizes, the shapes of
those curves also show a larger degree of diversity than those previously reported.
Perhaps the highly noisy nature of Web pages makes feature selection more important
for performance optimization in hypertext classifiers. The inconsistent shapes
of the curves for NB suggest a potential difficulty in obtaining stable or optimized
performance for this method on highly noisy and heterogeneous hypertext collections.
Foil exhibited smaller performance variances with respect to vocabulary-size
changes, but larger performance differences across datasets and when switching
from micro-averaged measures to macro-averaged measures.
4.4. Recall-Precision Trade-off
In addition to evaluating our classifiers at a single point using the F1 metric, we also
examined their potential for making flexible trade-offs between recall and precision.
In practice, it is important to know whether or not a classifier can produce either
high-precision decisions, or conversely high-recall output, depending on the type of
real-world application being addressed. Cross-classifier comparison also allows us
to get a deeper understanding of the strengths and weaknesses of our methods in
hypertext classification.
[Figure 3. Performance of classifiers with respect to feature selection: micro-averaged F1 versus the number of selected features for the Page Only, Linked Words and Tagged Words representations on Hoovers-255 and Univ-6.]
Figure 4 shows the recall-precision trade-off curves for NB, kNN and Foil on
Hoovers-28, Hoovers-255 and Univ-6. The micro-averaged and macro-averaged
curves are compared side-by-side. For each classifier and dataset, we present its
curve for the representation with which the averaged F1 score of this classifier is
optimized. For example, on the Univ-6 dataset, Tagged Words is the choice for
Foil and Linked Names is the choice for kNN and NB. For the Hoovers datasets,
on the other hand, Competitor Names is the choice for all the three classifiers.
These curves were generated by thresholding over the system-generated ranks and
scores of the candidate categories for test documents, by the following procedure:
1. All the candidate categories are first ordered by their rank (assigned per test
document) -- the higher the rank, the more relevant they are considered.
2. For the candidate categories with the same rank, their scores are used for further
ordering.
3. Move (automatically) the threshold over the candidate categories, compute the
recall and precision values for each threshold, and interpolate the resulting plots.
A break-even line is shown in each graph for reference, on which the recall and precision
are equally valued, and around which the F1 values are typically optimized.
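A compact sketch of this thresholding procedure, assuming a hypothetical data layout of one (rank, score, is_correct) triple per candidate category:

```python
def pr_curve(candidates, n_relevant):
    # candidates: list of (rank, score, is_correct) over all test documents;
    # n_relevant: total number of correct document/category pairs.
    ordered = sorted(candidates, key=lambda c: (c[0], c[1]), reverse=True)
    points, tp = [], 0
    for k, (_, _, correct) in enumerate(ordered, start=1):
        tp += int(correct)                      # move the threshold down by one
        points.append((tp / n_relevant, tp / k))  # (recall, precision)
    return points
```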
The interesting observations from these graphs are:
- Both kNN and NB are flexible in trading off recall and precision from the high-precision
extreme to the high-recall extreme, in both micro-averaged and
macro-averaged evaluations.
- Foil's curves exhibit a large performance variance across datasets. Its curves on
Univ-6 are competitive with kNN's in both micro-averaged and macro-averaged
measures. However, it has difficulty in getting high-recall results for the Hoovers
task. Improving the pruning strategy for rules in Foil and inventing richer
scoring schemes may be potential solutions for this kind of problem, which
requires future investigation. It may also be worth mentioning that Univ-6 has
the most skewed distribution of categories, and performance on the largest
class "Other" (miscellaneous) tends to dominate the overall performance of a
classifier on Univ-6 if the system is not sufficiently sensitive to rare categories.
- The different performance curves of these classifiers suggest an intriguing potential
in combining these classifiers into a classification system that is more
robust than using each method alone. Similar questions have been raised in
both conventional text categorization and hypertext mining [28, 9, 21]; how to
solve this for better hypertext categorization also requires future research.
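Purely as an illustration of the kind of combination hinted at here -- not a scheme evaluated in this paper -- a majority vote over the three classifiers' per-document predictions would be one simple starting point:

```python
from collections import Counter

def vote(predictions):
    # predictions: list of category labels, one per classifier, for one document.
    counts = Counter(predictions)
    best = max(counts.values())
    for label in predictions:      # earlier classifiers win ties
        if counts[label] == best:
            return label
```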
4.5. Run-Time Observation
Table 4 shows the running times (including training and testing) in CPU minutes
for our classifiers in some of the experiments. When using the representations of
Linked Names or Competitor Names, all the classifiers were very fast and we omit
the times for those runs; on the other hand, for the representations of Page Only,
Linked Words and Tagged Words, the computations were much more intensive due
to the large vocabulary sizes. Table 4 shows the times for the latter. Since we
used different machines for running those classifiers, the computation times are not
directly comparable, but rather indicative of a rough estimate. On average, Naive
Bayes experiments run faster than kNN, while Foil takes considerably longer than
both kNN and Naive Bayes.
[Figure 4. Recall-precision trade-off curves (micro-averaged and macro-averaged) for NB, kNN and FOIL on Hoovers-28, Hoovers-255 and Univ-6, each classifier shown with its best-performing representation; a break-even line is included for reference.]
[Table 4. Average running times in CPU minutes for each algorithm with the Page Only, Linked Words and Tagged Words representations.]
5. Concluding Remarks
The rich information typically available in hypertext corpora makes the classification
task significantly different from traditional text classification. In this paper,
we presented the most comprehensive examinations to date, addressing some
open questions of how to effectively use hypertext information by examining the
cross product of explicitly defined hypertext regularities (five), alternative representations,
multiple established hypertext datasets (three) from different
domains, and several well-known supervised classification algorithms (three). This
systematic approach enabled us to explicitly analyze potential reasons behind the
observed performance variance in hypertext classification, leading toward generalizable
conclusions. Our major findings include:
- The identification of hypertext regularities (Section 2.1) in the data and the
selection of appropriate representations for hypertext in particular domains are
crucial for the optimal design of a classification system. The most surprising
observations are that Linked Names are at least as informative as words (local
words, linked words or tagged words all together) on Univ-6, and that Competitor
Names are more informative than any other alternative representations we
explored on the Hoovers datasets. These observations suggest that the Preclassified
Regularity strongly influences the learnability of those problems, although for
one problem it appears in hyperlinks, and for the other problem it appears in
meta data beyond the Web pages themselves.
- The "right" choice of hypertext representation for a real-world problem is crucial
but seldom obvious. Adding linked words (tagged or untagged) to a local page,
for example, improved classification accuracy on the Univ-6 dataset for all three
classifiers, but had the opposite effect on the performance of NB and kNN on
the Hoovers datasets (which is consistent with previously reported results by
Chakrabarti et al. and Oh et al. on different data). Moreover, Linked Names
and Tagged Words were almost equally informative for all the three classifiers
on Univ-6, but this phenomenon was not observed in the experiments on the
Hoovers datasets. Our extensive experiments over several domains show that
drawing general conclusions for hypertext classification without examinations
over multiple datasets can be seriously misleading.
- Meta data about Web pages or Web sites can be extremely useful for improving
classification performance, as shown by the Hoovers Web site classification
tasks. This suggests the importance of examining the availability of meta data in
the real world, and of exploiting information extraction techniques for automated
acquisition of meta data. Recognizing useful HTML fields in hypertext pages
and using those fields jointly in making classification decisions can also improve
classification performance, which is evident in our experiments on Univ-6.
- kNN and NB, with extended document representations combining within-page
words, linked words and meta data in a naive fashion, show simple and effective
ways of exploiting hypertext regularities. Their simplicity as algorithms allows
them to scale well to very large feature spaces, and their relatively strong performance
(with the "right choice" of hypertext representation) across datasets
makes them suitable choices for generating baselines in comparative studies.
- Algorithms focusing on automated discovery of the relevant parts of the hypertext
neighborhood should have an edge over more naive approaches treating
all the links and linked pages without distinction. Foil, with the power for
discovering relational regularities, gave mixed results in this study, indicating
the discovery problem to be non-trivial, especially given the noisy nature of links,
and inviting future investigation into improving this algorithm and other algorithms
of this kind.
- The use of micro-averages and macro-averages together is necessary to properly
understand the results and relative performance of classifiers. This is of special
significance for datasets where the category distribution is extremely skewed.
Also, investigating the precision-recall tradeoff is important in order to observe
the performance of classifiers in specific regions of interest. This issue becomes
essential for hypertext applications where high-precision results are extremely
important.
While the scope of the experimental results presented is necessarily confined to
our datasets, we hope that our analysis will help future research into hypertext
classification by defining explicit hypotheses for various types of regularities that
may be present in a hypertext corpus, by presenting a systematic approach to
the examination of those regularities, and by providing some ideas about how one
should construct a classifier to take advantage of each.
Acknowledgments
The authors would like to thank Bryan Kisiel for his significant help in improving
the evaluation tools and applying them to the output of different classifiers, and
Thomas Ault for editorial assistance. This study was funded in part by the National
Science Foundation under grant number KDI-9873009.
Notes
1. Thorsten Joachims provided the name and the intuition behind this regularity (personal
communication).
--R
Enhanced hypertext categorization using hyperlinks.
New York
Bringing order to the Web: automatically categorizing search results.
Learning to classify English text with ILP methods.
Automatically extracting features for concept learning from the web.
Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques.
Multistrategy learning for information extraction.
Johannes Fürnkranz.
Data mining on symbolic knowledge extracted from the web.
Hypertext categorization using hyperlink patterns and meta data.
Text categorization with support vector machines: learning with many relevant features.
Composite kernels for hypertext categorisation.
Authoritative sources in a hyperlinked environment.
A comparison of event models for naive Bayes text classification.
A practical hypertext categorization method using links and incrementally available class information.
Learning logical definitions from relations.
Term-weighting approaches in automatic text retrieval.
Combining statistical and relational methods for learning in hypertext domains.
Information Retrieval.
Expert network: effective and efficient learning from human decisions in text categorization and retrieval.
An evaluation of statistical approaches to text categorization.
A study of thresholding strategies for text categorization.
Combining multiple learning strategies for effective cross validation.
A comparative study on feature selection in text categorization.
Learning to construct knowledge bases from the World Wide Web.
Naive (Bayes) at Forty.
First-Order Learning for Web Mining.
Discovering test set regularities in relational domains.
Exploiting structural information for text classification on the WWW.
Fabrizio Sebastiani, Machine learning in automated text categorization, ACM Computing Surveys (CSUR), v.34 n.1, p.1-47, March 2002 | hypertext classification;web mining;text mining;machine learning |
607698 | Grain Filters. | Motivated by operators simplifying the topographic map of a function, we study the theoretical properties of two kinds of grain filters. The first category, discovered by L. Vincent, defines grains as connected components of level sets and removes those of small area. This category is composed of two filters, the maxima filter and the minima filter. However, they do not commute. The second kind of filter, introduced by Masnou, works on shapes, which are based on connected components of level sets. This filter has the additional property that it acts in the same manner on upper and lower level sets, that is, it commutes with an inversion of contrast. We discuss the relations of Masnou's filter with other classes of connected operators introduced in the literature. We display some experiments to show the main properties of the filters discussed above and compare them. | Introduction
Filters used to simplify an image and satisfying a minimal set of invariance
properties are scarce. Actually, only one of them has the maximal
set of invariance properties, and it is driven by the parabolic partial
dierential equation [1, 26, 18]:
@t
where curvu(x) is the curvature of the level line of u at the point x,
this being restricted to the regular points of u.
However, the previous lter is optimal (in terms of invariance) among
regular lters, that is, lters driven by a P.D.E. This property
of regularity, while desirable in theory, has the drawback of modifying
all contours, and, in particular, of destroying T -junctions, which are
important clues for occlusion [5, 4]. If we drop this requirement, a bunch
of other lters satisfying the same invariance properties are available.
Motivated by the study of a family of lters by reconstruction [9,
10, 21, 31, 32], Serra and Salembier [30, 25] introduced the notion of
http://pascal.monasse.free.fr
c
2001 Kluwer Academic Publishers. Printed in the Netherlands.
V. Caselles and P. Monasse
connected operators. Such operators simplify the topographic map of
the image. These lters have become very popular in image processing
because, on an experimental basis, they have been claimed to simplify
the image while preserving contours. This property has made them very
attractive for a large number of applications, such as noise cancellation
or segmentation [17, 33]. More recently, they have become the basis of a
morphological approach to image and video compression [23, 24, 22, 6].
Dierent classes of connected operators have been studied by Meyer
[15, 16], Serra [29] or Heijmans [8] (see also references therein).
In this article, we study the theoretical properties of two kinds of
connected operators: the extrema lters and the \shape" lters. Each
of them simplies the topographic map of the image, but with dierent
senses given to the term topographic map. The maxima lter removes
connected components of upper level sets of insu-cient area, while
keeping the other ones identical [31, 32]. This ensures that regional
maxima of the ltered image have a minimal grain size. Similarly,
the minima lter removes too small connected components of lower
level sets. For these lters, the \grain" corresponds to a connected
component of a level set, and small grains are considered as noise. This
can be seen as the pruning of the tree of the connected components of
upper, or lower, level sets.
In a previous work [3], we introduced the notion of \shapes", designed
to deal symmetrically with upper and lower level sets. The
shapes are also organized in a tree, driven by inclusion. When applied
to images of positive minimal grain size, we showed that the structure
of this tree is nite. As shown here, any image resulting from the
application of the extrema lters has this property. This new tree also
provides the denition of another grain lter, for which the grain is a
shape [13, 19, 20]. It removes small shapes while preserving the ones
of su-cient area. The essential improvement over the extrema lters is
that it deals in the same manner with upper and lower level sets. In
the vocabulary of mathematical morphology [14, 27, 28], this lter is
selfdual when applied to continuous functions.
The present article is organized as follows. Section 2 recalls the foundations
of mathematical morphology and underlines the link between
morphological lters and set operators. Section 3 introduces the main
properties of extrema lters and proves them. In Section 4 we prove
the analogous properties for the grain lter and, in particular, that it
is a self-dual lter on continuous functions, generalizing a result of [2].
Finally, in Section 5 we illustrate these lters with an experiment.
2. General results from mathematical morphology
Throughout this section, we shall consider real functions dened in a
subset of IR N . If u is a real function dened on D IR N , we denote
by [u ] the set fx 2 D : u(x) g, 2 IR. Similarly, we dene the
sets [u > ], [u ], [u < ].
2.1. Level sets
If u is any real function, and X
which implies, in particular, that
Conversely, if X is a family of sets satisfying
then X is the level set at level of the function u dened by
namely, X
Under the weaker hypothesis of monotonicity of (X ) 2IR , Guichard
and Morel show in [7] that X a.e. and for almost every 2 IR.
2.2. Contrast invariance
A contrast change, in the restrictive sense, is a strictly increasing continuous
IR. It is therefore a homeomorphism of IR onto
an open interval of IR. A direct consequence is
For an image u, the contrast change g applied to u is g-u. A contrast
change g will indierently be considered as a function dened on IR or
as an operator acting on functions u. A direct consequence of (1) is
which can also be written as
4 V. Caselles and P. Monasse
showing that the families of level sets of g - u and of u are the same,
only their level changes.
A morphological lter ~
T is a map acting on functions u that commutes
with any contrast change: g - ~
g.
2.3. Link between set operator and morphological filter
If T is an operator acting on sets, a necessary requirement for T to
transform the level sets of a function into the level sets of a function is
thus
The lter ~
T associated to T is dened by
or equivalently
~
If we denote by B then we observe ([11, 12]) that
~
Indeed, let ~
be the right hand side term of this equality. If is
such that x 2 T ([u ]), by denition of B x we deduce that [u
on the other hand, since
u(y)
we get immediately ~
Tu(x). Conversely, if
u(y) n
That is, B n [u 1=n] and, therefore,
Taking the intersection over all n, we get that x 2 ~
~
We say that contrast change if g is nondecreasing
and upper semicontinuous. For a general contrast change, we
have that
where
We use the convention
If T is only dened on closed sets and satises (2) when the sets
F n are closed, then ~
T is dened on upper semicontinuous functions.
Then it is easy to check that ~
T and g commute when applied to upper
semicontinuous functions. We get contrast invariance in a strong sense,
since g can have constant stretches, and needs not be continuous.
2.4. Spatial invariance properties
If D IR N , f : D ! D is a map and ~
T a morphological lter, whose
associated set operator is T , and whose structuring elements are given
by the family B x at x, we denote by f ~
T the morphological operator
dened by (f ~
and we dene ~
by ( ~
is an image.
g. If f(B x is easy to see
that f ~
. Then we say that T is invariant with respect to f . For
instance, if B
T is translation invariant.
The lters we shall discuss below satisfy the property that their
respective family of structuring elements is globally invariant with respect
to any area preserving map, i.e., a special a-ne map. These lters
are thus special a-ne invariant. They depend on a parameter ". If
x
is the set of structuring elements of such a lter at x, they also satisfy
that B s"
x for any s > 0. In particular, this implies that, for any
a-ne map f , we have
f ~
3. Extrema lters
3.1. Definition
Extrema lters are constructed in such manner that the connected components
of level sets of an image have a minimum area. We call them
extrema lters because a connected component of level set contains
a regional extremum. This is achieved in two steps: rst, connected
components of upper level sets are ltered, then lower level sets. We
dene the set operators ensuring such properties.
Let us rst x some notation.
Let
be a set homeomorphic to the
closed unit ball of IR N (N 2),
and
be the
6 V. Caselles and P. Monasse
interior of
Note that, in
particular,
is compact, connected and
locally connected.
Moreover,
is unicoherent, i.e., for any A; B
closed connected sets such that
A\B is connected. For a set
X, we denote cc(X) any of its connected components and by cc(X; x)
the one containing the point x, provided x 2 X, and by extension
and C is connected, cc(X; C) is
cc(X; x), with x 2 C.
" be a parameter, representing an area threshold, and let X be
a subset
of
. We dene the lters
"g. We dene the maxima lter M
" and
the minima lter M " by
sup
We will show that they are morphological lters, whose associated set
operators are M " and M 0
" , respectively. The denitions are voluntarily
not symmetric, so that both can act on (upper) semicontinuous func-
tions. To avoid the cases where
suppose that " <
j.
3.2. Preliminary results
LEMMA 1. If (C n ) n2IN is a nonincreasing sequence of compact sets
and
If (O n ) n2IN is a nondecreasing sequence of open sets and O
then
Proof. It is clear that cc(C; x) cc(C n ; x) for any n. Conversely,
is an intersection of continua, thus, it is a continuum.
Since it contains x and is included in C, we get the other inclusion.
For any n, cc(O
On the
other hand, O being open
and
locally connected, cc(O; x) is an open
set. Hence, for any y 2 cc(O; x), there is some continuum K y cc(O; x)
containing x and y. Since K y S
O n and it is a compact set, we
can extract a nite covering of K y , and as the sequence (O n ) is non-
decreasing, there is some n such that K y O n . Since K y is connected
and contains x, we have that y 2 K y cc(O n ; x). We conclude that
PROPOSITION 1. We have the following properties for M " and M 0
" are nondecreasing on subsets
of
.
upper semicontinuous on compact sets: being a non-increasing
sequence of compact sets, then M "
" is lower semicontinuous on open sets: (O n ) n0 being a nondecreasing
sequence of open sets, then M 0
" O n .
Proof. Property (i) is a direct consequence of the denitions.
" is monotone, we have that M " F
Applying Lemma 1, we observe that
cc(F; x). In particular,
O n . Since M 0
" is monotone, S
" O. Now, let
" O. Then is such that jU j > ". Let U
Lemma 1 proves that
". Hence for n large enough, jU n j > ". We
conclude that x 2 U n M 0
" O n .
If A and B are two families of sets, we say that A is a basis of B if
A B and for any B 2 B, there is some A 2 A such that A B.
2. B " is a basis of fX
" is a basis of
" Xg.
Proof. This is a direct consequence of the denitions.
COROLLARY 1. Applied to upper semicontinuous functions, M
morphological lter whose associated set operator is M "
8 V. Caselles and P. Monasse
precisely, for all ,
"
Proof. Let Xg. A consequence of Proposition 1
we have (
u(y)
We now use the fact that B " is a basis of C " , as shown in Lemma 2. As
we deduce that
u (x) sup
there is some
u(y) inf
"
and by taking the supremum over all B, we get
sup
u
A similar proof applies to link M " and M 0
" .
PROPOSITION 2. Let u; v
IR. Then
"
(ii) If u v, then M
"
" v.
The proofs are immediate and we will not include the details.
COROLLARY 2. Let u
IR be such that u n ! u uniformly
in
. Then M
" uniformly
in
.
Proof. Given - > 0, let n 0 be such that u - u n
Using Proposition 2, (ii), (iii) and (iv), we obtain
for all n n 0 . Hence uniformly
in
. Similarly,
we prove that M +
" uniformly
in
.
3.3. Properties
PROPOSITION 3. If u is an upper semicontinuous function, also are
" u. If u is continuous, also are M
" u.
Proof. Let u be an upper semicontinuous function. Then, for any
" As [u ] is a closed set, its connected
components are closed. Since M " [u ] is a nite union of some of
them, it is closed. Thus M
" u is upper semicontinuous.
In the same manner, [M " u <
is an
open set, its connected components are also open, and its image by M 0
is thus a union of open sets, which is open. This proves that M " u is
upper semicontinuous.
Finally, suppose that u is continuous. We just have to prove that
" closed for any 2 IR. Using Corollary 1, we write
"
"
We claim that
If
C). Since O is open and contains the closed set C, we deduce that
O n C is open and not empty, and, thus, of positive measure. Hence
jOj > jCj ". This proves that
be such that jOj > ". Then, thanks to
Lemma 1, we have
V. Caselles and P. Monasse
There is some n such that
". This proves the remaining
inclusion in (). The right hand side of this equality being open, we
conclude that [M
" closed.
To prove the same result for M " , we write
We claim that the last set coincides with "g. If
O n is open and C is closed, we have that " jCj < jO n j. Hence,
n ] for all n. Then we have
Applying Lemma 1, we get
yielding
Hence jcc([u ];
This proves that
and, thus, the equality of both terms. The right hand side term, being
a nite union of closed sets, it is also closed.
PROPOSITION 4. When restricted to upper semicontinuous function-
s, M
" is idempotent.
Proof. Let u be an upper semicontinuous function and 2 IR.
Clearly, we have . Applying this equality to the set
[u ], and using Corollary 1, we get
"
"
Now, thanks to Proposition 3, we have that [M
" closed and
we can apply again Corollary 1 to the left hand side of this equality to
obtain
"
"
Since this equality holds for any 2 IR, we conclude that M +
" .
The same proof applies to M " .
PROPOSITION 5. Let u 2
Proof. (i) Both cases being similar, it will be su-cient to prove the
rst part of the assertion. Let
" u(x). Observe that v
is lower semicontinuous. Assume that M := max
0be such that
contains an open set, hence,
Letting choosing
Mwhich is a contradiction. We conclude that and the proposition
is proved.
(ii) Both cases being similar, we shall only prove that M +
uniformly as " ! 0+. For that, let us write
"
"
Given - > 0, let " 0 > 0 be small enough so that
V. Caselles and P. Monasse
and
"
"
i.e.,
Collecting these facts we have that
3.4. Interpretation
Let u be an upper semicontinuous function and
jCj ", we have thus, C is a connected component
of
" C is not
a connected component of [M
" u ]: it does not even meet this set,
since [M
" so that
"
Conversely, if
" u ]), we have
being a connected component of [u ], and since C 6= ;,
we have
Summing up these remarks, we can see that the connected components
of [M
" are exactly the connected components of [u ] of
measure ". In particular, since the connected components of upper
level sets have a structure of tree driven by inclusion, the tree of M
"
is the tree of u pruned of all nodes of insu-cient measure.
The same observations can be made concerning M_ε^- and the connected components of [u < λ]. The tree of M_ε^- u is the tree of connected components of lower level sets of u pruned of all nodes of insufficient measure.
Summarizing the above discussion, we have the following result.
PROPOSITION 6. If
" u ]) 6= ;, then jXj . If jcc([u ])j (resp.
" u < ])j
In particular, the above result implies that the connected components
of [M
". The
same thing can be said of connected components of the upper and lower
level sets of
" u.
3.5. Composition
LEMMA 3. Let u be an upper semicontinuous function and 2 IR.
Then
Proof. Let
" u < ]). If the consequence was false, we
would have C [u ]. If
then we have
"
since
This contradicts the definition of C. If C
, C being
open, we have C ) C and @C [M
"
Then x belongs to a connected component D of [u ] such that
connected and included in [u ]. As
jD[Cj ", we get D[C [M
" contradicting the denition of
C. This proves our claim that there is some x 2 C such that u(x) < .
arguing as above, we have
which contradicts the
hypothesis. Thus, we may assume that C
. Let
Since C is closed and D open, then C ( D. Thus there is some x in
meaning that
jcc([u < ]; x)j > ". This component must be D and, therefore jDj > ".
Finally, we observe that D [M " u < ], which is a contradiction.
THEOREM 1. The operators M
(i) transform upper semicontinuous functions into upper semicontinuous
functions, and continuous functions into continuous functions;
(ii) are idempotent on upper semicontinuous functions.
Proof. (i) is a direct consequence of the equivalent properties we
have proved for
" and M " in Proposition 3.
As a consequence of Proposition 2, we have
"
any function u. Applying this to M
of u, we get
We apply the rst part of Lemma 3 to
that
We find a point x ∈ C ∩ [v < λ]. Let
We know that jDj ". Since M
that D [M
and D is connected, we have
This proves that
thus, we have the
equality
" to each member and
using its idempotency, we conclude that
With a similar proof, using the second part of Lemma 3, we prove
that
" is idempotent.
4. Grain filter
4.1. Definitions
The problem with the extrema filters presented above is that we have
two operators which act on both upper and lower level sets. In general,
they do not commute. Moreover, none of them has the property to deal
symmetrically with upper and lower level sets. Actually, they work
in two steps: rst upper, then lower level sets are treated (or in the
opposite order). That is, these operators are not selfdual.
DEFINITION 1. A morphological filter ~
T , associated to the set operator
T , is said to be selfdual on continuous functions if the following
equivalent properties hold for any continuous function u
1. ~
2. 8x, sup B2B inf y2B describing the
structuring elements of ~
T .
3. 8, T [u
To show the equivalence of the first two properties, it suffices to write
the second one with -u instead of u. By taking -λ instead of λ, we
can see that the third property is equivalent to
Now, using the contrast invariance of ~
amounts to write
which means exactly that ~
Tu.
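As an aside, the self-duality property of Definition 1 can be checked numerically on concrete operators. The sketch below (our own illustration, not from the paper) verifies the usual self-duality identity T(-u) = -T(u): a median filter commutes with the contrast inversion u ↦ -u, whereas a grey-level erosion does not.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
u = rng.normal(size=(32, 32))

def is_selfdual(T, u, tol=1e-12):
    """Check T(-u) == -T(u) on this sample image."""
    return np.allclose(T(-u), -T(u), atol=tol)

median = lambda v: ndimage.median_filter(v, size=3)
erosion = lambda v: ndimage.grey_erosion(v, size=(3, 3))

print(is_selfdual(median, u))    # True: the median filter is self-dual
print(is_selfdual(erosion, u))   # False: erosion is dual to dilation, not to itself
```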
We recall the definition and the essential properties of a saturation
operator.
DEFINITION 2. Let p_∞ be a fixed point
of
. For X
, we call
holes of X the connected components
of
X. The external hole of
X is
X. The other holes are the internal holes
of X. The saturation of X, Sat(X), is the union of X and its internal
holes.
The following results are proved in [3]:
PROPOSITION 7.
1.
2.
3. X connected ) Sat(X) connected and
connected
4. X open (resp. closed) ) Sat(X) open (resp. closed).
5.
closed
6. @Sat(X) is a connected subset of @X.
7. H internal (resp. external) hole of X
8. are nested or disjoint.
9. Let X be open or closed,
X) such that x 2 Sat(O) Sat(C).
If u is an upper semicontinuous function, we call shapes of u the elements
of
The tree structure of u is expressed by the properties [3]:
.
are nested or disjoint.
We restrict the definition of our set operator to closed sets, since later we will define our filter on upper semicontinuous images. If K is a compact set, we define
f
internal hole of C; jC 0
j, we clearly have that G " compact K. Thus
we will always suppose that "
j.
Remark. We have defined the grain filter for compact sets since this will be sufficient for the extension of this filter to upper semicontinuous functions in
Obviously, the definition makes sense for any subset of
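The set operator G_ε can be pictured on a binary image: a connected component is kept only when its saturation (the component with its internal holes filled) has measure at least ε, and the only holes that stay open are those of measure larger than ε. The sketch below is our reading of that definition, implemented with scipy.ndimage; the strict/non-strict inequalities are illustrative, and the helper names are ours.

```python
import numpy as np
from scipy import ndimage

def grain_filter_binary(K, eps):
    """A sketch of G_eps on a binary set K (2D boolean array).

    For each connected component C of K:
      - discard C if the area of Sat(C) (C with its holes filled) is < eps,
      - otherwise keep C together with its internal holes of area <= eps.
    """
    out = np.zeros_like(K, dtype=bool)
    labels, n = ndimage.label(K)
    for i in range(1, n + 1):
        C = labels == i
        satC = ndimage.binary_fill_holes(C)          # saturation of C
        if satC.sum() < eps:
            continue                                 # grain too small: removed
        holes = satC & ~C                            # internal holes of C
        hole_lab, m = ndimage.label(holes)
        kept = C.copy()
        for j in range(1, m + 1):
            H = hole_lab == j
            if H.sum() <= eps:                       # small hole: filled
                kept |= H
        out |= kept
    return out

# Usage: a 6x6 square with a 2x2 hole, plus an isolated 1-pixel grain.
K = np.zeros((10, 10), dtype=bool)
K[1:7, 1:7] = True
K[3:5, 3:5] = False
K[8, 8] = True
print(grain_filter_binary(K, eps=5).sum())   # square kept, hole filled, grain removed -> 36
```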
4.2. Preliminary results
LEMMA 4. G " is nondecreasing on compact sets.
Proof. Let K L be compact sets. Let be such that
be the family of internal holes of C such that
". Then we observe that there is some C
On the other hand, if H 0 is an internal hole of C 0 such that
included in a hole H of C, and, thus, jHj > ".
From these observations we deduce that
i , are the families of internal holes of C, resp. C 0 ,
whose measure is > ". We conclude that G " K G " L.
LEMMA 5. If K is compact, then
Proof. Let us denote by (C i ) i2I the family of connected components
of K and let I " I be the set of indices for which jSat(C i )j ".
monotone, we have that S
the
other inclusion is immediate from the definition, we have the identity
We also observe that G " C i , if not ;, is the union
of C i and some of its internal holes and, thus, it is connected.
Let J be a subset of I_ε whose cardinality is at least 2, and let
. We now prove that D is not connected.
not connected, there is an open and closed
subset E′ in D′ different from ∅ and D′. Observe that E′ is a union
of C j , since each of those sets is connected. To be precise, let us write
We prove that E
is open and closed in D, hence, D is not connected.
Clearly, we have that ; 6= E ( D. Since E 0 is open in D 0 , there is
an open set U 0 such that U 0 \ D
the union of the connected components of U 0 that meet some C l , l 2 L.
The set U 00 , as a union of open sets, is open, and the set U , as the
union of U 00 and some internal holes of C l , which are all open, is also
open. If j 2 J nL, we have that E\G "
Indeed, if the last equality does not hold, we would have
There is a connected component O of U 0 meeting some C l , l 2 L (by
definition of U″), and C_j. Indeed, if it meets some hole H of C_j with |H| ≤ ε, then H cannot contain C_l. Otherwise, since H \ Sat(C_l)
is open and not empty, thus of positive measure, we would have jHj >
l )j ". Thus O is connected and meets two dierent connected
components
of
and the component containing C l ), hence
O \ C j 6= ;. This implies that U 0 \ C j 6= ;, contradicting the identity
We conclude that () does not hold, i.e., U 00 \G "
and therefore U 00 \ (D n E) = ;, proving that U \ We have
shown that the set E is open in D. Applying the same argument to
of D we prove that E is also closed in D.
LEMMA 6. Let (C n ) n2IN be a nonincreasing sequence of continua and
n C n . If H is a hole of C, there exists n 0 2 IN and a non-decreasing
sequence being a hole of C n , such that
Proof. If x 2 H, then there is some n 0 such that x 62 C n for n n 0 .
Thus, if n n 0 , x is in some hole H n of C n . Obviously, H n H, and
therefore S H n H.
U be a neighborhood of y and V be a connected
neighborhood of y such that V U . Since y 2
H n there is some
and, by monotonicity of (H n ), we may write
nnn H n , we also have that
From () and (), the connectedness of V implies that
It follows that V \ C 6= ;, which implies
that U \ C 6= ;. This being true for any neighborhood U of y, we get
that y 2
@
This implies that H \ @
H n is closed
in H. Since it is also open, as a union of open sets, the connectedness
of H implies that
THEOREM 2. The operator G " is upper semicontinuous, i.e., if (K n )
is a nonincreasing sequence of compact sets, then
Proof. The inclusion of the left hand side term into the right hand
side one is due to the monotonicity of G " . We just have to show the
other inclusion.
By Lemma 5, for any n, there is some
. For each n 1, C n is inside some
implying, due to Lemma 5, that This shows that the sequence
of continua (C n ) is nonincreasing, so that their intersection C
is a continuum, thanks to Zoretti's theorem. Since C is contained in
some component of T
remains to show that x 2 G " C.
By taking H
proves that H
so that
. Thus jSatCj ". Moreover, if x belongs to
some internal hole H of C, since x 2 G " C n , the associated sequence H n
obtained from Lemma 6 is such that jH n j " and thus
". Hence x
4.3. Properties
LEMMA 7. If A is a closed set, C a connected component of A and
H an internal hole of A, then, for any x in H,
G:
Proof. The right hand side term of this equality, H 0 , is obviously
contained in H. Suppose this inclusion is strict. The set H being connected
and H 0 being open (as a union of open sets), H 0 is not closed
in H, so that H \ @H 0 6= ;. Let y 2 H \ @H 0 and let U be a connected
neighborhood of y. We claim that U\A 6= ;. Otherwise, we would have
U
A, and as U meets some
n A with U [ G 0 being
connected, contradicting G
A). We conclude that y 2 A.
Then there is some G
Since G is an open set, it contains some neighborhood of y 2 @H 0 , in
particular, it meets some G
n A)) such that x 2 G 00 H.
Thus G and G 00 are nested, and, as G 6 H 0 , we have that G 00 G.
Thus, x 2 G and G H, implying that G H 0 , which contradicts our
assumption since y 2 G. We conclude that H
LEMMA 8. Let A; B
be closed sets.
Proof. Suppose that A and B are disjoint. Taking components of A
and B instead of A and B, we may assume that A and B are connected.
The result being obvious if G " A, or G " B, is empty, we may also assume
none of them is empty. Then Sat(A) and Sat(B) are either nested or
disjoint. If they are disjoint, since G " A Sat(A) and G " B Sat(B),
(5a) is obvious. If they are nested, without loss of generality, we may
assume that Sat(A) Sat(B). Then Sat(A) is inside a hole H of B.
We get that jHj jSatAj ". Since Sat(A) is closed and H is open,
is an open and nonempty set, thus, it has positive measure.
This yields jHj > ", and therefore Sat(A) \
(5a).
Now, we suppose that A [
Let x 62 G " A. In particular,
this implies that Sat(C)
for any C = cc(A). For any such set C,
n SatC is connected, and also
is
n SatC, which is a compact set.
By Lindelöf's theorem, the intersection of a family of continua can be written as the intersection of a sequence of them. We can thus find a
sequence (C n ) of connected components of A such that
Let D
Clearly, (D n ) is a nonincreasing sequence of
continua, and we may write
which is thus a continuum D. We observe that D B. Indeed, if
y is in some connected component C of A. Since
yn Sat(C), obviously we have that y 2 @Sat(C) @C. Now, we
prove that any connected neighborhood U of y meets a point
of
A.
Otherwise, C[U A would be connected, which, in turn, implies that
U C.
Since
n A B, we conclude that y B, a contradiction
with our choice of y. This proves that D B. We also observe that
, and, thus, jSat(D)j ".
one of the following cases happens:
(iii) There is some such that x 2 Sat(C) and jSat(C)j ".
Suppose that (i) holds. Then x 2 D. Since jSat(D)j ", we have
Suppose that (ii) holds. Let D be the set defined above. If x ∈ D
we have again that x 2 D G " D G " B. If x 62 D, then x is in a hole
H of D. Thanks to Lemma 7, we can write
G:
Thanks to Lindelöf's theorem, this union of open sets can be written
as the union of a sequence G n of these sets. For each n,
Since
that G 0
and we obtain that jG n j jSat(C)j < ". We conclude that
sup n2IN jG n j ". Thus x 2 H G " D G " B.
Finally, we suppose that (iii) holds. Let
C:
We claim that being a connected component of A.
If there is a finite number of sets E in the intersection above, the result is just due to the fact that they are nested. If there is an infinite number of such sets E, we can write their intersection as the intersection of a sequence E_n of them, thanks to Lindelöf's theorem. By taking ⋂_{k≤n} E_k
instead of E n , we may also assume that this sequence is nonincreasing.
The set T is therefore a continuum and jT
connected neighborhood of y. Since U 6 T , so
there is some n such that U n E n 6= ;. Since U is connected and meets
its complement, U also meets @E n @A A. We obtain that
A. This shows that @T A. On the other hand, we observe
that the complement of T is connected, being the union of an increasing
sequence of connected sets. Thus @T can be written as the intersection
of two continua, thus, it is connected,
since
is unicoherent. Thus,
there is some T such that @T T 0 , and this implies that
jEj ", we have that T 0 \E 6= ;, and, therefore, sat(T 0 ) E. It follows
that Sat(T 0 ) T. We have, thus, the equality
Now, observe that x must be in a hole H of T 0 and we must have
jHj > ", since otherwise x A. Using Lemma 7, we write
G;
and, as above, we may write the above union as the union of a non-decreasing
sequence of such sets G n . Thus there is some n such that
". We write G
n . If x is in a hole H 0 of
contained in a connected component K of A and
". In this case, we
also have that x
n . Since G 0
n is contained in some connected
component K 0 of B, we get that x
THEOREM 3. Restricted to continuous functions, ~
G " is selfdual.
Proof. We shall prove that condition 3 of Definition 1 holds.
According to (5a), we may write for any 2 IR,
and, since G " [u
n ], by taking the intersection
over all n, we get
Therefore
let x be such that for all n > 0, x
Due to (5b),
we have that x 2
This proves that
and, thus,
By taking the complement of each part, we get:
which proves that
and, actually, we have the equality of both sets.
THEOREM 4. ~
semicontinuous functions into upper
semicontinuous functions and continuous functions to continuous functions
Proof. We have to show that the image of a compact set K is a
compact set. Let sequence of points of G " K converging to
x. We shall prove that x 2 G " K.
As shown by Lemma 5, x n belongs to some G " K n , where K
cc(K). If the family {K_n ; n ∈ IN} is finite, we can extract a subsequence
of belonging to some G " K n 0
, which is a closed set since its
complement is a union of holes of K n 0
, and the holes are open sets.
Thus, x
. We may now assume that {K_n ; n ∈ IN} is infinite, and, maybe after extraction of a subsequence, that K_m ∩ K_n = ∅ for any m ≠ n.
We have that Sat(G " K n only a nite number of
these saturations are two by two disjoint, since each of them has measure
at least ". Thus, after extraction of a subsequence, if necessary, we
may assume that they all intersect, so that they form either a decreasing
or an increasing sequence.
If the sequence (Sat(K n )) n2IN is decreasing, then their intersection
is a set Sat(K 0 ), cc(K). This can be shown as in Lemma 8. Then
we have that jSat(K 0
(otherwise, we would have K su-ciently large n), and
contained in a hole of K n for
any n, thus x n 62 Sat(K 0 ) for any n. Since x 2 Sat(K 0 ) we conclude that
the desired result.
Let us assume that (Sat(K n )) n2IN is increasing. Since the lim inf of
the sequence K n is nonempty (it contains x), its lim sup is a continuum
C, according to Zoretti's theorem. Since K is compact, it follows that
C). We observe that x 2 K 0 . We shall prove
that jSat(K 0 )j ", and more precisely that all Sat(K n ), which have
measure ", are in the same internal hole of K 0 . This result implies
If K n \K 0 6= ;, then K since both are connected components
of K. Since we are assuming that the sets K n are two by two disjoint,
this cannot happen twice. Thus we may assume that K n \ K
for any n. The sequence (Sat(K n being increasing, all K n are in the
same hole of K 0 . Suppose that this hole is the external hole H of K 0 .
Since H is open, there is a continuum L joining p 1 and an arbitrary
point y 0 of K 0 . Since K 0 is in an internal hole of K 1 , there is some
. In this manner, we can construct a sequence (y n ) n2N such
that y n 2 L\K n for all n. L being compact, some subsequence of (y n )
converges to a point y 2 L\C. It follows that K 0 \L 6= ;, contrary to
the assumption that L was in a hole of K 0 . This proves our claim.
Applying the preceding result, if u is continuous, ~
" u is upper semi-
continuous. Since u is also upper semicontinuous, ~
" u is
also upper semicontinuous, hence ~
" u is lower semicontinuous. Thus,
~
" u is continuous.
PROPOSITION 8. For " 0 ", ~
Therefore ~
G " is
idempotent.
Proof. The conclusion that ~
G " is idempotent derives from the previous
statement by taking "
The result amounts to show that for any , G " 0 [u
We distinguish three families among the connected components
I
We observe that I . Thanks to Lemma 5, we may write
internal hole of C
internal hole of C
the measure of
is
For the same reason, if
In conclusion, this yields
The following properties are an easy consequence of the definition
of ~
LEMMA 9. The operator ~
" satises the following properties
(i) If u v, then ~
function u and any 2 IR.
LEMMA 10. Let u be an upper semicontinuous function and let
Proof. Let us fix ε < δ. Since [ ~
it will be
sufficient to prove that G_ε[v ≥ λ] = [v ≥ λ] for almost all λ ∈ IR. Let
denote the family of connected components of
K and H(K) the family of internal holes of K. We observe that
C2C(K);jSat(C)j"
internal hole of C; jHj > "g:
Since, by Proposition 6, the connected components of K have measure ≥ δ, any C ∈ C(K) satisfies |Sat(C)| ≥ ε. Similarly, since any internal hole of K contains a connected component of [v < λ], it has also measure ≥ δ > ε. Hence, the family of sets {H ∈ H(K); |H| ≤ ε} is empty. We conclude that G_ε
PROPOSITION 9. Let u 2
0+.
Proof. Using the above Lemma, we have
~
for any "; - > 0 such that " < -. By Proposition 5, given > 0, there
is some - 0 > 0 such that
This implies that
~
u) ~
and, therefore,
u) ~
Now, we choose " < - 0 and we obtain that
u) ~
2:
The proposition follows.
4.4. Relations with connected operators
We want to compare the grain lter described above with the notion of
grain operators as defined in [8]. To fix ideas, we shall work
in
with
the classical connectivity. Thus, we denote by C the family of connected
sets
of
. A grain criterion is a mapping c : C → {0, 1}. Given two grain criteria, f for the foreground and b for the background, the associated grain operator f,b is defined by
or
In [8], Heijmans characterizes when the grain operators are selfdual and when they are increasing. Indeed, he proves that the operator f,b is self-dual if and only if f = b. He also proves that f,b is increasing if and only if f and b are increasing and the following condition holds
for any X
and any x.
We shall say that a grain criterion c : C → {0, 1} is upper semicontinuous
on compact sets if c(\ n K n decreasing
sequence of continua K_n, n ∈ IN.
PROPOSITION 10. Let be a self-dual and increasing grain oper-
ator
in
associated to the grain criterion c. Assume that c is upper
semicontinuous.
.
Proof. We have that either
or
for some point x. In the first case, we deduce that
any nonempty subset X
of
and, therefore, we have that
for any ; 6= X
Since also we have that
any X
In the second case, using the upper semicontinuity of c
we have that c(B(x;
choose observe that c(cc(X [fxg;
which contradicts (6).
The above proposition says that there are no nontrivial translation
invariant, increasing and self-dual grain operators. Other types of connected
operators called flattenings and levelings were introduced by F. Meyer in [15], [16] and further studied in [29]. In particular, Serra proves that there exist increasing and selfdual flattenings and levelings based on markers [29].
Finally, let us prove that the grain filter we have introduced above corresponds to a universal criterion to define increasing and self-dual filters. Let us recall the definition of connected operator [25], [30], [8].
For that, given a set X
, we denote by P (X) the partition
of
constituted by the cc(X) and cc(X c ). The family of all subsets of X
will be denoted by P(X).
DEFINITION 3. An operator :
P(
P(
connected if the
partition P ( (X)) is coarser than P (X) for every set X
.
Given a connected operator
P(
P(
we shall say that
(i) is increasing if (X) (Y ) for any X Y
(ii) acts additively on connected components if
when X i is the family of connected components of X.
(iii) is self-dual if (IR N n any open or closed
set X
(iv) is bounded if p 1
It is not difficult to see that if is a connected operator which is
increasing and self-dual then it induces an increasing and self-dual lter
on continuous functions.
PROPOSITION 11. Let :
P(
P(
) be a connected operator.
Suppose that is increasing, self-dual, bounded and acts additively on
connected components. Let Ker := fX
be an open or closed connected set. Then, if Sat(X) 62 Ker , we have
where H(X) denotes the family of internal holes of X.
Proof. Since is self-dual, without loss of generality, we may assume
that p 1 62 X. Since is a connected operator, if Z is simply
connected, then (Z) must be one of the sets f;;
g. Since
is bounded we must have that either In
particular, either Sat(X). In the rst case, we
have that Sat(X) 2 Ker . In the second case, using the additivity of
on connected components and the observation at the beginning of the
proof, we have
Now, using the self-duality of we have
(X)
Obviously, if is increasing, then Ker is an ideal of sets, i.e., if
Y X and X 2 Ker , then Y 2 Ker .
4.5. Interpretation
Similar remarks to those for the extrema filters can be made concerning the shapes of G̃_ε u. The shapes of G̃_ε u are the shapes of u of sufficient measure. G̃_ε corresponds to a pruning of the tree of shapes of u.
5. Experiment
Theoretically, the filters M_ε and G_ε are different. This is illustrated in Figure 1. The second row shows the filtered images; they are all different, stressing that their respective notions of grains are different. The difference appears in the presence of holes. Concerning natural images, the difference would be the most apparent on certain images of
Figure 1. Top-left: original image u, composed of three constant regions. Bottom left: G3u. Middle and right columns: the images filtered by the extrema filters M3 and their compositions.
Figure 2. Texture image of a carpet, size 254 × 173.
textures, for which the nestedness of shapes would be important. Figure 2 shows a complex texture, and Figure 3 the image filtered according to the three filters of parameter 30 pixels. Whereas they are actually different, they are visually equivalent, and distinguishing them requires some effort. We can explain this by the fact that connected components of level sets having a hole of greater area than themselves are scarce. In other words, the situation illustrated by Figure 1 is not frequent.
Figure 3. Three grain filters applied to the image u of Figure 2. Left column: G30u.
Acknowledgements
We acknowledge partial support by the TMR European project "Viscosity solutions and their applications", reference FMRX-CT98-0234, and the CNRS through a PICS project. The first author acknowledges partial support by the PNPGC project, reference BFM2000-0962-C02-
--R
'Image Iterative Smoothing and P.D.E.'s'.
Random Sets and Integral Geometry.
Mathematical Morphology and Its Application to Signal and Image Processing
Mathematical Morphology and Its Application to Signal and Image Processing
Image Analysis and Mathematical Morphology.
--TR
Introduction to mathematical morphology
Watersheds in Digital Spaces
Affine invariant scale-space
Morphological multiscale segmentation for image coding
Self-dual morphological operators and filters
From connected operators to levelings
The levelings
Fundamenta morphologicae mathematicae
A Compact and Multiscale Image Model Based on Level Sets
--CTR
Renato Keshet, Shape-Tree Semilattice, Journal of Mathematical Imaging and Vision, v.22 n.2-3, p.309-331, May 2005
Renato Keshet, Adjacency lattices and shape-tree semilattices, Image and Vision Computing, v.25 n.4, p.436-446, April, 2007 | connected sets;extrema filters;connected operators;grain filters;mathematical morphology |
607707 | Algebraic geometrical methods for hierarchical learning machines. | Hierarchical learning machines such as layered perceptrons, radial basis functions, Gaussian mixtures are non-identifiable learning machines, whose Fisher information matrices are not positive definite. This fact shows that conventional statistical asymptotic theory cannot be applied to neural network learning theory, for example either the Bayesian a posteriori probability distribution does not converge to the Gaussian distribution, or the generalization error is not in proportion to the number of parameters. The purpose of this paper is to overcome this problem and to clarify the relation between the learning curve of a hierarchical learning machine and the algebraic geometrical structure of the parameter space. We establish an algorithm to calculate the Bayesian stochastic complexity based on blowing-up technology in algebraic geometry and prove that the Bayesian generalization error of a hierarchical learning machine is smaller than that of a regular statistical model, even if the true distribution is not contained in the parametric model. | Introduction
Learning in artificial neural networks can be understood as statistical estimation
of an unknown probability distribution based on empirical samples (White, 1989;
Watanabe & Fukumizu, 1995). Let p(y|x, w) be a conditional probability density
function which represents a probabilistic inference of an artificial neural network,
where x is an input and y is an output. The parameter w, which consists of a lot
of weights and biases, is optimized so that the inference p(y|x, w) approximates the
true conditional probability density from which training samples are taken.
Let us reconsider a basic property of a homogeneous and hierarchical learning
machine. If the mapping from a parameter w to the conditional probability density
p(y|x, w) is one-to-one, then the model is called identifiable. If otherwise, then it is
called non-identifiable. In other words, a model is identifiable if and only if its parameter
is uniquely determined from its behavior. The standard asymptotic theory
in mathematical statistics requires that a given model should be identifiable. For
example, identifiablity is a necessary condition to ensure that both the distribution
of the maximum likelihood estimator and the Bayesian a posteriori probability density
function converge to the normal distribution if the number of training samples
tends to infinity (Cramer, 1949). When we approximate the likelihood function
by a quadratic form of the parameter and select the optimal model using information
criteria such as AIC, BIC, and MDL, we implicitly assume that the model is
identifiable.
However, many kinds of artificial neural networks such as layered perceptrons, radial
basis functions, Boltzmann machines, and gaussian mixtures are non-identifiable,
hence either their statistical property is not yet clarified or conventional statistical
design methods can not be applied. In fact, a failure of likelihood asymptotics for
normal mixtures was shown from the viewpoint of testing hypothesis in statistics
(Hartigan, 1985). In researches of artificial neural networks, it was pointed out
that AIC does not correspond to the generalization error by the maximum likelihood
method (Hagiwara, 1993), since the Fisher information matrix is degenerate
if the parameter represents the smaller model (Fukumizu, 1996). The asymptotic
distribution of the maximum likelihood estimator of a non-identifiable model was
analyzed based on the theorem that the empirical likelihood function converges to
the gaussian process if it satisfies Donsker's condition (Dacunha-Castelle & Gassiat,
1997). It was proven that the generalization error by the Bayesian estimation is far
smaller than the number of parameters divided by the number of training samples
(Watanabe, 1997; Watanabe, 1998). When the parameter space is conic and sym-
metric, the generalization error of the maximum likelihood method is different from
that of a regular statistical model (Amari & Ozeki, 2000). If the log likelihood function
is analytic for the parameter and if the set of parameters is compact, then the
generalization error by the maximum likelihood method is bounded by the constant
divided by the number of training samples (Watanabe, 2001b).
Let us illustrate the problem caused by non-identifiability of layered learning
machines. If p(y|x, w) be a three-layer perceptron with K hidden units and if w 0
is a parameter such that p(y|x, w 0 ) is equal to the machine with K 0 hidden units
then the set of true parameters
consists of several sub-manifolds in the parameter space. Moreover, the Fisher
information matrix,
log p(y|x, w)
log p(y|x, w)p(y|x, w)q(x)dxdy,
where q(x) is the probability density function on the input space, is positive semi-definite
but not positive definite, and its rank, rank I(w), depends on the parameter
This fact indicates that artificial neural networks have many singular points in
the parameter space (Figure 1). A typical example is shown in Example.2 in section
3. By the same reason, almost all homogenous and hierarchical learning machines
such as a Boltzmann machine, a gaussian mixture, and a competitive neural network
have singularities in their parameter spaces, resulting that we have no mathematical
foundation to analyze their learning.
In the previous paper (Watanabe, 1999b; Watanabe, 2000; Watanabe, 2001a), in
order to overcome such a problem, we proved the basic mathematical relation between
the algebraic geometrical structure of singularities in the parameter space and
the asymptotic behavior of the learning curve, and constructed a general formula to
calculate the asymptotic form of the Bayesian generalization error using resolution
of singularities, based on the assumption that the true distribution is contained in
the parametric model.
In this paper, we consider a three-layer perceptron in the case when the true
probability density is not contained in the parametric model, and clarify how singularities
in the parameter space a#ect learning in Bayesian estimation. By employing
an algebraic geometrical method, we show the following facts.
(1) The learning curve is strongly a#ected by singularities, since the statistical estimation
error depends on the estimated parameter.
(2) The learning e#ciency can be evaluated by using the blowing-up technology in
algebraic geometry.
(3) The generalization error is made smaller by singularities, if the Bayesian estimation
is applied.
These results clarify the reason why the Bayesian estimation is useful in practical
applications of neural networks, and demonstrate a possibility that algebraic geometry
plays an important role in learning theory of hierarchical learning machines,
just as differential geometry did in that of regular statistical models (Amari,
1985).
This paper consists of 7 sections. In section 2, the general framework of Bayesian
estimation is formulated. In section 3, we analyze a parametric case when the
true probability density function is contained in the learning model, and derive the
asymptotic expansion of the stochastic complexity using resolution of singularities.
In section 4, we also study a non-parametric case when the true probability density
is not contained, and clarify the effect of singularities in the parameter space. In
section 5, the problem of the asymptotic expansion of the generalization error is
considered. Finally, sections 6 and 7 are devoted to discussion and conclusion.
Bayesian Framework
In this section, we formulate the standard framework of Bayesian estimation and
Bayesian stochastic complexity (Schwarz 1974; Akaike, 1980; Levin, Tishby, & Solla,
1990; Mackay, 1992; Amari, Fujita, & Shinomoto, 1992; Amari & Murata, 1993).
Let p(y|x, w) be a probability density function of a learning machine, where an
input x, an output y, and a parameter w are M , N , and d dimensional vectors,
respectively. Let q(y|x)q(x) be a true probability density function on the input and output space, from which training samples {(x_i, y_i); i = 1, 2, ..., n} are independently taken. In this paper, we mainly consider the Bayesian framework, hence the estimated probability density ρ_n(w) on the parameter space is defined by
ρ_n(w) = (1/Z_n) exp(-n H_n(w)) φ(w),
H_n(w) = (1/n) Σ_{i=1}^{n} log [ q(y_i|x_i) / p(y_i|x_i, w) ],
where Z_n is the normalizing constant, φ(w) is an arbitrary fixed probability density function on the parameter space called an a priori distribution, and H_n(w) is the empirical Kullback distance. Note that the a posteriori distribution ρ_n(w) does not depend on {q(y_i|x_i)}, since each q(y_i|x_i) is a constant function of w. Hence it can be written in the other form,
ρ_n(w) ∝ φ(w) Π_{i=1}^{n} p(y_i|x_i, w).
The inference p_n(y|x) of the trained machine for a new input x is defined by the average conditional probability density function,
p_n(y|x) = ∫ p(y|x, w) ρ_n(w) dw.
The generalization error G(n) is defined by the Kullback distance of p_n(y|x) from q(y|x),
G(n) = E_n { ∫∫ q(y|x) log [ q(y|x) / p_n(y|x) ] q(x) dx dy },   (1)
where E_n{ · } represents the expectation value over all sets of training samples. One
of the most important purposes in learning theory is to clarify the behavior of the
generalization error when the number of training samples is sufficiently large.
It is well known (Levin, Tishby, & Solla, 1990; Amari, 1993; Amari & Murata, 1993) that the generalization error G(n) is equal to the increase of the stochastic complexity F(n),
G(n) = F(n+1) - F(n),   (2)
for an arbitrary natural number n, where F(n) is defined by
F(n) = - E_n { log ∫ exp(-n H_n(w)) φ(w) dw }.
The stochastic complexity F (n) and its generalized concepts, which are sometimes
called the free energy, the Bayesian factor, or the logarithm of the evidence, can
be seen in statistics, information theory, learning theory, and mathematical physics
(Schwarz, 1974; Akaike, 1980; Rissanen, 1986; Mackay, 1992; Opper & Haussler,
1995; Meir & Merhav, 1995 ; Haussler & Opper, 1997; Yamanishi, 1998). For
example, both Bayesian model selection and hyperparatemeter optimization are
often carried out by minimization of the stochastic complexity before averaging.
They are called BIC and ABIC, which are important in practical applications.
The stochastic complexity satisfies two basic inequalities. Firstly, we define H(w) and F̄(n) respectively by
H(w) = ∫∫ q(y|x) log [ q(y|x) / p(y|x, w) ] q(x) dx dy,
F̄(n) = - log ∫ exp(-n H(w)) φ(w) dw.
Note that H(w) is called the Kullback information. Then, by applying Jensen's inequality,
F(n) ≤ F̄(n)   (4)
holds for an arbitrary natural number n (Opper & Haussler, 1995; Watanabe, 2001a).
Secondly, we use the notations F(φ, n) = F(n) and F̄(φ, n) = F̄(n), which explicitly show the a priori probability density φ(w). Then F(φ, n) and F̄(φ, n) can be understood as generalized stochastic complexities for a case when φ(w) is a non-negative function. If φ(w) and φ′(w) satisfy
φ′(w) ≤ φ(w)   (for all w),   (5)
then it immediately follows that
F(φ, n) ≤ F(φ′, n),   F̄(φ, n) ≤ F̄(φ′, n).   (6)
Therefore, the restriction of the integrated region of the parameter space makes the stochastic complexity not smaller.
exp(-nH(w))#(w)dw, (7)
with su#ciently small # > 0, then
These two inequalities eq.(4) and eq.(8) give upper bounds of the stochastic com-
plexity. On the other hand, if the support of #(w) is compact, then a lower bound
is proven
Moreover, if the learning machine contains the true distribution, then
holds (Watanabe, 1999b; Watanabe, 2001a).
In this paper, based on algebraic geometrical methods, we prove rigorously the
upper bounds of F (n) such as
are constants and o(log n) is a function of n which satisfies o(log n)/ log n #
Mathematically speaking, although the generalization error G(n) is
equal to F (n natural number n, we can not derive the asymptotic
expansion of G(n). However, in section 5, we show that, if G(n) has some
asymptotic expansion, then it should satisfy the inequality
for su#ciently large n, from eq.(11). The main results of this paper are the upper
bounds of the stochastic complexity, however, we also discuss the behavior of the
generalization errors based on eq.(12).
3 A Parametric Case
In this section, we consider a parametric case when the true probability distribution
q(y|x)q(x) is contained in the learning machine p(y|x, w)q(x), and show the relation
between the algebraic geometrical structure of the machine and the asymptotic form
of the stochastic complexity.
3.1 Algebraic Geometry of Neural Networks
In this subsection, we briefly summarize the essential result of the previous paper.
For the mathematical proofs of this subsection, see (Watanabe, 1999b; Watanabe,
2001a). Strictly speaking, we need assumptions that log p(y|x, w) is an analytic
function of w, and that it can be analytically continued to a holomorphic function
of w whose associated convergence radius is positive uniformly for arbitrary (x, y)
that satisfies q(y|x)q(x) > 0 (Watanabe, 2000; Watanabe, 2001a). In this paper, we
apply the result of the previous paper to the three-layer perceptron.
If a three-layer perceptron is redundant to approximate the true distribution,
then the set of true parameters {w; H(w) = 0} is a union of several sub-manifolds
in the parameter space. In general, the set of all zero points of an analytic function
is called an analytic set. If the analytic function H(w) is a polynomial, then the set
is called an algebraic variety. It is well known that an analytic set and an algebraic
variety have complicated singularities in general.
We introduce a state density function v(t),
v(t) = ∫_{H(w) ≤ ε} δ(t - H(w)) φ(w) dw,
where δ(t) is Dirac's delta function and ε > 0 is a sufficiently small constant. By definition, if t < 0 or t > ε, then v(t) = 0. Using v(t), F̄_ε(n) is rewritten as
F̄_ε(n) = - log ∫ exp(-nt) v(t) dt.   (13)
Hence, if v(t) has an asymptotic expansion for t → 0, then F̄_ε(n) has an asymptotic expansion for n → ∞.
In order to examine v(t), we introduce a kind of zeta function J(z) (Sato & Shintani, 1974) of the Kullback information H(w) and the a priori probability density φ(w), which is a function of one complex variable z,
J(z) = ∫_{H(w) ≤ ε} H(w)^z φ(w) dw   (14)
       = ∫ t^z v(t) dt.   (15)
Then J(z) is an analytic function of z in the region Re(z) > 0. It is well known in
the theory of distributions and hyperfunctions that, if H(w) is an analytic function
of w, then J(z) can be analytically continued to a meromorphic function on the
entire complex plane and its poles are on the negative part of the real axis (Atiyah,
1970; Bernstein, 1972; Sato & Shintani, 1974; Björk, 1979). Moreover, the poles of J(z) are rational numbers (Kashiwara, 1976). Let -λ_1 and m_1 be the largest pole of J(z) and its order, respectively. Note that eq.(15) shows that J(z) (z ∈ C) is the Mellin transform of v(t). Using the inverse Mellin transform, we can
show that v(t) satisfies
v(t) ≅ c_0 t^{λ_1 - 1} (- log t)^{m_1 - 1}   (t → +0),
where c_0 > 0 is a positive constant. By eq.(13), F̄_ε(n) has an asymptotic expansion,
F̄_ε(n) = λ_1 log n - (m_1 - 1) log log n + O(1),
where O(1) is a bounded function of n. Hence, by eq.(8),
F(n) ≤ λ_1 log n - (m_1 - 1) log log n + O(1).
Moreover, if the support of φ(w) is a compact set, by eq.(9), we obtain an asymptotic expansion of F(n),
F(n) = λ_1 log n - (m_1 - 1) log log n + O(1).
We have the first theorem.
Theorem 1 (Watanabe, 1999b; Watanabe, 2001a) Assume that the support of φ(w) is a compact set. The stochastic complexity F(n) has an asymptotic expansion,
F(n) = λ_1 log n - (m_1 - 1) log log n + O(1),
where -λ_1 and m_1 are respectively the largest pole and its order of the function that is analytically continued from
J(z) = ∫ H(w)^z φ(w) dw   (Re(z) > 0),
where H(w) is the Kullback information and φ(w) is the a priori probability density
function.
Remark that, if the support of #(w) is not compact, then Theorem 1 gives an upper
bound of F (n).
The important constants λ_1 and m_1 can be calculated by an algebraic geometrical method. We define the set of parameters W_ε by
W_ε = {w : H(w) ≤ ε}.
It is proven by Hironaka's resolution theorem (Hironaka, 1964 ; Atiyah, 1970) that
there exist both a manifold U and a resolution map
d
in an arbitrary neighborhood of an arbitrary u # U that satisfies
where a(u) > 0 is a strictly positive function and {k i } are non-negative even integers
Figure
2). Let
be a decomposition of W # into a finite union of suitable neighborhoods W # , where
By applying the resolution theorem to the function J(z),
H(w) z #(w)dw
H(w) z #(w)dw
is given by recursive blowing-ups, the Jacobian |g # (u)|
is a direct product local variables u 1 ,
d
where c(u) is a positive analytic function and {h j } are non-negative integers. In a
neighborhood U # , a(u) and #(g(u)) can be set as constant functions in calculation
of the poles of J(z), because we can take each U # small enough. Hence we can set
loss of generality. Then,
d
where both k_j^{(α)} and h_j^{(α)} depend on the neighborhood U_α. We find that J(z) has poles {-(h_j^{(α)} + 1)/k_j^{(α)}}, which are rational numbers on the negative part of the
real axis.
Since a resolution map g(u) can be found by using finite recursive procedures of blowing-ups, λ_1 and m_1 can be found algorithmically. It is also proven that λ_1 ≤ d/2 if {w; φ(w) > 0, H(w) = 0} ≠ ∅, and that m_1 ≤ d.
Theorem 2 (Watanabe, 1999b; Watanabe, 2001a) The largest pole -λ_1 of the function J(z) and its order m_1 can be algorithmically calculated by Hironaka's resolution theorem. Moreover, λ_1 is a rational number and m_1 is a natural number, and if {w; φ(w) > 0, H(w) = 0} ≠ ∅, then λ_1 ≤ d/2 and m_1 ≤ d,
where d is the dimension of parameter.
Note that, if the learning machine is a regular statistical model, then always λ_1 = d/2 and m_1 = 1. Also note that, if Jeffreys' prior is employed in neural network learning, which is equal to zero at singularities, the assumption {w; φ(w) > 0, H(w) = 0} ≠ ∅ is not satisfied, and then both λ_1 = d/2 and m_1 = 1 hold even if the Fisher metric
is degenerate (Watanabe, 2001c).
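Theorem 1 can also be checked numerically on a toy Kullback function of our own choosing (it is not one of the examples below). For H(w_1, w_2) = w_1² w_2² on [0, 1]² with the uniform prior, the zeta function ∫ (w_1² w_2²)^z dw has its largest pole at z = -1/2 with order m = 2, so -log ∫ exp(-n H(w)) dw should behave like (1/2) log n - log log n + O(1). The quadrature sketch below (all names hypothetical) illustrates this.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erf

def Zbar(n):
    """Integral of exp(-n w1^2 w2^2) over [0,1]^2, reduced to one dimension.

    The inner integral over w2 equals sqrt(pi) * erf(w1 sqrt(n)) / (2 w1 sqrt(n)).
    """
    inner = lambda w1: np.sqrt(np.pi) * erf(w1 * np.sqrt(n)) / (2 * w1 * np.sqrt(n))
    val, _ = quad(inner, 0.0, 1.0, limit=200)
    return val

for n in (10**2, 10**4, 10**6, 10**8):
    F_bar = -np.log(Zbar(n))
    prediction = 0.5 * np.log(n) - np.log(np.log(n))
    # The two columns differ by a bounded constant, as Theorem 1 predicts.
    print(f"n = {n:>10}:  -log Z = {F_bar:7.3f},  (1/2)log n - log log n = {prediction:7.3f}")
```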
Example.1 (Regular Model) Let us consider a regular statistical model
exp(-2
with the set of parameters Assume that the true
distribution is
exp(-2
and the a priori distribution is the uniform distribution on W . Then,
For a subset S # W , we define
Then
We introduce a mapping
Then
=2 z
has a pole at z = -1. We can show that J_{W_2}(z) has the same pole in just the same way as J_{W_1}(z). Hence λ_1 = 1 and m_1 = 1, resulting in F(n) ≅ log n + O(1). This coincides with the
well known result of the Bayesian asymptotic theory of regular statistical models.
The mapping in eq.(17) is a typical example of a blowing-up.
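The pole computation of Example 1 can be reproduced symbolically. Assuming the Kullback information of this regular two-parameter model is H(a, b) = (a² + b²)/2, a blowing-up substitution of the type (a, b) = (a, a t) factorizes the integral, and the a-part can be checked with sympy; the snippet below is only a sketch of that calculation under this assumption.

```python
import sympy as sp

a, t = sp.symbols('a t', positive=True)
z = sp.symbols('z', positive=True)

# After the blow-up (a, b) = (a, a t), with Jacobian a, the integrand of
# J(z) = \int ((a^2 + b^2)/2)^z da db becomes a^(2z+1) (1 + t^2)^z / 2^z.
radial = sp.integrate(a**(2*z + 1), (a, 0, 1))
print(sp.simplify(radial))      # 1/(2*z + 2): a simple pole at z = -1

# The t-integral is analytic near z = -1, so the largest pole of J(z) is -1
# with order one, i.e. lambda_1 = 1, m_1 = 1, and F(n) ~ log n, as in Example 1.
```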
Example.2 (Non-identifiable model) Let us consider a learning machine,
p(y|x, a, b, c) =# 2#
exp(-2
Assume that the true distribution is same as eq.(16), and that
the a priori probability distribution is the uniform one on the set
Then, the Kullback information is
Let us define two sets of parameters,
|
By using blowing-ups recursively, we find a map which is defined by
By using this transform, we obtain
Therefore,
H(w) z dw
=2 z
The largest pole of J_{W_1}(z) is -3/4 and its order is one. It is also shown that J_{W\W_1}(z) has largest pole -3/4 with order one. Hence λ_1 = 3/4 and m_1 = 1, resulting in F(n) = (3/4)
log n +O(1).
3.2 Application to Layered Perceptron
We apply the theory in the foregoing subsection to the three-layer perceptron. A
three-layer perceptron with the parameter w = {(a_k, b_k, c_k); k = 1, ..., K} is defined by
p(y|x, w) = (2πs²)^{-N/2} exp( - ||y - f(x, w)||² / (2s²) ),   (18)
f(x, w) = Σ_{k=1}^{K} a_k σ(b_k · x + c_k),   (19)
where y, f(x, w), and a_h are N dimensional vectors, x and b_h are M dimensional vectors, c_h is a real number, and M, N, and K are the numbers of input units, output units, and hidden units. In this paper, we consider a machine which does not estimate the standard deviation s > 0 (s is a constant). We assume that the true distribution is
q(y|x) q(x) = (2πs²)^{-N/2} exp( - ||y||² / (2s²) ) q(x).   (20)
That is to say, the true regression function is identically zero. This is a special case, but
analysis of this case is important in the following section where the true regression
function is not contained in the model.
Theorem 3 Assume that the learning machine given by eq.(18) and eq.(19) is
trained using samples independently taken from the distribution, eq.(20). If the a
priori distribution satisfies φ(w) > 0 in a neighborhood of the origin, then
(Proof of Theorem 3) We use the notations
a
Then the Kullback information is
(b, c) a hp a kp ,
where
Our purpose is to find the pole of the function
where
Let us apply the blowing-up technique to the Kullback information H(a, b, c).
Firstly, we introduce a mapping
which is defined by
a
a
Let u # be the variables of u except u 11 , in other words,
where
and the Jacobian |g # (u)| of the mapping g is
We define a set of paramaters for # > 0
By the assumption, there exists # > 0 such that
In order to obtain an upper bound of the stochastic complexity, we can restrict the
integrated region of the parameter space, by using eq.(5) and (6).
By the assumption #(w) > 0 in g(U(#)). In calculation of the pole of J(z), we can
assume is a constant) in g(U(#)).
du # db dc
The pole of the function #
respectively the largest poles of J(z) and
Then, since H 1 does not have zero point in the interval (-# 1 , #).
larger than -# 1 , then z = -NK/2 is a pole of J(z). If otherwise,
then J(z) has a larger pole than -NK/2. Hence λ_1 ≤ NK/2.
Secondly, we consider another blowing-up g,
which is defined by
Then, just the same method as the first half, there exists an analytic function
which implies
Therefore
By combining the above two results, the largest pole -λ_1 of J(z) satisfies the
inequality,
which completes the proof of Theorem 3. (End of Proof).
By Theorem 1,
Moreover, if G(n) has an asymptotic expansion (see section 5), we obtain an inequality
of the generalization error,
On the other hand, it is well known that the largest pole of a regular statistical
model is equal to -d/2, where d is the number of parameters. When a three-layer
perceptron with 100 input units, 10 hidden units, and 1 output unit is employed, then
the regular statistical model with the same number of parameters (d = 1020) has λ_1 = d/2 = 510. It should be emphasized that the generalization error of
the hierarchical learning machine is far smaller than that of the regular statistical
models, if we use the Bayesian estimation.
When we adopt the normal distribution as the a priori probability density, we
have shown the same result as Theorem 3 by a direct calculation (Watanabe, 1999a).
However, Theorem 3 shows systematically that the same result holds for an arbitrary
a priori distribution. Moreover, it is easy to generalize the above result to the case
when the learning machine has M input units, K 1 first hidden units, K 2 second
hidden units, ..., K_p p-th hidden units, and N output units. We assume that hidden
units and output units have bias parameters. Then by using same blowing-ups, we
can generalize the proof of Theorem 3,
Of course, this result holds only when the true regression function is the special
case of the zero regression function. However, in the following section, we show that this result is necessary
to obtain a bound for a general regression function.
4 A Non-parametric Case
In the previous section, we have studied a case when the true probability distribution
is contained in the parametric model. In this section, we consider a non-parametric
case when the true distribution is not contained in the parametric models, which is
illustrated in Figure 3.
Let w 0 be the parameter that minimizes H(w), which is a point C in Figure
3. Our main purpose is to clarify the effect of singular points such as A and B in
Figure
3 which are not contained in the neighborhood of w 0 . Let us consider a case
when a three-layer perceptron given by eq.(18) and eq.(19) is trained using samples
independently taken from the true probability distribution,
where g(x) is the true regression function and q(x) is the true probability distribution
on the input space. Let E(k) be the minimum function approximation error using
a three-layer perceptron with k hidden units,
Here we assume that, for each 1 ≤ k ≤ K, there exists a parameter w that attains
the minimum value.
Theorem 4 Assume that the learning machine given by eq.(18) and eq.(19) is
trained using samples independently taken from the distribution of eq.(21). If the a
priori distribution satisfies φ(w) > 0 for an arbitrary w, then
{
(D
where
(Proof of Theorem 4) By Jensen's inequality eq.(4), we have
where H(w) is the Kullback distance,
be natural numbers which satisfy both 0 ≤ k_1 ≤ K and
We divide the parameter
Also let # 1 and # 2 be real numbers which satisfy both # 1 > 1 and
Then, for arbitrary u, v # R N ,
Therefore, for arbitrary (x, w),
Hence we have an inequality,
where we use definitions,
As F (n) is an increasing function of H(w),
where
are some functions which satisfy
Here we can choose both # 1 (w 1 ) and # 2 (w 2 ) which are compact support functions.
Firstly, we evaluate F 1 (n). Let w # 1 be the parameter that minimizes H 1 (w 1 ).
Then, by eq.(22) and Theorem 2,
is the number of parameters in the three-layer perceptron
with k 1 hidden units.
Secondly, by applying Theorem 3 to F 2 (n),
By combining eq.(23) with eq.(24), and by taking the first exponent sufficiently close to 1, we obtain
{
for an arbitrary given
we obtain Theorem 4. (End of Proof).
Based on Theorem 4, if G(n) has an asymptotic expansion (see section 5), then G(n)
should satisfy the inequalities
for n > n_0 with a sufficiently large n_0. Hence
{ E(k)
for n > n_0 with a sufficiently large n_0. Figure 4 illustrates several learning curves corresponding to k (0 ≤ k ≤ K). The generalization error G(n) is smaller than
every curve.
It is well known (Barron, 1994; Murata, 1996) that, if g(x) belongs to some kind
of function space, then
E(k) ≤ C(g) / k
for sufficiently large k, where C(g) is a positive constant determined by the true
regression function g(x). Then,
{
If both n and K are sufficiently large, and if
then, by choosing
The inequality (27) holds if n is sufficiently large. If n is sufficiently large but not
extensively large, then G(n) is bounded by the generalization error of the middle
size model. If n becomes larger, then it is bounded by that of the larger model, and
if n is extensively large, then it is bounded by that of the largest model. A complex
hierarchical learning machine contains a lot of smaller models in its own parameter
space as analytic sets with singularities, and chooses the appropriate model adaptively
for the number of training samples, if Bayesian estimation is applied. Such a
property is caused by the fact that the model is non-identifiable, and its quantitative
effect can be evaluated by using algebraic geometry.
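To visualize how the effective model size selected by such a bound grows with n, one can evaluate a simplified stand-in for the right hand side of the minimization over k discussed above: the sketch below takes E(k) = C/k for the approximation error and, as a deliberately rough stand-in for the coefficient of (log n)/n, half the parameter count of the k-hidden-unit perceptron. The network sizes, the constant C, and this choice of coefficient are all ours; the only point of the sketch is that the minimizing k increases with n.

```python
import numpy as np

M, N, K, C = 100, 1, 50, 10.0          # hypothetical network sizes and constant C(g)

def bound(n, k):
    """A stand-in for E(k) + (coefficient_k) * log n / n with E(k) = C/k."""
    approx_error = C if k == 0 else C / k
    d_k = k * (M + N + 1)               # parameter count of the k-hidden-unit model
    return approx_error + 0.5 * d_k * np.log(n) / n

for n in (10**2, 10**3, 10**4, 10**5, 10**6):
    ks = np.arange(0, K + 1)
    vals = [bound(n, k) for k in ks]
    k_star = int(ks[int(np.argmin(vals))])
    print(f"n = {n:>8}:  best k = {k_star:2d},  bound = {min(vals):.4f}")
```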
5 Asymptotic Property of the Generalization Er-
ror
In this section, let us consider the asymptotic expansion of the generalization error.
By eq.(2), F(n) is equal to the accumulated generalization error,
where G(0) is defined by F (1). Hence, if G(n) has an asymptotic expansion for
#, then F (n) also has the asymptotic expansion. However, even if F (n)
has an asymptotic expansion, G(n) may not have an asymptotic expansion. In the
foregoing sections, we have proved that F (n) satisfies inequalities such as
are constants determined by the singularities and the true distribu-
tion. In order to mathematically derive an inequality of G(n) from eq.(30), we need
an assumption.
Assumption (A) Assume that the generalization error G(n) has an asymptotic
expansion
a q s q (n)
where {a q } are real constants, s q (n) > 0 is a positive and non-increasing function
of n which satisfies
Based on this assumption, we have the following lemma.
Lemma 1 If G(n) satisfies the assumption (A) and if eq.(30) holds, then G(n)
satisfies an inequality,
(Proof) By the assumption (A)
which shows a 1 #. If a 1 < #, then eq.(35) holds. If a
ks 2 (k). By eq.(32),eq.(33), and eq.(34), t(k) # or t(k) # C (C > 0).
If t(k) #, then, for arbitrary M > 0, there exists k 0 such that
Hence
which contradicts eq.(36). Hence t(n) # C and a 2 C #. (End of Proof Lemma 1).
In this paper, we have proven inequalities of the same form as eq.(30) in Theorems 1, 2, 3, and 4 without assumption (A). Then, we obtain corresponding inequalities of the same form as eq.(35)
if we adopt the assumption (A). In other words, if G(n) has an asymptotic
expansion and if eq.(30) holds, then G(n) should satisfy eq.(35). It is conjectured
that natural learning machines satisfy the assumption (A). A sufficient condition for
the assumption (A) is that F (n) has an asymptotic expansion
R
a
1). For
example, if the learner is
p(y|x, a) =# 2#
exp(-2
where the a priori distribution of a is the standard normal distribution, and if the
true distribution is
}),
then, it is shown by direct calculation that the stochastic complexity has an asymptotic
expansion
Hence G(n) has an asymptotic expansion
c 2+2n
It is expected that, in a general case, G(n) has the same asymptotic expansion as
Assumption (A); however, mathematically speaking, the necessary and sufficient
condition for it is not yet established. This is an important problem in statistics
and learning theory for the future.
6 Discussion
In this section, we discuss universal phenomena which can be observed in hierarchical learning
machines.
6.1 Bias and variance at singularities
We consider a covering neighborhood of the parameter space,
where {W (w j )} are the su#ciently small neighborhood of the parameter w j which
The number J in eq.(38) is finite when compact. Then, the upper-bound
of the stochastic complexity can be rewritten as
exp(-H(w))#(w)dw
is the function approximation error of the parameter w j
H(w),
and V (w j ) is the statistical estimation error of the neighborhood of w j ,
(- log n) m(w j )-1
where c 0 > 0 is a constant. The values -#(w j ) and m(w j ) are respectively the
largest pole and its multiplicity of the meromorphic function
Note that B(w j ) and V (w j ) are called the bias and the variance, respectively. In
the Bayesian estimation, the neighborhood of the parameter w j that minimizes
is selected with the largest probability. In regular statistical models, the variance
does not depend on the parameter, in other words, #(w j
for an arbitrary parameter w j , hence the parameter that minimizes the function approximation
error is selected. On the other hand, in hierarchical learning machines,
the variance V (w j ) strongly depends on the parameter w j , and the parameter that
minimizes the sum of the bias and variance is selected. If the number of training
samples is large but not extensively large, parameters among the singular point A
in
Figure
3 that represents a middle size model, is automatically selected, resulting
in the smaller generalization error. As n increases, the larger but not largest model
B is selected. At last, if n becomes extensively large, then the parameter C that
minimizes the bias is selected. This is a universal phenomenon of hierarchical learning
machines, which indicates the essential di#erence between the regular statistical
models and artificial neural networks.
6.2 Neural networks are over-complete basis
Singularities of a hierarchical learning machine originate in the homogeneous structure
of a learning model. A set of functions used in an artificial neural network, for
example, is a set of over-complete basis functions; in other words, the coefficients
{a(b, c)} in a wavelet type decomposition of a given function g(x),
are not uniquely determined for g(x) (Chui, 1989; Murata, 1996). In practical
applications, the true probability distribution is seldom contained in a parametric
model, however, we adopt a model which almost approximates the true distribution
compared with the fluctuation caused by random samples,
g(x) ≈ Σ_{k=1}^{K} a_k σ(b_k · x + c_k).
If we have an appropriate number of samples and choose an appropriate learning
model, it is expected that the model is in an almost redundant state, where output
functions of hidden units are almost linearly dependent. We expect that this paper
will be a mathematical foundation to study learning machines in such states.
7 Conclusion
We considered the case when the true distribution is not contained in the parametric
models made of hierarchical learning machines, and showed that the parameters
among singular points are selected by the Bayesian distribution, resulting in the
small generalization error. The quantitative effect of the singularities was clarified
based on the resolution of singularities in algebraic geometry. Even if the true
distribution is not contained in the parametric models, singularities strongly affect
and improve the learning curves. This is a universal phenomenon of the hierarchical
learning machines, which can be observed in almost all artificial neural networks.
--R
Likelihood and Bayes procedure.
A universal theorem on learning curves.
Four Types of Learning Curves.
Neural Computation
Statistical theory of learning curves under entropic loss.
Resolution of Singularities and Division of Distributions.
Communications of Pure and Applied Mathematics
Approximation and estimation bounds for artificial neural networks.
The analytic continuation of generalized functions with respect to a parameter.
Mathematical methods of statistics
An introduction to Wavelets.
Testing in locally conic models
Generalized functions.
On the problem of applying AIC to determine the structure of a layered feed-forward neural network
A Failure of likelihood asymptotics for normal mixtures.
Mutual information
Resolution of singularities of an algebraic variety over a field of characteristic zero.
A statistical approaches to learning and generalization in layered neural networks.
Bayesian interpolation.
On the stochastic complexity of learning realizable and unrealizable rules.
An integral representation with ridge functions and approximation bounds of three-layered network
Bounds for predictive errors in the statistical mechanics of supervised learning.
Stochastic complexity and modeling.
On zeta functions associated with prehomogeneous vector space.
A optimization method of layered neural networks based on the modified information criterion.
On the essential di
On the generalization error by a layered statistical model with Bayesian estimation.
Algebraic analysis for non-regular learning machines
Neural Computation
Probabilistic design of layered neural networks based on their unified framework.
Learning in artificial neural networks: a statistical perspective.
Neural Computation
A decision-theoretic extension of stochastic complexity and its applications to learning
--TR
Bayesian interpolation
Four types of learning curves
A universal theorem on learning curves
An introduction to wavelets
Statistical theory of learning curves under entropic loss criterion
Approximation and Estimation Bounds for Artificial Neural Networks
On the Stochastic Complexity of Learning Realizable and Unrealizable Rules
A regularity condition of the information matrix of a multilayer perceptron network
An integral representation of functions using three-layered networks and their approximation bounds
Algebraic Analysis for Singular Statistical Estimation
--CTR
Miki Aoyagi , Sumio Watanabe, Stochastic complexities of reduced rank regression in Bayesian estimation, Neural Networks, v.18 n.7, p.924-933, September 2005
Keisuke Yamazaki , Sumio Watanabe, Singularities in mixture models and upper bounds of stochastic complexity, Neural Networks, v.16 n.7, p.1029-1038, September
Sumio Watanabe , Shun-ichi Amari, Learning coefficients of layered models when the true distribution mismatches the singularities, Neural Computation, v.15 n.5, p.1013-1033, May
Shun-Ichi Amari , Hiroyuki Nakahara, Difficulty of Singularity in Population Coding, Neural Computation, v.17 n.4, p.839-858, April 2005
Haikun Wei , Jun Zhang , Florent Cousseau , Tomoko Ozeki , Shun-ichi Amari, Dynamics of learning near singularities in layered networks, Neural Computation, v.20 n.3, p.813-843, March 2008
Shun-Ichi Amari , Hyeyoung Park , Tomoko Ozeki, Singularities Affect Dynamics of Learning in Neuromanifolds, Neural Computation, v.18 n.5, p.1007-1065, May 2006 | resolution of singularities;generalization error;stochastic complexity;asymptotic expansion;algebraic geometry;non-identifiable model |
607896 | A Comparison of Static Analysis and Evolutionary Testing for the Verification of Timing Constraints. | This paper contrasts two methods to verify timing constraints of real-time applications. The method of static analysis predicts the worst-case and best-case execution times of a task's code by analyzing execution paths and simulating processor characteristics without ever executing the program or requiring the program's input. Evolutionary testing is an iterative testing procedure, which approximates the extreme execution times within several generations. By executing the test object dynamically and measuring the execution times, the inputs are guided, yielding gradually tighter predictions of the extreme execution times. We examined both approaches on a number of real world examples. The results show that static analysis and evolutionary testing are complementary methods, which together provide upper and lower bounds for both worst-case and best-case execution times. | Introduction
For real-time systems the correct system functionality
depends on their logical correctness as well as on their temporal
correctness. Accordingly, the verification of the temporal
behavior is an important activity for the development
of real-time systems.
The temporal behavior is generally examined by performing
a schedulability analysis to ensure that a task's execution
can finish within specified deadlines. The models for
schedulability analysis are commonly based on the assumption
that the worst-case execution time (WCET) is known.
Specifically, the models assume that the WCET must not
exceed the task's deadline. The best-case execution time
(BCET) may also be used to predict system utilization or
ensure that minimum sampling intervals are met.
Techniques of static analysis (SA) can be used in the
course of system design in order to assess the execution
times of planned tasks as pre-condition for schedulability
analysis. Static timing analysis constitutes an analytical
method to determine bounds on the WCET and BCET of
an application. SA simulates the timing behavior at a cycle
level for hardware concepts such as caches and pipelines
of a given processor. The approach discussed in this paper
uses the method of Static Cache Simulation followed by
Path Analysis within a timing analyzer. Timing estimates
are calculated without knowledge of the input and without
executing the actual application.
Dynamic testing is one of the most important analytical
methods for assuring the quality of real-time systems. It
serves for the verification as well as the validation of sys-
tems. An investigation of existing test methods shows that
they mostly concentrate on testing the logical correctness.
There is a lack of support for testing the temporal system
behavior. For that reason, we developed a new approach to
test the temporal behavior of real-time systems: Evolutionary
Testing (ET). ET searches automatically for test data,
which produces extreme execution times in order to check
if the timing constraints specified for the system are vio-
lated. This search is performed by means of evolutionary
computation.
Although SA and ET are usually applied in different
phases of system development, both procedures aim at estimating
the shortest and longest execution times for a sys-
tem, which makes a comparison of these two methods very
interesting. Both approaches are compared in this paper
with the help of several examples.
Chapter 2 offers a general overview of related work on
SA as well as on testing. The third chapter describes the
tool we employ for SA. Afterwards, chapter 4 introduces
ET. Both approaches have been used to determine the minimum
and maximum run times of different systems. Chapter
5 summarizes the obtained results. These are discussed in
chapter 6. It will be seen that a combination of SA and ET
makes a reliable definition of extreme run times possible.
The most important statements are summarized in chapter 7
that also includes a short outlook on future work.
2. Related Work
This section presents an overview of published work in
timing analysis for real-time systems followed by a discussion
of previous work on testing methods for real-time systems
2.1. Timing Analysis of Real-Time Systems
Bounding the WCET of programs is a difficult task. Due
to the undecidability of the halting problem, static WCET
analysis is subject to constraints on the use of programming
language constructs and on the underlying operating
system. For instance, an upper bound on the number of
loop iterations has to be known, indirect calls should not
be used, and memory should not be allocated dynamically
[25]. Often, recursive functions are also not allowed, although
there exist outlines on treating bounded recursion
similar to bounded loops [19]. Recent research in the area
of predicting the WCET of programs has made a number
of advances. Conventional methods for static analysis
have been extended from unoptimized programs on simple
CISC processors to optimized programs on pipelined RISC
processors, and from uncached architectures to instruction
caches [1, 16] and data caches [13, 16, 31].
Today, mainly three fundamental models for static timing
analysis exist. First, a source-level oriented timing
schema propagates times through a tree and handles
pipelined RISC processors with first-level split caches [23,
13]. Second, a constraint-based method models architectural
aspects, including caches, via integer linear programming
[16]. Third, our approach uses data-flow analysis to
model the cache behavior separate from pipeline simula-
tion, which is handled later in a timing analyzer via path
analysis [1, 11, 31]. The first and second approaches use
integrated analysis of caches while our approach uses separate
analysis. This allows us to deal with multi-level memory
hierarchies or unified caches. Another approach using
data-flow analysis to modeling caching originally used the
same categorizations as our approach but a different data-flow
model. Recently, the approach has been generalized to
handle a number of data-flow solutions with differing complexity
and accuracy [9].
In the presence of caches, non-preemptive scheduling
was initially assumed to prevent nondeterministic behavior
due to the absence of unpredictable context switch points.
If context switches occurred at arbitrary points (e.g., in a
preemptive system), cache invalidations may occur resulting
in unexpected cache misses when the execution of a
task is resumed later on. Hardware and software approaches
have been proposed to counter this problem but find little
use in practice due to a loss of cache performance when
caches are partitioned [14, 17]. Recently, attempts have
been made to incorporate caching into rate-monotonic analysis
and response-time analysis [5, 15], which allows WCET
predictions for non-preemptive systems to be used in the
analysis of preemptively scheduled systems. This approach
seems most promising since the information gathered for
static timing analysis can be utilized within this extended
framework for schedulability analysis.
2.2. Testing Real-Time Systems
Analytical quality assurance plays an important role in
ensuring the reliability and correctness of real-time systems,
since a number of shortcomings still exist within the development
life cycle. In practice, dynamic testing is the
most important analytical method for assuring the quality
of real-time systems. It is the only method that examines
the run-time behavior, based on an execution in the application
environment. For embedded systems, testing typically
consumes 50% of the overall development effort and budget
[8, 29]. It is one of the most complex and time-consuming
activities within the development of real-time systems [12].
In comparison with conventional software systems the examination
of additional requirements like timeliness, simul-
taneity, and predictability make the test costly, and technical
characteristics like the development in host-target environ-
ments, the strong connection with the system environment
or the frequent use of parallelism, distribution, and fault-tolerance
mechanisms complicate the test.
The aim of testing is to find existing errors in a system
and to create confidence in the system's correct behavior by
executing the test object with selected inputs. For testing
real-time systems, the logical system behavior, as well as
the temporal behavior of the systems, need to be examined
thoroughly. An investigation of existing test methods shows
that a number of proven test methods are available for examining
the logical correctness of systems [22, 10]. But there
is a lack of support for testing the temporal behavior of sys-
tems. Only very few works deal with testing the temporal
behavior of real-time systems. Braberman et al. have published
an approach that is based on modeling the system design
with a particular, formally defined SA/SD-RT notation
that is translated into high-level timed Petri nets [4]. Out of
this formal model a symbolic representation of the temporal
behavior is formed, the time reachability tree. Each path
from the root of the tree to its leaves represents a potential test case.

[Figure 1. Framework for Timing Predictions: the compiler translates the source files and emits control-flow information; the static cache simulator combines it with I/D-cache configurations and virtual address information from the address calculator to produce I/D-caching categorizations; the timing analyzer uses these categorizations, machine-dependent information, and user timing requests to produce timing predictions.]

The tree already becomes very extensive for
small programs so that the number of test cases must be restricted
according to different criteria. Results of any practical
trial testing of this approach are not reported. Mandrioli
et al. developed an interactive tool that enables the
generation of test cases for real-time systems from formal
specifications written in TRIO [18]. The language TRIO
extends classical temporal logic to deal explicitly with time
measures. At present, however, the applicability of the tool
is restricted to small systems whose properties are specified
through simple TRIO formulas. Clarke and Lee [6] as
well as Dasarathy [7] describe further techniques for verifying
timing constraints using timed process algebra or finite-state
machines.
All of these approaches demand the use of formal specification
techniques. Since the use of formal methods has
not yet been generally adopted in industrial practice due
to the great expenditure connected with it and the lack of
maturity of the existing tools, the testing approaches mentioned
have not spread far in industry, particularly since the
suitability of these approaches in many cases remains restricted
to small systems. Accordingly, there are no specialized
methods available at the moment that are suited
for testing the temporal behavior of real-time systems. For
that reason, testers usually go back to conventional test procedures
developed originally for the examination of logical
correctness, e.g., systematic black-box or white-box oriented
test methods. Since the temporal behavior of complex
systems is hard to comprehend and can therefore be examined
only insufficiently with traditional test methods, existing
test procedures must be supplemented by new methods,
which concentrate on determining whether or not the system
violates its specified timing constraints. Therefore we
examine the applicability of evolutionary testing (ET) to test
the temporal behavior of real-time systems.
3. Static Analysis (SA)
Our framework of WCET prediction uses a set of tools
as depicted in Figure 1. An optimizing compiler has been
modified to emit control-flow information, data informa-
tion, and the calling structure of functions in addition to regular
target code generation. Up to now, the research compiler
VPCC/VPO [3] performed this task. We are currently
integrating Gnat/Gcc [26, 28] into this environment.
A static cache simulator uses the control-flow information
and calling structure in conjunction with the cache configuration
to produce instruction and data categorizations,
which describe the caching behavior of each instruction and
data reference. We currently use a separate analyzer for instruction
and data caches since data references require separate
preprocessing via an address calculator. Current work
also includes a single analyzer for unified caches and the
handling of secondary caches [20]. The timing analyzer
uses these categorizations and the control-flow information
to perform a path analysis of the program. It then predicts
the BCET and WCET for portions of the program or the
entire program, depending on user requests.
In the experiments described in section 5, we chose an architecture
without caches for reasons also explained in section
5. Thus, only the portion of the toolset shaded grey in
Figure 1 was used in these experiments. Next, we describe
the interaction of the various tools of the entire framework.
The framework can be retargeted by changing the cache
configurations and porting the machine description. How-
ever, the largest retargeting overhead constitutes a port of
the compiler. Thus, our current efforts to integrate Gnat/Gcc
into the framework will greatly improve portability.
3.1. Static Cache Simulation
Static cache simulation provides the means to predict the
caching behavior of the instructions and data references of
a program/task (see Static Cache Simulator in Figure 1).
The addresses of instruction references are obtained from
the control-flow information emitted by the compiler. Addresses
of data references are calculated by the Address Calculator
(see Figure 1) from locating data declarations for
global data and obtaining offsets for relative addresses of
local data, which are translated into virtual addresses by
taking the context of a process into account. For both instruction
and data references, the caching behavior is distinguished
by the categories described in Table 1.

Table 1. Categorizations for Cache References

  Category      1st reference   consecutive ref.
  Always-hit    hit             hit
  Always-miss   miss            miss
  First-hit     hit             miss
  First-miss    miss            hit

For each
category, the cache behavior of the first reference and consecutive
references is distinguished. Consecutive references
are strictly due to loops since we distinguish function invocations
by their call sites. For data caches, an additional
category, called calculated, denotes the total number of data
cache misses out of all references within a loop for a memory
reference.
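The four categories of Table 1 amount to a simple mapping from the behavior of the first reference and of the consecutive references to a label. The small Python helper below only illustrates that mapping; it is not part of the static cache simulator.

    def cache_category(first_ref_hits, consecutive_refs_hit):
        """Map the (first reference, consecutive references) behavior of an
        instruction within one loop level to its Table 1 category."""
        if first_ref_hits and consecutive_refs_hit:
            return "always-hit"
        if first_ref_hits:
            return "first-hit"      # hits once, may be evicted inside the loop
        if consecutive_refs_hit:
            return "first-miss"     # loaded on the first iteration, then stays cached
        return "always-miss"        # conservative worst-case assumption

    print(cache_category(True, False))    # first-hit
    print(cache_category(False, True))    # first-miss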
A program may consist of a number of loops, possibly
nested and distributed over several functions. For each loop
level, an instruction receives a distinct categorization. The
timing analyzer can then derive tight bounds of execution
time by inspecting the categorizations for each loop level.
Since instruction categorizations have to be determined
by inter-procedural analysis of the entire program, the call
graph of the program has to be analyzed. The method of
static cache analysis traces the origin of calls within the call
graph by distinguishing function instances. Since instruction
categorizations for a function are specified for each
function instance, the timing analyzer can interpret different
caching behaviors depending on the calling sequence to
yield tighter WCET predictions.
The static cache simulator determines the categories of
an instruction based on a novel view of cache memories, using
a variation of iterative inter-procedural data-flow analysis
(DFA). The following information results from DFA:
- The abstract cache state describes which program
lines that map into certain cache blocks may potentially
be cached within the control flow.
- The linear cache state contains the analog information
in the (hypothetical) absence of loops.
- The post-dominator set describes the program lines
certain to still be reached within the control flow.
The above data-flow information can also be reduced
with respect to certain subsets, in particular to check if
the information is available within a certain loop level. A
formal framework for this analysis for instruction and data
caches is described in [31]. The data-flow information provides
the means to derive the above categories, for example
for set-associative instruction caches with multiple levels of
associativity. The following categories are derived for each
loop level of an instruction for the worst-case cache behavior
Always-hit: (on spatial locality within the program line) or
((the instruction is in cache in the absence of loops)
and ((there are no conflicting instructions in the cache
state) or (all conflicts fit into the remaining associativity
levels))).
First-hit: (the instruction was a first-hit for inner loops) or
(it is potentially cached, even without loops and even
for all loop preheaders, it is always executed in the
loop, not all conflicts fit into the remaining associativity
levels but conflicts within the loop fit into the
remaining associativity levels for the loop headers,
even when disregarding loops).
First-miss: the instruction was a first-miss for inner loops,
it is potentially cached, conflicts do not fit into the
remaining associativity levels but the conflicts within
the loop do.
Always-miss: This is the conservative assumption for the
prediction of worst-case execution time when none of
the above conditions apply.
A loop header is an entry block into the loop with at
least one predecessor block outside the loop, called the pre-
header, and at least one predecessor block inside the loop.
3.2. Timing Analysis
The timing analyzer (see Figure 1) calculates the BCET
and WCET by constructing a timing tree, traversing paths
within each loop level, and propagating the timing information
bottom-up within the tree. During the traversal, the timing
analyzer has to simulate hardware characteristics (e.g.,
pipelining) and the instruction categorizations have to be interpreted
The timing analyzer does not have to take the cache configuration
into account. Instead, the instruction categoriza-
tions, as introduced above, are used to interpret the caching
behavior. The approach of splitting cache analysis via static
cache simulation and timing analysis makes the caching aspects
completely transparent to the timing analyzer. Solely
based on the instruction categorizations, the timing analyzer
can derive the WCET by propagating timing predictions
bottom-up within the timing tree.
The timing tree represents the calling structure and the
loop structure of the entire program. As seen in the context
of the static cache simulator, functions are distinguished by
their calling paths into function instances. This allows a
tighter prediction of the WCET due to the enhanced information
about the calling context. Each function instance is
regarded as a loop level (with one iteration) and is represented
as a node in the timing tree. Regular loops within
the program are represented as child nodes of its surrounding
function instance (outer-most loops) or as child nodes
of another loop that they are nested in.
The timing analyzer determines the BCET and WCET
in a bottom-up traversal of the tree. For any node, all possible
paths (sequences of basic blocks) within the current
loop level have to be analyzed, which will be described in
more detail for the WCET. When a child node is encountered
along a path, its WCET is already calculated and can
simply be added to the WCET of the current path, sometimes
with small adjustments. Adjustments are necessary
for transitions from first-misses to first-misses and always-
misses to first-hits between loop levels [1]. For a loop with
n iterations, a fix-point algorithm is used to determine the
cumulative WCET of the loop along a sequence of (possibly
different) paths. Once a pattern of longest paths has been
established, the remaining iterations can be calculated by a
closed formula. In practice, most loops have one longest
path. Thus, the first iteration is needed to adjust the WCET
of child loops along the path, and the second iteration represents
the fix-point time for all remaining iterations. The
scope of the WCET analysis can thus be limited to one loop
level at a time, making timing analysis very efficient compared
to an exhaustive analysis of all permutations of paths
within a program. See [1, 11] for a more detailed description
of the timing analyzer and an analog description for the
BCET.
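The bottom-up propagation can be sketched in a few lines. The Python fragment below is a strong simplification written for this presentation, not the timing analyzer: it assumes that the longest path time of the first iteration, the fixed-point path time of the remaining iterations, and the iteration bound of every node are already known, and it omits the first-miss/first-hit adjustments between loop levels.

    class LoopNode:
        """A node of the timing tree; a function instance is a loop with one
        iteration.  Path times are per iteration of this node's own blocks."""
        def __init__(self, first_iter_cycles, fixpoint_cycles, n_iterations, children=()):
            self.first_iter_cycles = first_iter_cycles
            self.fixpoint_cycles = fixpoint_cycles
            self.n_iterations = n_iterations
            self.children = list(children)

    def wcet(node):
        # Child loops are analyzed first; their WCET is simply added to the
        # enclosing path times.
        child = sum(wcet(c) for c in node.children)
        first = node.first_iter_cycles + child
        rest = node.fixpoint_cycles + child
        return first + (node.n_iterations - 1) * rest

    inner = LoopNode(first_iter_cycles=130, fixpoint_cycles=100, n_iterations=50)
    main = LoopNode(first_iter_cycles=400, fixpoint_cycles=0, n_iterations=1,
                    children=[inner])
    print(wcet(main))   # 400 + 130 + 49 * 100 = 5430 cycles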
4. Evolutionary Testing (ET)
Evolutionary testing is a new testing approach, which
combines testing with evolutionary computation. In first experiments
the application of ET for examining the temporal
behavior of real-time systems achieved promising results.
In ten experiments performed ET always achieved better results
compared to random testing with respect to effectiveness
as well as efficiency. More extreme execution times
were found by means of evolutionary computation with a
less or equal testing effort than for random testing (see [30]
for more details).
4.1. A Brief Introduction to Evolutionary Computation
Evolutionary algorithms represent a class of adaptive
search techniques and procedures based on the processes of
natural genetics and Darwin's theory of evolution. They are
characterized by an iterative procedure and work in parallel
on a number of potential solutions, the population of indi-
viduals. In every individual, permissible solution values for
the variables of the optimization problem are coded. Evolutionary
algorithms are particularly suited for problems involving
large numbers of variables and complex input do-
mains. Even for non-linear and poorly understood search
spaces evolutionary algorithms have been used successfully
because of their robustness.
The evolutionary search and optimization process is
based on three fundamental principles: selection, recom-
bination, and mutation. The concept of evolutionary algorithms
is to evolve successive generations of increasingly
better combinations of those parameters, which significantly
affect the overall performance of a design. Starting
with a selection of good individuals, the evolutionary
algorithm achieves the optimum solution by the random exchange
of information between these increasingly fit samples
(recombination) and the introduction of a probability
of independent random change (mutation). The adaptation
of the evolutionary algorithm is achieved by the selection
and reinsertion procedures since these are based on fitness.
The fitness-value is a numerical value, which expresses the
performance of an individual with regard to the current op-
timum. The notion of fitness is essential to the application
of evolutionary algorithms; the degree of success in using
them may depend critically on the definition of a fitness
function that changes neither too rapidly nor too slowly with
the design parameters. Figure 2 gives an overview of a typical
procedure of evolutionary optimization.
[Figure 2. The Process of Evolutionary Computation: initialization and evaluation of a start population, followed by a loop of selection, recombination, mutation, evaluation, and reinsertion until the optimization criteria are met.]
At first, a population of guesses to the solution of a problem
is initialized, usually at random. Each individual in the
population is evaluated by calculating its fitness. The results
obtained will range from very poor to good. The remainder
of the algorithm is iterated until the optimum is achieved or
another stopping condition is fulfilled. Pairs of individuals
are selected from the population and are combined in some
way to produce a new guess in an analogous way to biological
reproduction. Selection and combination algorithms are
numerous and vary. A survey can be found in [24].
After recombination the offspring undergoes mutation.
Mutation is the occasional random change of a value, which
alters some features with unpredictable consequences. Mutation
is like a random walk through the search space and is
used to maintain diversity in the population and to keep the
population from prematurely converging on one local solu-
tion. Besides, mutation creates genetic material that may
not be present in the current population [27]. Afterwards,
the new individuals are evaluated for their fitness and replace
those individuals of the original population who have
lower fitness values (reinsertion). Thereby a new population
of individuals develops, which consists of individuals from
the previous generation and newly produced individuals. If
the stopping condition remains unfulfilled, the process described
will be repeated.
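The loop of Figure 2 can be summarized in a few lines of Python. The sketch below uses made-up operators and parameters (real-valued individuals, random pairing, single-point recombination, Gaussian mutation, truncation reinsertion) and is not the toolbox used later in this paper; it only illustrates the overall control flow of an evolutionary algorithm.

    import random

    def evolve(fitness, new_individual, pop_size=50, generations=100, gap=0.9, mut_prob=0.1):
        scored = [(fitness(ind), ind) for ind in (new_individual() for _ in range(pop_size))]
        for _ in range(generations):
            population = [ind for _, ind in scored]
            offspring = []
            while len(offspring) < int(gap * pop_size):
                p1, p2 = random.sample(population, 2)                   # selection
                cut = random.randrange(1, len(p1))
                child = p1[:cut] + p2[cut:]                             # recombination
                child = [g + random.gauss(0, 0.1) if random.random() < mut_prob else g
                         for g in child]                                # mutation
                offspring.append(child)
            scored += [(fitness(ind), ind) for ind in offspring]        # evaluation
            scored = sorted(scored, key=lambda t: t[0], reverse=True)[:pop_size]  # reinsertion
        return scored[0]

    best_fit, best_ind = evolve(lambda ind: -sum(g * g for g in ind),
                                lambda: [random.uniform(-5, 5) for _ in range(4)])
    print(round(best_fit, 4))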
4.2. Applying Evolutionary Computation to Testing
Temporal System Behavior
The major objective of testing is to find errors. As described
in section 2, real-time systems are tested for their
logical correctness by standard testing techniques. The fact
that the correctness of real-time systems depends not only
on the logical results of computations but also on providing
the results at the right time adds an extra dimension to
the verification and validation of such systems, namely that
their temporal correctness must be checked. The temporal
behavior of real-time systems is defective when such computations
of input situations exist that violate the specified
timing constraints. Normally, a violation means that outputs
are produced too early or their computation takes too
long. The task of the tester therefore is to find the input situations
with the shortest or longest execution times to check
if they produce a temporal error. This search for the shortest
and longest execution times can be regarded as an optimization
problem to which evolutionary computation seems an
appropriate solution.
Evolutionary computation enables a totally automated
search for extreme execution times. When using evolutionary
optimization for determining the shortest and longest
execution times, each individual of the population represents
a test datum with which the test object is executed. In
our experiments the initial population is generated at ran-
dom. If test data has been obtained by a systematic test,
in principle, these could also be used as initial population.
Thus, the evolutionary approach benefits from the tester's
knowledge of the system under test. For every test datum,
the execution time is measured. The execution time determines
the fitness of the test datum. If one searches for the
WCET, test data with long execution times obtain high fitness
values. Conversely, when searching for the BCET, individuals
with short execution times obtain high fitness val-
ues. Members of the population are selected with regard to
their fitness and subjected to combination and mutation to
generate new test data.
By means of selection, it is decided what test data are
chosen for reproduction. In order to retain the diversity of
the population, and to avoid a rapid convergence against
local optima, not only the fittest individuals are selected,
but also those individuals with low fitness values obtain a
chance of recombination. In our experiments stochastic universal
sampling [2] was used as selection strategy. For the
recombination of test data discrete recombination [21] was
applied, a simple exchange of variable values between individuals
(see Figure 3). The probability of mutating an
individual's variables was set to be inversely proportional
to its number of variables. The more dimensions one individual
has, the smaller is the mutation probability for each
single variable. This mutation rate has been used with success
in a multitude of experiments [24, 27]. It is checked
if the generated test data are in the input domain of the test
object. Then, the individuals produced are also evaluated by
executing the test object with them. Afterwards, the new individuals
are united with the previous generation to form a
new population according to the reinsertion procedures laid
down.
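For concreteness, discrete recombination and the mutation rate described above might look as follows in Python. This is an illustrative sketch; the variable ranges and random draws are placeholders rather than the settings of the tool that was actually used.

    import random

    def discrete_recombination(parent1, parent2):
        """Each variable of the child is copied from one of the two parents,
        chosen at random, a common form of discrete recombination."""
        return [random.choice(pair) for pair in zip(parent1, parent2)]

    def mutate(individual, lower, upper):
        """The mutation probability per variable is inversely proportional to
        the number of variables, so on average one variable is changed."""
        rate = 1.0 / len(individual)
        return [random.randint(lower, upper) if random.random() < rate else gene
                for gene in individual]

    p1 = [3, 17, 250, 4000, 12, 0, 99, 1]
    p2 = [9, 20, 100, 2500, 80, 1, 60, 7]
    print(mutate(discrete_recombination(p1, p2), lower=0, upper=4095))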
In our experiments we applied a reinsertion strategy with
a generation gap of 90%. The next generation therefore contained
more offspring than parents since 90% of a popula-
tion's individuals were replaced by offspring. This process
repeats itself, starting with selection, until a given stopping
condition is reached, e.g., a certain number of generations is
reached or an execution time is found, which is outside the
specified timing constraints.

[Figure 3. Discrete Recombination with four Randomly Defined Crossover Points: two parent individuals exchange segments of their variable values at the crossover points to form offspring.]

In this case, a temporal error is
detected. If all the times found meet the timing constraints
specified for the system under test, confidence in the temporal
correctness of the system is substantiated. In all experiments
evolutionary testing was stopped after a predefined
number of generations which we have specified according
to the complexity of the test objects with respect to their
number of input parameters and lines of code (LOC).
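Stochastic universal sampling, the selection strategy used in these experiments, places equally spaced pointers over the cumulative fitness. The Python sketch below illustrates the idea under simplifying assumptions (strictly positive fitness values, selection probability proportional to fitness); it is not the toolbox implementation.

    import random

    def stochastic_universal_sampling(fitnesses, n_select):
        """Return the indices selected by n_select equally spaced pointers
        placed over the cumulative fitness."""
        total = sum(fitnesses)
        step = total / n_select
        start = random.uniform(0, step)
        selected, cumulative, idx = [], fitnesses[0], 0
        for k in range(n_select):
            pointer = start + k * step
            while pointer > cumulative:
                idx += 1
                cumulative += fitnesses[idx]
            selected.append(idx)
        return selected

    print(stochastic_universal_sampling([10.0, 5.0, 1.0, 30.0, 4.0], n_select=4))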
5. Verifying Timing Constraints: SA vs. ET
We used SA and ET in five experiments to determine the
BCET and WCET of different systems. Except for the last
two examples described in this section, all programs tested
come from typical real-time systems used in practice. The
test programs were chosen since three of them cover different
areas within industrial applications of the Daimler Benz
company, and the remaining two programs serve as a reference
to related work, where these had been used as examples
for general-purpose algorithms within real-time appli-
cations. The test programs also cover a wide range of real-time
applications within graphics, transportation, defense,
numerical analysis and standard algorithms. Of course, the
results are dependent on the hardware/software platform
and are generally not directly transferable from one to another
since the processor speed and the compiler used directly
affect the temporal behavior. All the experiments that
are described in the following were carried out on a SPARCstation
IPX running under Solaris 2.3 with 40 MHz. The execution
times in processor cycles were derived by SA and
ET.
We chose a SPARC IPX platform since this architecture
does not have any caches. At the current stage of devel-
opment, the timing analyzer for SA only supports either instruction
cache categorizations or data cache categorization.
We are working on an extension to support both categorizations
at the same time. For ET the execution times were
measured using the performance measurement tool Quan-
tify, available from Rational. Quantify performs cycle-level
timing through object code instrumentation. Thus, overheads
of the operating system were ruled out, and the execution
times reported were the same for repeated runs with
identical parameters. However, Quantify does not take the
effects of caching into account. Thus, we needed an uncached
architecture to perform our experiments.
The SA approach utilized the pipeline simulation of the
timing analyzer for the experiments. The instruction execution
was simulated for a five-stage pipeline with a through-put
of one instruction per cycle for most cases, as commonly
found in RISC architectures. Load and store instructions
caused a stall of two cycles to access memory. Floating
point instructions resulted in stalls with varying durations,
specified for the best case and worst case of such operations.
The timing analyzer calculates a conservative estimate of
the number of cycles required for an execution based on
path analysis. For the worst case, the estimate is guaranteed
to be greater than or equal to the actual WCET. Conversely,
the estimate is less than or equal to the actual BCET.
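To make the simulated cost model concrete, a deliberately coarse cycle estimator could look as follows. The stall values for loads and stores mirror the description above; the floating-point figures are purely illustrative, and the real timing analyzer models the five-stage pipeline at the level of individual stages rather than with a per-instruction table.

    # Extra stall cycles charged on top of the one-cycle base throughput.
    STALLS = {
        "alu": 0,
        "load": 2, "store": 2,        # two stall cycles to access memory
        "fp_best": 1, "fp_worst": 6,  # floating point: duration varies by case
    }

    def path_cycles(instructions):
        """Estimate the cycles of one execution path: base throughput plus stalls."""
        return sum(1 + STALLS[kind] for kind in instructions)

    path = ["load", "alu", "alu", "fp_worst", "store", "alu"]
    print(path_cycles(path))   # 6 instructions + 2 + 6 + 2 stall cycles = 16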
The library of evolutionary algorithms, which we applied
for ET, was a Matlab-based toolbox developed at the
Daimler-Benz laboratories by Hartmut Pohlheim. It provides
a multitude of different evolutionary operators for se-
lection, recombination, mutation, and reinsertion [24]. For
each experiment, the evolutionary algorithms were applied
twice; first, to find the longest execution time, and then the
shortest. The fitness was set equal to either the execution
time measured in processor cycles for the longest path or
its reciprocal for the shortest path. The population size was
varied for the experiments according to the complexity of
the test objects. Pairs of test data were chosen at random and
combined using different operators like discrete recombination
or double crossover depending on the representation of
the individuals. The mutation probability was set inversely
proportional to the length of the individuals. There is no
means of deciding when an optimum path has been found,
and ET was usually allowed to continue for 100 generations
before it was stopped.
5.1. Test Objects
The first example is a simple computer graphics function
in C, which checks whether or not a line is covered
by a given rectangle with its sides parallel to the axes of the
co-ordinate system. The function has two input parameters:
the line given by the co-ordinates of both line end points,
and the rectangle, which is described using the position of
its upper left corner, its width and its height. This amounts
to eight atomic input variables altogether. The function has
107 LOC and contains a total of 37 statements in 16 program
branches.
The second application comes from the field of railroad
control technology. It concerns a safety-critical application
that detects discrepancies between the separate channels in
a redundant system. It has 389 LOC and 512 different input
parameters: 16 binary variables, 384 variables ranging from
0 to 255 and 112 variables with a range of each from 0 to
4095.
The third application concerned comes from the field of
defense electronics. It is an application that extracts characteristics
from images. A picture matrix is analyzed with
regard to its brightness, and the signal-to-noise ratio of its
brightest point and its background is established. The defense
electronics program has 879 LOC and 843 integer
input parameters. The first two input parameters represent
the position of a pixel in a window and lie within the
range 1.1200 and 1.287 respectively. The remaining 841
Program Graphics Railroad Defense Matrix Sort
Method best worst best worst best worst best worst best worst
SA 309 2,602 389 23,466 848 71,350 8,411,378 15,357,471 16,003 24,469,014
actual N/A N/A N/A N/A N/A N/A 10,315,619 13,190,619 20,998 11,872,718
Table
2. Execution Times [cycles]
for Test Programs
parameters define an array of 29 by 29 pixels representing
a graphical input located around the specified position;
each integer describes the pixel color and lies in the range
0 to 4095.
The fourth sample program multiplies two integer matrices
of size 50 by 50 and stores the result in a third matrix.
Only integer parameters in the range between 0 and 8095
are permissible as elements of the matrices. Matrix operations
are typical for embedded image processing applications
The fifth test program performs a sort of an array of 500
integer numbers using the bubblesort algorithm. Arbitrary
integer values can be sorted. Sorting operations are common
for countless applications within and beyond the area
of real-time systems.
5.2. Experiments
For all test objects mentioned the shortest (best) and
longest (worst) execution times were determined. The results
of the experiments are summarized in Table 2 for the
best case and worst case. The first row depicts the results
for static analysis and the last row shows the measurements
for evolutionary testing. The middle row shows the actual
shortest and longest execution times for the multiplication
of matrices and the bubblesort algorithm that were easily
determined by applying a systematic test. Notice that the
actual execution times could only be determined with certainty
in the absence of caching due to hardware complexities
[31]. The other examples of actual real-time systems
are so complex with regard to their functionality that their
extreme execution times cannot be definitely determined by
a systematic test. For applications used in practice this is
the normal case.
For the computer graphics example SA calculated a
lower bound of 309 processor cycles for the shortest execution
time and an upper bound of 2602 cycles for the longest
execution time. ET discovered a shortest time of 457 cy-
cles, and a longest time of 2176 cycles within 24 genera-
tions. The population size was set to 50. Generating 76
additional generations with 3800 test data sets did
not yield any longer or shorter execution times. Thus
the shortest execution times determined vary by 32%, the
longest by 16%.
For the railroad technology example the population size
for ET was increased to 100 because of the complex input
interface of the test object with its more than 500 pa-
rameters. Starting from the first generation a continuous
improvement up to the 100th generation could be observed
for ET. This suggests that ET would find even more extreme
execution times if the number of generations was in-
creased. The shortest execution time found by ET so far
(508 cycles) is nearly 24% above the 389 cycles computed
by SA. The longest execution time determined (22626 cy-
cles) varies only by 4% from the one calculated by SA
(23466 cycles). Therefore, the worst-case execution time
of this example can already be defined very accurately after
100 generations. It can be guaranteed that the maximum
execution time of this task lies between 22626 cycles and
23466 cycles.
The defense electronics program has 843 input parame-
ters. Therefore, the population size in this experiment was
also set to 100. For this example, evolutionary algorithms
were used to generate pictures surrounding a given position.
The number of generations was increased to 300 because of
the large range of the variables and the large number of input
parameters. Again the longest execution time increased
steadily with each new generation and asymptoted towards
the current maximum of 35226 cycles when the run was terminated
after 300 generations. The fastest execution time
was found to be 9095 cycles after 300 generations. Compared to
the results achieved by SA significant differences could be
observed. The estimates for the extreme execution times
calculated by SA are 848 cycles and 71350 cycles. A closer
analysis of possible causes for these deviations led to the
possibility that certain instructions were assumed to take
different times for their pipeline execution. The instructions
in question are multiply and divide, which account for multiple
cycles during the execution stage. We are currently
trying to isolate these effects for the Quantify tool to allow
a proper comparison with SA.
The next example in the table is the multiplication of
matrices. Due to its functional simplicity the minimum
and maximum run time can very easily be determined
by systematic testing because they represent special input
situations. The longest execution time of 13190619 cycles
results if all elements of both matrices are set to the
largest permissible value (8095). The shortest run time of
10315619 cycles results if both matrices are fully initialized
with 0. When ET is applied to the multiplication of
matrices a single individual is made up of 5000 parameters
(2*50*50). The resulting search space is by far the
largest of the examples presented here. For each generation
with 100 individuals 500000 parameter values have to
be generated. Nevertheless, the number of generations for
this example was increased to 2000. When searching for
the longest execution time, a maximum of 13007019 cycles
was found. The evolutionary algorithms had found an execution
time that lies only a good 1% below the absolute
maximum. The longest execution time that was determined
with the help of SA (15357471 cycles) exceeds the absolute
maximum by about 16%. The shortest execution time determined
by the evolutionary algorithms is 12050569 cycles,
which means a deviation of 17% compared to the actual
shortest run time. The deviation of SA is nearly similar: the
execution time of 8411378 cycles lies about 18% below the
actual value.
The last example is the bubblesort algorithm. Again the
determination of the extreme run times is very easy with
the help of a systematic test. The longest execution time
for bubblesort results from the list sorted in reverse and
amounts to 11872718 cycles. The shortest run time results,
of course, from the sorted list, which leads to an execution
time of just 20998 cycles. Once again the longest execution
time found by ET (11826117 cycles) comes close to
the actual maximum. It deviates by less than 1%. The upper
bound (24469014 cycles) for the longest execution time
that was determined by SA exceeds the actual one by more
than 100%. This overestimation is caused by a deficiency of
the algorithm that interpolates the execution time for loops.
In particular, two loops are nested with a loop counter of the
inner loop whose initial value is dependent on the counter
of the outer loop. Currently, the timing analyzer estimates
the number of iterations of the inner loop conservatively as
n, where n is the maximum number of iterations for the
outer loop. We are working on a method to handle such
loop dependencies to correctly estimate the number of iterations
for nested loops. In this case, the inner loop has about n^2/2
iterations in total. As a coarse estimate, 12234507 cycles or half the
estimated value should be calculated taking the actual loop
overhead into account, i.e., the value would be around 3%
off the actual value. Further discussions will refer to this adjusted
value. The shortest execution time of the bubblesort
algorithm is only poorly approximated by evolutionary
optimization (1464577 cycles). Although shorter run times
have been continually found over 2000 generations the results
are far from the absolute minimum. For that reason
current work focuses on a detailed analysis of the bubblesort
example and an improvement of the ET results. Also,
our main focus was to bound the WCET since this provides
the means to verify that deadlines cannot be missed, a very
important property of real-time systems. The shortest execution
time determined by SA (16003 cycles) differs by
24% from the absolute minimum.
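The bubblesort observations are easy to reproduce in miniature. The Python sketch below is an illustration rather than the instrumented C program: it counts inner-loop iterations and swaps for a sorted and for a reverse-sorted list of 500 elements, and compares the n*(n-1)/2 inner-loop iterations that a dependency-aware loop analysis would use with the conservative n*n bound discussed above.

    def bubblesort_counts(values):
        data = list(values)
        n = len(data)
        iterations = swaps = 0
        for i in range(n - 1):
            for j in range(n - 1 - i):     # inner bound depends on the outer counter
                iterations += 1
                if data[j] > data[j + 1]:
                    data[j], data[j + 1] = data[j + 1], data[j]
                    swaps += 1
        return iterations, swaps

    n = 500
    print(bubblesort_counts(range(n)))           # sorted input: 124750 iterations, 0 swaps
    print(bubblesort_counts(range(n, 0, -1)))    # reverse-sorted: 124750 iterations, 124750 swaps
    print(n * (n - 1) // 2, "iterations vs. conservative bound of", n * n)

The iteration count is identical for both inputs in this version without an early exit; the large difference in swaps is what separates the best-case from the worst-case execution time.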
6. Discussion
The measurements of the last section show that the methods
of static analysis and ET bound the actual execution
times. While SA always estimates the extreme execution
times in such a way that the actual run times possible for
the system will never exceed them, ET provides only actually
occurring execution times. For the worst case, the
estimates of SA provide an upper bound while the measurements
of ET give a lower bound on the actual time.
Conversely, SA's estimates provide a lower bound for the
best-case time while ET's measurements constitute an upper
bound.
In about half of the experiments, the actual execution
times were bounded within ±3% or better with respect to
the range of execution times determined by SA. These results
are directly applicable to schedulability analysis and
provide a high confidence about the range for the actual
WCET. In further cases, the variation between the two approaches
was about ±10%, which may still yield useful
results for schedulability analysis. We regard ±10% as a
threshold for useful results in the sense that larger deviations
between the two methods may not be accurate enough
to guarantee enough processor utilization, even though they
may be safe. For the multiplication of matrices and the defense
example larger variations were detected. This indicates
that both approaches need further investigation to improve
their precision.
The overhead for estimating the extreme execution times
differs for both approaches. ET requires the execution of a
test program over many generations with a large number of
input data, i.e., the overhead is dependent on the actual execution
times of the test object and additional delays caused
by the timing. SA requires a test overhead in the order of
seconds for the tested programs since one simulation suffices
to predict the extreme execution times, i.e., the overhead
is independent of the actual execution times. Instead,
the overhead depends on the complexity of the combined
call graph and control-flow graphs of the entire program and
roughly increases quadratically with the program size. SA
automatically yields not only timing estimates for the entire
application, but also estimates for arbitrary subroutines or
portions of the control flow. To obtain corresponding data
with ET, test objects have to be isolated.
A prerequisite for performing SA is the knowledge of the
cycle-level behavior for the target processor that has to be
supplied in configuration files. The ET approach works for
a wide range of timing methods. On one hand, hardware
timers calculating wall-clock time may be used without
knowledge of the actual hardware. This method is highly
portable but subject to interference with hardware and software
components, e.g., caches and operating systems. On
the other hand, cycle-level timing information, excluding
the instrumentation instructions, may be calculated as part
of the program execution, as seen in the above experiments.
The portability of this method is constrained by the portability
of the instrumentation tool.
In summary, the ET approach cannot provide safe timing
guarantees. It measures the actual, running system. ET is
universally applicable to arbitrary architectures and requires
knowledge about the input specification. The SA approach
yields conservative estimates that safely approximate the
actual execution times. It requires knowledge about loop
frequencies and information about the cycle-level behavior
of the actual hardware. New hardware features have to be
implemented in the simulator, which limits the portability
of SA. If hardware details are not known, only the ET approach
can be applied. If the hardware is not available yet
but the specification of the hardware has been supplied, only
the SA approach will yield results.
We regard the two methods as complementary ap-
proaches. Whenever deadlines have to be guaranteed SA
should be used to yield safe estimates of the WCET. ET may
be used additionally to bound the extreme execution times
more precisely. Furthermore, ET may suffice if missed
deadlines can be tolerated sporadically. The WCET for
schedulability analysis should also be derived from SA for
hard real-time environments where soft real-time environments
may choose between SA and ET, or even the mean
between SA and ET. In general, every real-time system
should be tested for its logical as well as temporal correct-
ness. Independent of the methods used during system de-
sign, we recommend to apply ET to validate the temporal
correctness of systems. The confidence in the application is
increased since ET checks for timing violations over many
input configurations.
7. Conclusion and Future Work
This work introduced two methods to verify timing constraints
of actual real-time applications, namely the method
of static analysis and the method of evolutionary testing.
Both methods were implemented and evaluated for a number
of test programs with respect to their prediction of the
worst-case and best-case execution times. The results show
that the methods are complementary in the sense that they
bound the actual extreme execution times from opposite
ends.
For most of the investigated programs the actual execution
times for the best and worst cases could be guaranteed
with fairly high precision. They are within ±10% of the
mean between the results of both methods relative to the
possible execution times determined by static analysis. Less
precise results were obtained for a few experiments, indicating
that further improvements for both approaches are necessary
to ensure their general applicability. Current work on
static analysis includes extensions to handle loop dependencies
and integrate the Gnat/Gcc compiler. Current work on
evolutionary testing focuses on the development of robust
algorithms that reduce the probability of getting caught in
local optima. Furthermore, suitable stopping criteria to terminate
the test are to be defined. If the program code is
available the degree of coverage achieved during evolutionary
testing and the observation of the program paths executed
could be an interesting aspect for deciding when to
stop the test. The most promising criteria seem to be branch
and path coverage because of the strong correlation between
the program's control flow, the execution of its statements
and the resulting execution times. The coverage reached
will also be used to assess the test quality when comparing
evolutionary testing with systematic functional testing,
another area where we want to intensify our research in
the future, in order to estimate thoroughly the efficiency of
different testing approaches for the examination of real-time
systems' temporal behavior.
In comparison, evolutionary testing should be more
portable but requires extensive experimentation over many
program executions. Static analysis has a lower overhead
for the simulation process but requires detailed information
of hardware characteristics and extensions to the simulation
models for new architectural features. We recommend that
the worst-case execution time for schedulability analysis be
derived from static analysis for hard real-time environments
whereas soft real-time environments may choose between static
analysis and evolutionary testing. Furthermore, we suggest
that evolutionary testing be used to increase the confidence
in the temporal correctness of the actual, running
system.
--R
Bounding worst-case instruction cache performance
Reducing bias and inefficiency in the selection algorithm.
A portable global optimizer and linker.
Testing timing behavior of real-time software
Adding instruction cache effect to schedulability analysis of preemptive real-time systems
Testing real-time constraints in a process algebraic setting
Timing constraints of real-time systems: Constructs for expressing them
Testing large
Applying compiler techiniques to cache behavior prediction.
Integrating the timing analysis of pipelining and instruction caching.
Efficient worst case timing analysis of data caching.
Analysis of cache-related preemption delay in fixed-priority preemptive scheduling
Cache modeling for real-time software: Beyond direct mapped instruction caches
Functional test case generation for real-time systems
Static Cache Simulation and its Applications.
Timing predictions for multi-level caches
The Art of Software Testing.
Predicting program execution times by analyzing static and dynamic program paths.
Genetic and evolutionary algorithm toolbox for use with matlab - documentation
Calculating the maximum execution time of real-time programs
Free Software Foundation
The Automatic Generation of Software Test Data Using Genetic Algorithms.
Testing real-time systems using genetic algorithms
--TR
--CTR
Sibin Mohan, Worst-case execution time analysis of security policies for deeply embedded real-time systems, ACM SIGBED Review, v.5 n.1, p.1-2, January 2008
Mark Harman , Joachim Wegener, Getting Results from Search-Based Approaches to Software Engineering, Proceedings of the 26th International Conference on Software Engineering, p.728-729, May 23-28, 2004
Kiran Seth , Aravindh Anantaraman , Frank Mueller , Eric Rotenberg, FAST: Frequency-aware static timing analysis, ACM Transactions on Embedded Computing Systems (TECS), v.5 n.1, p.200-224, February 2006
Kaustubh Patil , Kiran Seth , Frank Mueller, Compositional static instruction cache simulation, ACM SIGPLAN Notices, v.39 n.7, July 2004
John Regehr, Random testing of interrupt-driven software, Proceedings of the 5th ACM international conference on Embedded software, September 18-22, 2005, Jersey City, NJ, USA
Ajay Dudani , Frank Mueller , Yifan Zhu, Energy-conserving feedback EDF scheduling for embedded systems with real-time constraints, ACM SIGPLAN Notices, v.37 n.7, July 2002
Aravindh Anantaraman , Kiran Seth , Kaustubh Patil , Eric Rotenberg , Frank Mueller, Virtual simple architecture (VISA): exceeding the complexity limit in safe real-time systems, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Yifan Zhu , Frank Mueller, Feedback EDF Scheduling of Real-Time Tasks Exploiting Dynamic Voltage Scaling, Real-Time Systems, v.31 n.1-3, p.33-63, December 2005
Andr Baresel , David Binkley , Mark Harman , Bogdan Korel, Evolutionary testing in the presence of loop-assigned flags: a testability transformation approach, ACM SIGSOFT Software Engineering Notes, v.29 n.4, July 2004
Dennis Brylow , Jens Palsberg, Deadline Analysis of Interrupt-Driven Software, IEEE Transactions on Software Engineering, v.30 n.10, p.634-655, October 2004 | evolutionary testing;real-time systems;timing analysis;testing;genetic algorithms;static timing analysis |
608017 | Data Squashing by Empirical Likelihood. | Data squashing was introduced by W. DuMouchel, C. Volinsky, T. Johnson, C. Cortes, and D. Pregibon, in Proceedings of the 5th International Conference on KDD (1999). The idea is to scale data sets down to smaller representative samples instead of scaling up algorithms to very large data sets. They report success in learning model coefficients on squashed data. This paper presents a form of data squashing based on empirical likelihood. This method reweights a random sample of data to match certain expected values to the population. The computation required is a relatively easy convex optimization. There is also a theoretical basis to predict when it will and won't produce large gains. In a credit scoring example, empirical likelihood weighting also accelerates the rate at which coefficients are learned. We also investigate the extent to which these benefits translate into improved accuracy, and consider reweighting in conjunction with boosted decision trees. | Introduction
A staple problem in data mining is the construction of classification rules
from data. Some data warehouses are so large that it becomes impractical
to train a classification rule using all available data. Instead a sample of
the available data may be selected for training. For instance, the Enterprise
Miner from the SAS Institute features the SEMMA process, an acronym in
which the leading "S" stands for "sample".
DuMouchel et al. (1999) introduce "data squashing" to improve upon
sampling. Instead of scaling up algorithms to large data sets, one scales
down the data to suit existing algorithms. And instead of relatively passive
sampling from a large data set, they construct a data set in a way that should
make it suitable for training algorithms on.
Suppose that the original data consist of N pairs (X_1, Y_1), ..., (X_N, Y_N).
Here X_i is a vector of predictor variables and Y_i is a variable to be predicted
from X_i. In data squashing, one constructs a much smaller data set of pairs
(x_1, y_1), ..., (x_n, y_n), assigning weights w_i. There is not necessarily any connection
between points like x_1 and X_1 with the same index. Indeed a value
like x_1 might not correspond to X_i for any i. The idea is that training an
algorithm on the n weighted pairs can be much faster than training on
all N original data points. Large speed gains may be expected when the
squashed data t in main memory.
Here is an outline of this paper. Section 2 describes data squashing,
presents a version using empirical likelihood weights, and points out connections
between data squashing, numerical integration, and variance reduction
techniques used in Monte Carlo simulation and survey sampling. Finding the
empirical likelihood weights reduces to a very tractable convex optimization
problem. Empirical likelihood squashing also has theoretical underpinnings
that predict when it will and won't work, as outlined in Section 2.
Section 3 describes a credit scoring problem. The data values in it have
been simulated and distorted by obfuscating transformations, and the variable
names and data source have been hidden for confidentiality. But I am
assured that it remains a good test case for algorithms.
Section 4 applies logistic regression to small data samples, with and
without empirical likelihood reweighting. The reweighting accelerates the
rate at which coefficients are learned. Section 5 replaces logistic regression
with boosted decision trees. Section 6 presents our conclusions. We are
less pleased with the results of squashing than are DuMouchel et al. (1999),
though we describe the sort of problem where we expect squashing to add
the most value. Our different conclusions could be due to differences in the
algorithms, differences in the way the results are assessed, or simply because
the data sets are different.
We conclude this section with some more references. Madigan, Raghavan,
DuMouchel, Nason, Posse & Ridgeway (2000) offer a likelihood-based form
of squashing, geared to exploit a user-specified statistical model. Bradley,
Fayyad & Reina (1998) have goals similar to those of DuMouchel et al.
(1999) and Madigan et al. (2000), but instead of representing the data by
a weighted set of points, they employ mixture models. The elements in
the mixtures include Gaussian distributions, multinomial distributions, and
products thereof. Rowe (1983) describes some earlier work in this direction,
but the more recent cited work is much more ambitious, as befits the greater
computational power available today.
2 Data Squashing
We begin by outlining the data squashing method of DuMouchel et al. (1999).
Then we cast some older methods in a new light, as special cases of data
squashing.
Our notation differs somewhat from the original. DuMouchel et al. (1999)
do not distinguish predictor and response during squashing, deferring that
distinction to the training stage. This allows the same squashed data set to
be used for multiple prediction problems. They also choose weights $w_i$ so
that $\sum_{i=1}^n w_i = N$, whereas here the weights are scaled
so that the average weight is 1, that is $\sum_{i=1}^n w_i = n$.
Many training algorithms are not affected by this scaling, and
in any case it is simple to alternate between these conventions.
DuMouchel et al. (1999) choose $(w_i, x_i, y_i)$ by a process outlined here. The
first step is to group the $(X_i, Y_i)$ vectors into regions. They suggest several
ways to construct regions. In the simplest method, the points $(X_i, Y_i)$ in a
region are those that share values for every discrete variable, and also share
values for discretized versions of every continuous variable. For the points
in each region, some low order moments of the non-categorical variables are
computed. Then for each region, a set of points $(x_i, y_i)$ and corresponding
weights $w_i$ are chosen, so that the weighted moments on the squashed data
match, or nearly match, the unweighted moments on the original data.
For $m = 1, \ldots, M$, let $g_m$ be a function of the $(X, Y)$ pairs. A
moment within a region corresponds to taking for $g_m$ a product of powers
of non-categorical variables, multiplied by a function that is one inside that
region and zero outside of it. Let $Z_{im} = g_m(X_i, Y_i)$ and $\bar Z_m = (1/N)\sum_{i=1}^N Z_{im}$.
Ideal weights would provide a perfect match, with

$\frac{1}{n}\sum_{i=1}^n w_i\, g_m(x_i, y_i) = \bar Z_m, \quad m = 1, \ldots, M, \qquad (1)$

and

$w_i \ge 0, \quad \sum_{i=1}^n w_i = n. \qquad (2)$

Given enough moments and regions, ideal weights are not possible, and DuMouchel
et al. (1999) minimize

$\sum_{m=1}^M \omega_m \Big( \frac{1}{n}\sum_{i=1}^n w_i\, g_m(x_i, y_i) - \bar Z_m \Big)^2 \qquad (3)$

instead. Here $\omega_m > 0$, with larger values for the lower order moments. The
value of (3) is minimized over $w_i$, $x_i$, and $y_i$, for $i = 1, \ldots, n$. For a scalar
valued variable, like a person's age, or the number of children in a household,
the squashed data value need not match any of the sample values. But it is
not allowed to go outside the range of the data. Thus the squashed data may
have records with 2.2 children but should not have records with $-3$ children.
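As a concrete illustration of the moment-matching idea, the sketch below chooses only the weights for an already-fixed set of squashed points, which is a simplification: DuMouchel et al. (1999) also optimize over the $x_i$ and $y_i$. The function name, the use of scipy, and the interface are our own assumptions for exposition.

```python
import numpy as np
from scipy.optimize import minimize

def squash_weights(G, z_bar, omega):
    """Minimize the weighted squared moment mismatch, in the spirit of (3),
    over nonnegative weights only (the squashed points are held fixed).

    G     : (n, M) array with G[i, m] = g_m(x_i, y_i) at the squashed points
    z_bar : (M,) array of population moments (1/N) sum_i g_m(X_i, Y_i)
    omega : (M,) array of positive weights, larger for low order moments
    """
    n = G.shape[0]

    def objective(w):
        mismatch = G.T @ w / n - z_bar          # gap for each moment m
        return float(np.sum(omega * mismatch ** 2))

    res = minimize(objective, x0=np.ones(n), bounds=[(0.0, None)] * n)
    return res.x
```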
2.1 Sampling as squashing
Some issues in data mining echo those of sampling. Two good references on
sampling are Cochran (1977) and Lohr (1999).
Simple random sampling can be cast as a trivial version of squashing.
Let $(x_i, y_i)$, $i = 1, \ldots, n$, be a subset of $n$ distinct $(X_i, Y_i)$ pairs, taken as
a simple random sample (without replacement) of them. Take $w_i = 1$ for all $i$.
In stratified sampling, the population is partitioned into strata $h = 1, \ldots, H$
of sizes $N_h$, and a simple random sample of $n_h$ values is taken from
stratum $h$; the weight $w_i = nN_h/(Nn_h)$ for units from stratum $h$ makes (1) hold for functions $g_m$ that
are indicators of the strata.

The regression estimator is used in sampling theory to incorporate a
known value of some population mean. Suppose that $\bar Z_m = (1/N)\sum_{i=1}^N g_m(X_i, Y_i)$
are known for $m = 1, \ldots, M$. Then form the weights

$w_i = 1 + (\bar Z - \bar z)^\top S^{-1} (Z_i - \bar z), \qquad (4)$

where $Z_i = (g_1(x_i, y_i), \ldots, g_M(x_i, y_i))$ for sampled point $i$,
$\bar z = (1/n)\sum_{i=1}^n Z_i$, and $S = (1/n)\sum_{i=1}^n (Z_i - \bar z)(Z_i - \bar z)^\top$.
The regression weights satisfy (1) for $m = 1, \ldots, M$. The regression
estimator can be shown to subsume stratification by introducing indicator
variables $z_m$. Regression and stratification can also be combined in several
ways.
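The regression weights in (4) have a closed form; the sketch below computes them directly, with a tiny ridge term added only for numerical stability (the ridge and the function name are our additions, not part of the formula).

```python
import numpy as np

def regression_weights(Z, z_bar_pop, ridge=1e-10):
    """w_i = 1 + (Zbar - zbar)' S^{-1} (Z_i - zbar): the weighted sample means
    of the columns of Z then match the known population means z_bar_pop.
    Z is (n, M), one row of constraint values per sampled point."""
    n, M = Z.shape
    z_bar = Z.mean(axis=0)
    D = Z - z_bar
    S = D.T @ D / n + ridge * np.eye(M)
    lam = np.linalg.solve(S, z_bar_pop - z_bar)
    return 1.0 + D @ lam        # note: individual weights may be negative
```

A quick check is that `(regression_weights(Z, z_bar_pop)[:, None] * Z).mean(axis=0)` reproduces `z_bar_pop` up to the ridge term, while the minimum weight can easily be negative, which motivates the empirical likelihood approach described next.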
The regression estimator is also widely used in Monte Carlo simulation.
There it is known as the method of control variates. Two general references
are (Bratley, Fox & Schrage 1987, Ripley 1987). Hesterberg (1995) has a
good presentation of the reweighting approach to control variates.
2.2 Empirical likelihood squashing
The problem with regression weights is that they can take negative values,
and these may be unusable in some training algorithms. If one
insists that $w_i \ge 0$, then either there are no solutions to (1) and (2), or else
there is an $n - M - 1$ dimensional family of solutions. When there are no
solutions, one might either increase $n$ or remove some of the moments from
consideration.

Suppose that there is an $n - M - 1$ dimensional family of solutions. It
is natural to pick the one that is somehow closest to having equal weights.
The empirical likelihood weights are those that maximize $\prod_{i=1}^n w_i$
(equivalently $\sum_{i=1}^n \log w_i$) subject to $w_i \ge 0$, $\sum_{i=1}^n w_i = n$, and
$(1/n)\sum_{i=1}^n w_i Z_i = \bar Z$. Owen (1990) describes how to compute
these weights. It reduces to minimizing a convex function over a convex
domain, which can be taken to be $M$ dimensional Euclidean space. An S-plus
function available at http://www-stat.stanford.edu/owen computes the
empirical likelihood weights.
Empirical likelihood provides one way of picking the weights $w_i$ that
are closest to equality. One can also use other distance measures, such
as the Kullback-Leibler distance $\sum_{i=1}^n w_i \log w_i$
or the Hellinger distance $\sum_{i=1}^n (w_i^{1/2} - 1)^2$.
Empirical likelihood weights have an advantage in that
their computation is slightly simpler than the alternatives. Minimizing the
Euclidean distance $\sum_{i=1}^n (w_i - 1)^2$ is simpler still, but reduces to the regression
weights (4) that may be negative (Owen 1991).
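The sketch below is an illustrative empirical likelihood solver, not the S-plus routine cited above. It minimizes the convex dual criterion in the $M$-dimensional multiplier $\lambda$, using the usual quadratic extension of the logarithm near zero so that the objective stays finite, and then recovers $w_i = 1/(1 + \lambda^\top(Z_i - \bar Z))$.

```python
import numpy as np
from scipy.optimize import minimize

def _log_star(u, eps):
    """log(u) for u >= eps, with a quadratic extension below eps so the
    function stays finite and the overall dual stays convex."""
    return np.where(u >= eps,
                    np.log(np.maximum(u, eps)),
                    np.log(eps) - 1.5 + 2.0 * u / eps - 0.5 * (u / eps) ** 2)

def el_weights(Z, z_bar_pop):
    """Empirical likelihood weights: maximize sum_i log w_i subject to
    w_i >= 0, sum_i w_i = n and (1/n) sum_i w_i Z_i = z_bar_pop."""
    n, M = Z.shape
    D = Z - z_bar_pop                     # centered constraint values
    eps = 1.0 / n

    def dual(lam):                        # convex in lam
        return -np.sum(_log_star(1.0 + D @ lam, eps))

    lam_hat = minimize(dual, np.zeros(M), method="BFGS").x
    w = 1.0 / (1.0 + D @ lam_hat)
    return n * w / w.sum()                # rescale so the average weight is 1
```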
2.3 Benefits of weighting
Stratification, or more generally regression weighting, has the advantage of
reducing the variance of associated estimators. Let $h(X_i, Y_i)$ be a function of
the data cases. Let $\bar H = (1/N)\sum_{i=1}^N h(X_i, Y_i)$.
From a simple random sample, the estimate of $\bar H$ is
$\bar h = (1/n)\sum_{i=1}^n h(x_i, y_i)$, which has variance
approximately $\sigma_H^2/n$, where $\sigma_H^2 = (1/N)\sum_{i=1}^N (h(X_i, Y_i) - \bar H)^2$.
The main error in this approximation is a multiplicative factor $1 - n/N$, which
we take to be virtually one for data squashing.

When $M = 1$, the effect of regression weighting is to reduce the variance
to $(1 - \rho^2)\sigma_H^2/n$, where $\rho$ is the correlation between $h(X_i, Y_i)$ and $Z_{i1}$.
For $M \ge 1$ the reduction factor is $1 - R^2$, where $R^2$ is the proportion of variance
of $h(X_i, Y_i)$ explained by a linear regression on $Z_i$. Empirical likelihood can be
shown to reduce the asymptotic variance of estimated
means by the same factor that regression estimators do.
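As a purely illustrative calculation (the correlation value is hypothetical, not estimated from any data set in this paper), a single well-correlated constraint with $\rho = 0.9$ gives
\[
(1-\rho^2)\,\frac{\sigma_H^2}{n} \;=\; (1-0.81)\,\frac{\sigma_H^2}{n} \;=\; 0.19\,\frac{\sigma_H^2}{n} \;\approx\; \frac{\sigma_H^2}{5.3\,n},
\]
so the reweighted sample of size $n$ estimates $\bar H$ about as precisely as an unweighted simple random sample roughly five times as large.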
A training method that estimates means more accurately can often be
shown to predict more accurately. In the simplest cases, like linear regres-
sion, the prediction is constructed as a smooth function of some sample
moments. In more complicated settings, like maximum likelihood estimation,
a parameter vector $\theta$ is defined by the equations

$\frac{1}{N}\sum_{i=1}^N \frac{\partial}{\partial\theta} \log f(X_i, Y_i; \theta) = 0,$

and the estimate $\hat\theta$ solves the equations

$\frac{1}{n}\sum_{i=1}^n w_i \frac{\partial}{\partial\theta} \log f(x_i, y_i; \theta) = 0. \qquad (5)$

Qin & Lawless (1994) show that empirical likelihood weights produce
a variance reduction for $\hat\theta$ compared to an unweighted estimate. The extent
of the reduction depends on how well the $Z_i$ are correlated with the derivatives
being averaged in (5). Baggerly (1998) shows that the same reduction holds
for other distance measures such as Kullback-Leibler and Hellinger.
2.4 Diminishing returns
Better parameter estimates translate directly into better prediction rules,
but generally there are diminishing returns. For example, consider a logistic
regression in which

$\Pr(Y_i = 1 \mid X_i) = \frac{\exp(X_i^\top\beta)}{1 + \exp(X_i^\top\beta)}$

for a parameter vector $\beta$. Suppose for simplicity that the logistic
regression is in fact accurate, so that knowing $\beta$ means knowing the Bayes
rule. In ordinary sampling the estimate $\hat\beta$ approaches $\beta$ with an error of
order $n^{-1/2}$. For weighted misclassification losses, the loss using $\hat\beta$ is then
typically $O(n^{-1})$ above the Bayes loss (Wolff, Stork & Owen 1996). The
reason is that at the Bayes rule the derivative with respect to $\beta$ of the expected
misclassification is zero. We expect an error of approximate form $B_0 + An^{-1}$
if no squashing is used, and of the approximate form $B_0 + A_0 n^{-1}$ if regression
or empirical likelihood squashing is used, with a fixed list of $M$ functions.
Generally we can expect that $A_0 \le A$, but if the Bayes error $B_0$ dominates
the estimation error $An^{-1}$, then regression or empirical likelihood squashing
will bring only a small benefit.
If the logistic model fails to hold, then instead of taking $B_0$ to be the
Bayes error, take it to be the best error rate available within the logistic
family.
The squashing method of DuMouchel et al. (1999) adjusts more than the
weights. It also estimates new values x i and y i . Since these are not sampled,
we cannot quote results like those for empirical likelihood squashing.
But we can suppose that searching for $x_i$, $y_i$ and $w_i$ should have the effect
of matching (or approximately matching) many more than $M$ functions for
the same value of $n$. This is similar to the way in which a Gauss quadrature
rule, which adjusts both the locations and the weights in numerical integration
(Davis & Rabinowitz 1984), integrates higher order polynomials than one
that only adjusts the weights or the locations. By matching more function
values, it should be possible to come closer to the Bayes error rate, but not
of course to reduce the Bayes error rate. Thus we could reasonably expect
an error of the form $B_0 + A_1 n^{-1}$ with a still smaller constant $A_1 \le A_0$.
Thus we expect squashing, in its various forms, to be effective in cases
where the Bayes error is dominated by sampling or approximation errors.
In particular, settings with a zero Bayes error may benefit enormously from
squashing.
2.5 When to expect benefits
It is reasonable to expect better model coefficients from squashing, albeit with
eventually diminishing gains in prediction accuracy. In order to realize gains
in the coefficients, they must be related to quantities that are correlated with
the values of $g_m(X_i, Y_i)$. More precisely, if the vector $\partial \log f(X_i, Y_i; \theta)/\partial\theta$ is
well approximated by a linear combination of the $g_m(X_i, Y_i)$, then we can expect
an improved estimate of $\theta$.
It helps to distinguish between local and global features of the data. A
logistic regression uses global features of the data. It is reasonable to expect
that these features could be highly correlated with judiciously chosen global
features $g_m$.
A nearest neighbor method uses local features, such as averages of Y i
over small regions (determined by X values). It is not reasonable to expect
one of these local averages to be correlated with global features of the data.
Therefore squashing with global features g m will not help nearest neighbors
much, and for an improvement, one must consider ways to employ a large
number of local functions g m .
A method like a classification tree would seem a priori to be intermediate.
The first split is a global feature of the data. The final splits made, at least
in a large tree, are very local features. Thus squashing with global $g_m$ should
help on the first splits but not the later ones.
3 Example data
This data set is inspired by a real commercial problem, but the problem has
been disguised (from me) in order to preserve confidentiality.
The training data have 92000 rows and 46 columns. The data arise from
a credit scoring problem, but their source is not known, and the data set
has been transformed and obfuscated, as described below. Each row of data
describes one credit case, and the rows are presented in random order. Each
column contains one variable. The response variable is in column 41, and is
a 0 or 1 describing bad and good credit outcomes respectively. It may have
been possible to attribute a dollar value to each bad or good outcome, but
such dollar values were not in the data I received, and indeed may not have
existed.
The data have roughly 85% good cases although this is not necessarily the
percentage good in the population. Variables 2 through 40 and 42 through 46
are predictor variables describing the credit history of the case.
The original data values have been transformed. The original values for a
given predictor were put into a vector v of 92000 elements. The transformed
values are $z = [(v - \min(v))/(\max(v) - \min(v))]^p$, where the power $p$ was
chosen at random, independently for each predictor variable. Missing values
remained missing after transformation, and did not contribute to the min(v)
and max(v).
Column 1 is a score variable used to predict the response. It was constructed
with the knowledge of what all the input variables mean. An unknown
and possibly proprietary algorithm was used to generate this column.
This custom-built score serves as a benchmark against which to compare the
performance of training methods.
Missing values in the original data were stored as 9999.0. The missing
values are interpreted as "not available" or "not believed". There are 309262
missing values, about 8.5% of the predictor values. Column 19 was almost
97% missing. Dropping that column and 5 other columns that were more
than 10% missing left the data with 78165 missing values. Values were
imputed for the other missing entries as described below. The result is 38
remaining predictors.
Prior to building prediction models with this data, a transformation was
applied to each column of predictor values. The non-missing values $X_{ij}$
were replaced by $X_{ij}$ raised to a power $p'$ chosen from among a fixed grid of
candidate values, the largest being 10. The value $p'$ was chosen to maximize a normalized
separation of means,

$\frac{|\bar X_{1j}(p') - \bar X_{0j}(p')|}{\sqrt{s_{1j}^2/n_{1j} + s_{0j}^2/n_{0j}}},$

where $\bar X_{yj}(p')$ and $s_{yj}^2$ are the means and variances of $X_{ij}^{p'}$ over pairs $(X_{ij}, Y_i)$
with $Y_i = y$ and nonmissing $X_{ij}$, and $n_{yj}$ is the number of cases with $Y_i = y$ for which $X_{ij}$ is not
missing.

Each missing value $X_{ij}^{p'}$ was simply replaced by an imputed value,
$(\bar X_{0j}(p') + \bar X_{1j}(p'))/2$. The idea was to replace the missing values
by ones that were as neutral as possible regarding the classification at hand.
There were 24430 observations with one or more imputed values.
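A sketch of the per-column preprocessing just described, under the assumptions made explicit above: the separation criterion is written as a two-sample $t$-type statistic, and the candidate power grid is a placeholder, since the full grid is not recoverable from the text.

```python
import numpy as np

CANDIDATE_POWERS = [0.25, 0.5, 1.0, 2.0, 5.0, 10.0]   # placeholder grid

def separation(xp, y):
    """Normalized separation of class means for a transformed column xp
    (NaN marks a missing value), in the spirit of a two-sample t statistic."""
    stats = {}
    for cls in (0, 1):
        vals = xp[(~np.isnan(xp)) & (y == cls)]
        stats[cls] = (vals.mean(), vals.var(ddof=1), len(vals))
    (m0, v0, n0), (m1, v1, n1) = stats[0], stats[1]
    return abs(m1 - m0) / np.sqrt(v1 / n1 + v0 / n0)

def transform_and_impute(x, y):
    """Pick the power with the largest separation, then replace missing values
    by the midpoint of the two class means of the transformed column."""
    best_p = max(CANDIDATE_POWERS, key=lambda p: separation(x ** p, y))
    xp = x ** best_p
    mid = 0.5 * (np.nanmean(xp[y == 0]) + np.nanmean(xp[y == 1]))
    return np.where(np.isnan(xp), mid, xp), best_p
```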
4 Logistic regression
The first classification method to be applied was simple logistic regression.
The training data contain the cases in randomized order. Therefore
a simple random sample is obtained by taking $(x_i, y_i) = (X_i, Y_i)$ for $i = 1, \ldots, n$.
A logistic regression was fit to the first $n$ cases for $n =$
1000, 2000, 4000, 8000, and 92000. Both weighted and unweighted logistic regressions
were run. For $n = 92000$ all the weights are 1.0, making the weighted
and unweighted analyses identical.

The weights were chosen so that for each predictor $j$ and each $y \in \{0, 1\}$,
the weighted mean of those $x_{ij}$ with $y_i = y$ matched the unweighted
mean of those $X_{ij}$ with $Y_i = y$. The reason for such a choice is as follows.
Some simple global classifiers are based solely on response group conditional
means, variances and covariances of predictors, so it is reasonable to expect
these conditional means to carry some relevant information. There are too
many predictor variables to allow use of all of the conditional second moments.
The conditional moments can be matched by imposing equation (1),
taking the $g_m$ to be the class indicators and the products of each predictor
with each class indicator.

The empirical likelihood weights for one of the samples are shown in Figure 1. The smallest weight is
0.35 and the largest is 3.25. As $n$ increases the weights become more nearly
equal to one. For smaller samples it was not possible to reweight the data to match
the conditional moments using only positive weights. This is why $n = 1000$
is the smallest sample size we use.
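One way to assemble the class-conditional mean constraints described above, so that they can be passed to a weight solver such as the empirical likelihood sketch in Section 2.2; the helper and its interface are illustrative, not the construction actually used for the reported results.

```python
import numpy as np

def conditional_mean_constraints(X, Y):
    """For each class y in {0, 1}, return the indicator column 1{Y == y} and
    the columns X[:, j] * 1{Y == y}; matching the means of all these columns
    between the weighted sample and the population matches the class
    proportions and the class-conditional predictor means."""
    cols = []
    for y in (0, 1):
        ind = (Y == y).astype(float)
        cols.append(ind[:, None])
        cols.append(X * ind[:, None])
    return np.hstack(cols)

# Hypothetical usage with the earlier el_weights sketch:
# Z_pop  = conditional_mean_constraints(X_all, Y_all).mean(axis=0)
# Z_samp = conditional_mean_constraints(X_all[:n], Y_all[:n])
# w      = el_weights(Z_samp, Z_pop)
```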
Figure 2 shows how the Euclidean distance between estimated coefficient
vectors and the full data coefficient vector decreases as $n$ increases. The
decrease is faster for the empirical likelihood weighted estimates. In terms of
accuracy in estimating coefficients, empirical likelihood weighting increases
the effective sample size by roughly a factor of 4.
Figure 1: Shown are the empirical likelihood weights for the credit scoring data.
Figure 2: Shown are the distances between the logistic regression coefficients
for all 92,000 sample points and those based on subsamples. The lower line
is for weighted logistic regressions using empirical likelihood weights.

Increased accuracy in coefficient estimation leads to increased accuracy
in classification, but with diminishing returns. Figure 3 shows receiver operating
characteristic (ROC) curves for several classifiers, described below,
on this data. An ROC curve can be plotted for any classifier that produces
a score function $\phi(X)$ from the predictors. The interpretation of the score
function is that larger values of $\phi(X)$ make $Y = 1$ more likely. A point is
classified as $Y = 1$ only if $\phi(X) > \phi_0$, where the threshold $\phi_0$ is chosen
to trade off the error rates of false positive and false negative predictions. The
ROC curve plots the proportion of the good cases ($Y = 1$) with $\phi(X) > \phi_0$
versus the proportion of the bad cases ($Y = 0$) with $\phi(X) > \phi_0$. As $\phi_0$
decreases from $+\infty$ to $-\infty$ the ROC curve arcs from (0, 0) to (1, 1).
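The construction just described is straightforward to compute from a score vector; in this sketch (function and variable names are ours, not from the paper) the threshold sweeps down through the observed scores.

```python
import numpy as np

def roc_curve(scores, labels):
    """Return (bad_accept_rate, good_accept_rate) as the threshold phi_0
    decreases through the observed scores; labels are 1 = good, 0 = bad."""
    order = np.argsort(-np.asarray(scores))
    y = np.asarray(labels)[order]
    good = np.cumsum(y) / max(y.sum(), 1)           # fraction of good accepted
    bad = np.cumsum(1 - y) / max((1 - y).sum(), 1)  # fraction of bad accepted
    return bad, good
```

Plotting `good` against `bad` traces the arc from (0, 0) to (1, 1) described above.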
The top ROC curve in Figure 3 corresponds to the customized score vector
supplied with the data. The other solid lines correspond to empirical
likelihood weighted logistic regressions on $n$ points for $n = 1000$, 2000, 4000,
8000, 92000. These lines increase with increasing $n$. The dashed lines correspond
to unweighted logistic regression for $n = 1000$, 2000, 4000, 8000; for $n =$
92000, the weighted and unweighted ROC curves are the same.
There is a reference point at (0.2, 0.8). This describes a hypothetical classification
in which the rule accepts 80% of the good cases and only 20% of the
bad ones. The custom rule is nearly this good.
ROC curves tend to make performance differences among classifiers look
very small. Part of the reason is that the underlying probabilities are plotted
over ranges from 0% to 100%, while important distinctions among real
classifiers can be much smaller than this. For example the difference between
75% and 80% acceptance of good cases, while small on a plot like this, is
likely to be of practical importance.
Despite this, it is clear that there are diminishing returns as n increases,
whether weighted or unweighted. Logistic regression on 8000 cases produces
an ROC curve that essentially overlaps the logistic regression on all 92000
cases. Empirical likelihood weighting produces such overlap at a smaller
sample size. Although the coefficients keep getting better,
performance tends to converge to a limit. It is reasonable to expect that
better squashing techniques would get logistic regressions as good as the full
data logistic regression at even smaller sample sizes than empirical likelihood
weighted logistic regression does.

Figure 3: Shown are ROC curves for logistic regressions and a proprietary
score. The percent of good cases classified as good is plotted against the
percent of bad cases classified as good. For example, the point at (0.2, 0.8)
describes an unrealized setting in which 80% of good cases would be accepted,
along with only 20% of bad cases. The solid curves, from top to bottom, are
for: a proprietary score, empirical likelihood weighted logistic regression on
samples of sizes 92000, 8000, 4000, 2000, and 1000. The dashed curves, from
top to bottom, are for unweighted logistic regression on 8000, 4000, 2000, and
1000 cases. The curves overlap significantly, as described in the text.
The ROC curves in Figure 3 are computed on N points including the
points used for training. But there is little risk of overfitting here. The
sample sizes $n$ are all either very large compared to 39 or small compared to
$N$ (or both). As evidence that these logistic regressions do not overfit, notice
that logistic regression on all 92000 cases has not produced an ROC curve
much better than one on 4000 cases.
5 Boosted Trees
Logistic regression is by now a fairly old classification technique. More modern
classification methods can also make use of observation weights. We also
considered boosted classification trees. Boosted classification trees make predictions
by combining a very large number of typically small classification
trees. In the extreme, the individual trees have only one split. Taking a
weighted sum of such stumps produces an additive model.
Friedman (1999a) and Friedman (1999b) describe Multiple Additive Regression
Tree, or MART, modeling for constructing boosted tree classifiers.
This builds on earlier work by Friedman, Hastie & Tibshirani (1999) which
built in turn on Freund & Schapire (1996).
ROC curves were obtained for MART using samples of size 1000, 2000,
4000, 8000 and 92000, using both empirical likelihood weighted and un-weighted
analyses. When plotted, these ROC curves tend to be very hard to
distinguish from each other as well as from those of logistic regression and
the customized method. As in Figure 3 the curves separate the most visually,
over the interval between 0.1 and 0.2 on the horizontal axis. Over that range
they are roughly parallel with some crossings among close curves.
Custom 0.217 0.479 0.651 0.792 0.9448 0.99217 0.99761 0.99895 0.999793
Mart 0.190 0.485 0.634 0.774 0.9433 0.99172 0.99733 0.99891 0.999871
Logistic 0.163 0.431 0.604 0.754 0.9244 0.99026 0.99720 0.99881 0.999858
Mart 4 0.188 0.456 0.626 0.770 0.9361 0.99150 0.99689 0.99877 0.999832
Mart 8 0.189 0.477 0.636 0.774 0.9419 0.99147 0.99707 0.99889 0.999871
Mart 1w 0.143 0.430 0.585 0.745 0.9238 0.98980 0.99656 0.99844 0.999651
Mart 2w 0.178 0.432 0.598 0.750 0.9274 0.98830 0.99571 0.99829 0.999625
Mart 4w 0.170 0.431 0.599 0.753 0.9326 0.98950 0.99624 0.99846 0.999754
Mart 8w 0.183 0.477 0.633 0.775 0.9435 0.99163 0.99720 0.99885 0.999819
Table
1: ROC values for boosted trees. Shown are the heights of 11 ROC
curves, corresponding to 11 methods as described in the text. The ROC
curves are evaluated at horizontal values given in the top row.
Table 1 shows numerical values from these ROC curves. Values smaller
than 0.5 are given to 3 significant places, while values close to 1 are given so
that their difference from 1 may be computed to 3 significant places. In the
region over (0.10, 0.20) both weighted and unweighted MART models tend to
do better on larger sample sizes. The use of weights sometimes helps and
sometimes hurts, but does not seem to make much difference. MART models
respond to both global and local features of the data. We anticipated that
weighting might help the global portion but not the local one. It does not
appear that weights greatly accelerate MART.
We also investigated boosted trees using an evaluation copy of Mineset.
We were unable to obtain results better than logistic regression for this data,
and there did not appear to be any benefit to using empirical likelihood
weights, even when boosting stumps (which are global in nature).
6 Discussion
The results for empirical likelihood based data squashing are not as encouraging
as those in the original paper by DuMouchel et al. (1999). Here we
outline the differences, and then describe where more positive results might
be expected.
First, they based their comparisons primarily on the quality of estimated
logistic regression coefficients. Like them, we get good results for coefficients,
but find diminishing returns for classification performance. They also
compare predicted probabilities from squashed models to predicted probabilities
from the full data set. Such probabilities are deterministic functions
of coefficients and so they won't show diminishing returns the way that
misclassification rates do.
A second difference is that we report results on some local methods in
addition to global ones, and found little benefit there. This is an area where
more ambitious squashing as described in DuMouchel et al. (1999) might be
able to make a big improvement.
Thirdly, it is entirely possible for optimistic results to be
appropriate on one data set and not on another. More data sets will need
to be investigated. Their data set had only 7 predictors while we used 38.
As a consequence they were able to look at interactions, whereas we did not
consider our sample sizes large enough for that. Nor is it reasonable to match
all interaction moments in our case. The two data sets were of broadly comparable total
size once the number of variables is taken into account: they had 744963 records with 7 predictors, compared to our 92000 records with 38.
We should point out that the original motivation for squashing is speed,
although much of this article stresses accuracy. The reason is that essentially
the same speed gains can be achieved by sampling. So for squashing to
represent a gain over sampling, it should be more accurate for the same n.
The diminishing returns suggest that for some small n squashing could be
much better than sampling, but for larger n the practical value will disappear.
This suggests that squashing will be most useful on problems where even
when one fills computer memory with data, one is undersampling.
Here are some settings which maximize the promise of squashing. First,
problems with near zero Bayes error might benefit more from squashing.
Secondly, while in classification one only needs to compute a score on the
right side of a threshold, in other problems one must predict a numerical
value (e.g. profit versus profitable). Here the diminishing returns might set
in much later. Third, when the records have only 7 or 38 predictors a very
large $n$ will fit in memory. But when the records have many thousands or
millions of predictors, much smaller values of $n$ will fit in memory and there
could be more to gain from some form of squashing.
Finally, the squashing described in DuMouchel et al. (1999) might serve
as a good data obfuscation device. An organization could release a squashed
training data set and a squashed test set for researchers to evaluate learning
methods, without ever releasing a single confidential data record.
Acknowledgements
I thank Bruce Hoadley for valuable discussions on data mining and Jerome
Friedman for making available an early version of his MART code. This work
was supported by NSF grants DMS-9704495 and DMS-0072445.
--R
Scaling clustering algorithms to large databases
A Guide to Simulation (Second Edition)
Methods of Numerical Integration (2nd ed.)
Squashing flat files flatter
Additive logistic regression: a statistical view of boosting
Stochastic Simulation
--TR | reweighting;misclassification loss;MART;database abstraction;credit scoring |
608041 | Characterization of E-Commerce Traffic. | The World Wide Web has achieved immense popularity in the business world. It is thus essential to characterize the traffic behavior at these sites, a study that will facilitate the design and development of high-performance, reliable e-commerce servers. This paper makes an effort in this direction. Aggregated traffic arriving at a Business-to-Business (B2B) and a Business-to-Consumer (B2C) e-commerce site was collected and analyzed. High degree of self-similarity was found in the traffic (higher than that observed in general Web-environment). Heavy-tailed behavior of transfer times was established at both the sites. Traditionally this behavior has been attributed to the distribution of transfer sizes, which was not the case in B2C space. This implies that the heavy-tailed transfer times are actually caused by the behavior of back-end service time. In B2B space, transfer-sizes were found to be heavy-tailed. A detailed study of the traffic and load at the back-end servers was also conducted and the inferences are included in this paper. | Introduction
The explosive popularity of Internet has propelled its usage
in several commercial avenues. E-commerce, the usage of
Internet for buying and selling products, has found a major
presence in today's economy. E-commerce sites provide
up-to-date information and services about products to users
and other businesses. Services ranging from personalized
shopping to automated interaction between corporations are
provided by these web-sites. It has been reported that e-commerce
sites generated $132 Billion in 2000, more than
double of the $58 Billion reported in 1999 [1]. Even though
the power of the servers hosting e-commerce sites has been
increasing, e-commerce sites have been unable to improve
their level of service provided to the users. It has been reported
that around $420 Million has been lost in revenues
due to slow processing of the transactions in 1999. Thus it
is desirable and necessary to focus on the performance of
the servers used in these environments.
There are two main classes of e-commerce sites,
Business-to-Business (B2B) and Business-to-Consumer
(B2C), providing services to corporations and individual
users respectively. Web sites like Delphi, which provide services
to corporations like General Motors come under B2B
sites, whereas sites like Amazon.com providing services to
general users come under B2C sites.
This work was supported in part by the National Science Foundation.
In this paper, we have analyzed the characteristics of e-commerce
traffic. Traffic from a B2C and a B2B site is being
used for the study. The workload is initially inspected
for understanding the diurnal nature of the traffic. Different
load periods were identified for both the B2C and B2B en-
vironments. These have been found to be complimentary in
nature, which may be intuitive. A set of parameters were
chosen for each site for each component which would impact
the performance of the system to the maximum extent.
Statistical tests are then used to prove the self-similar nature
of the traffic at different scales. Two different tests are
used for validating the results for each of the parameters.
It has been observed that the arrival traffic is highly bursty
in nature, much more than the burstiness seen in normal
web-traffic [5]. The response-time distribution is found to
be heavy-tailed. This has been previously attributed to the
heavy-tailed nature of request and response file-sizes. But
the behavior of transfer sizes is not heavy-tailed, unlike the
general web-environment. The traffic arriving at the back-end
servers is characterized to obtain similar statistics about
the impact of burstiness on the system. Also preliminary
tests have shown that the back-end utilization is more bursty
than the front-end server utilization, the reasons for which
are explained later. A correlation is drawn between the behavior
of the front-end and the back-end servers under different
load conditions. Performance implications from the
results of the above experiments will give valuable information
for improving e-commerce server performance.
The workload characterization studied in this paper is
based on one representative system from each of the environments
(B2C and B2B). Considering the difficulty in obtaining
this valuable and guarded information from the e-commerce
sites, and the fact that the sites we have considered
are quite busy, the results, although preliminary, could be
valuable for future studies on e-commerce workload characterization
and server designs.
The rest of the paper is organized as follows. Related
work is outlined in Section 2. Section 3 discusses the architecture
of e-commerce sites along with a description of the
configuration of the sites used for this study. Section 4 discusses
the behavior of the workload and the traffic and load
characteristics of the front-end and back-end servers. The
concluding remarks are sketched in Section 5.
Note: For this study data from two popular sites (one
B2C and the other B2B) was used. Due to a non-disclosure
agreement (NDA), the identity of these sites is not revealed.
Throughout this work the two sites are identified as B2C site
and B2B site. Without the NDA, we would not have been
able to acquire the data for the study.
Related Work
Although there have been several studies reported on the
workload characterization of general web servers [3, 7, 10,
13], only a few studies have been reported on the characterization
of e-commerce traffic based on the client behavior.
The main reason for this shortcoming is the unavailability
of representative data. E-commerce sites have highly secure
information in the traces and access logs. Due to the
security implications e-commerce sites are reluctant to divulge
this information for research purposes. Due to this,
studies in this field are still in the preliminary stages. In [8]
the authors have developed a resource utilization model for
a server which represents the behavior of groups of users
based on their usage of the site.
It should also be noted that the existing work reported on
e-commerce traffic has been done on the front-end servers
only and to the best of our knowledge nothing has been reported
on the back-end servers. The back-end servers are the
ones which experience the maximum load in an e-commerce
environment [4]. We would like to characterize the load on
the back-end servers along with a study of the system characteristics
collected from system logs in E-commerce sites.
3 E-Commerce Architecture
A generic organization of e-commerce sites is depicted in
Figure
1. E-commerce sites can be broadly classified into
two different categories. Business to Business (B2B) and
Business to Consumer (B2C). The main difference between
the two categories of sites lies in the user population accessing
these sites. B2B sites serve transactions between different
businesses whereas B2C sites serve general users over
the Internet.
Figure 1: A generic e-commerce site (Internet users, edge router/ISP provider, cache machines, load balancer, web servers, firewall, database servers; front-end and back-end tiers).
Business-to-Business: One of the main characteristics of
this category of sites is the regularity in the arrival traffic.
It was observed that heavy traffic comes between 9am to
5pm, normal business hours. Regularity does not imply the
lack of heavy spikes in the traffic. There will be sustained
peak load on the system either due to seasonal effects or
due to the availability of different services at the site. These
sites can be categorized by the high amount of buying taking
place in them. It has been observed that the percentage of
transactions resulting in buying are very high compared to
those in B2C environment.
Business-to-Consumer: B2C servers are the normal e-commerce
sites where any user can get service. The security
involved in B2C site is only restricted to any financial
transactions involved, whereas in a B2B environment all the
transactions are normally done in secure mode. One implication
of this is that increased buying in a B2C environment
can throttle the system since the designed system does not
expect high percentage of buy transactions. Another important
characteristic of a B2C site is the very low tolerance
to delayed responses. This increases the need to make QoS
more important than providing absolute security for all the
transactions, hence security is reserved for transactions involving
buying.
3.1 Front-End and Back-End Servers
Typically the front-end servers are comprised of the web
server, application server, server load balancer, and the secure
socket layer (SSL) off-loader. Front-end web servers
serve requests from the clients and are the only authorized
hosts able to access the back-end database and application
services as necessary. The application servers are responsible
for the business logic services. The application server
will be the most heavily loaded server in the B2C envi-
ronment. This is due to the heavy traffic of dynamic and
secure requests arriving at the server. In a large scale e-commerce
site, there will be dedicated application servers,
alternatively these servers can be combined with the Web
Servers or the Database servers. Due to the heavy traffic
seen by e-commerce servers and also due to the availability
requirements, there will be a network of web servers instead
of a single monolithic server at the front-end. This
basically improves the scalability and fault-tolerance of the
server to any bursts of busy traffic. Load balancers help increase
the scalability of an e-commerce site. Load balancing
works by distributing user requests among a group of
servers that appear as single virtual server to the end user.
SSL is a user authentication protocol developed by Netscape
using RSA Data Security's encryption technology. Many
commerce transaction-oriented web sites that request credit
card or personal information use SSL. The SSL off-loader
typically decrypts all https requests arriving at the server.
The back-end servers mainly comprise of the database
servers and the firewall which would protect sensitive data
from being accessed by unauthorized clients. These firewalls
provide security services through connection con-
trol. They are predominantly used when protecting mission-critical
or sensitive data is of utmost importance. The
database servers reside in the back-end of the network and
house the data for e-commerce transactions as well as sensitive
customer information. This is commonly referred to
as the data services. The clients do not directly connect to
these servers, the front-end Web servers initiate connections
to these servers when a client conducts a series of actions
such as logging in, checking inventory, or placing an order.
Most e-commerce sites scale up their database servers for
scalability and implement fail-over clustering for high avail-
ability. Partitioned databases, where segments of data are
stored on separate database servers, are also used to enhance
scalability and high availability in a scale-out fashion.
3.2 B2C Configuration
A simplified configuration of the B2C site being used for
the study is given in Figure 2. This site comprises of ten
web servers, each one powered by a Intel Quad P-III systems
with a 512MB of RAM. The web servers run IIS 4.0
HTTP server. This cluster of web servers is supported by
three image servers, each one powered by a Dual P-II sys-
tem. As can be seen from the figure, the image servers serve
both the database servers and the front-end web servers. For
the purpose of our study, the image servers were considered
to be in the back-end system. The product catalog server,
connected to both the front-end and the back-end, runs an
NT 4.0 providing backup and SMTP services to the back-end
servers. The LDAP server is connected to the back-end.
Figure 2: Simplified configuration of the B2C site.
3.3 B2B Configuration
In the B2B space, the design of e-commerce sites is completely
different from their design in B2C space. Here the
user population is known a-priori. The transactions being
processed by each user arriving at the server is also known
with reasonable bounds. B2B sites serve a limited population
as opposed to B2C sites which aim at serving the entire
Internet. These aspects enable the designers to customize
the site to specific user requirements.
Scalability is one of the main issues that has to be taken
care of when designing such a customized system. So the design
is done as a cluster of B2C sites, interconnected to form
a large B2B portal. The interconnections between the individual
B2C components in the site determine the user population
to that site and also the services provided by that site.
Figure
3 shows a simplified version of the B2B site being
used for the study. Each of the web servers, can be individually
used as a B2C site with its own database and network
connection.
Figure 3: Simplified configuration of the B2B site.
4 Workload Characterization
In this study we have analyzed the behavior of e-commerce
servers with relation to the behavior of the incoming traf-
fic. Data was collected at different levels in the system.
Web Server access logs from the the front-end and the back-end
servers were collected at a granularity of 1 sec. This is
an application level data giving the load on the httpd. This
data will give the characteristics of the traffic arriving at the
system, average network bandwidth utilization, and the file
transfer rate.
For the system level information, data was collected from
the Performance logs from all the servers present in the site.
This data was collected at a granularity of 5 sec. This would
give information about the I/O bandwidth used, the processor
and disk utilization of the system etc. Data was collected
at a constant rate of 5 sec intervals. So this data is at
a higher scale than the logs from the web servers. But both
the scales are below the non-stationarity time scale used for
the analysis.
Data was collected at the server and the performance
monitor for an entire day. A weekday is used for data collection
since this would represent the average traffic. Addition-
ally, data for a five day period was used to study the average
behavior of the traffic over a long period of time.
4.1 Characteristics of the Workload
The main differences between general web and e-commerce
workload are the following.
1. Presence of a high level of Online Transaction Processing
(OLTP) activity is observed among the transactions
at the server. This is due to the database transactions
accruing for every request from the user. Due to security
reasons most of the data is present in the database
server which is protected by a secure firewall. This prevents
the web server from responding to most of the requests
without sending a query to the back-end server.
2. A large proportion of requests come in secure mode.
B2C traffic has lesser secure traffic, B2B sites experience
almost complete secure traffic from users. This
is due to the heavy security constraints present in industry
to industry transactions. Increased amount of
Figure 4: Arrival process at the B2C site (arrival rate in requests/sec vs. time).
Figure 5: Arrival process at the B2B site (arrival rate in requests/sec vs. time, 6-second buckets).
secure transactions implies heavy processing load at
the front-end server. Most of the sites have SSL off-
loaders, which do encryption/decryption of requests to
reduce the load on the system. This process adds to
the response time. Aggregating these transactions with
normal transactions increases the variability in the response
times observed by the user.
3. The proportion of dynamic requests (that require some
amount of processing) is very high, as was expected. In
fact, in most e-commerce sites almost all requests are
handled as dynamic requests.
4.2 Front-End Characterization
A visual inspection reveals the workload at e-commerce
sites to be more bursty than normal web workload. To study
this behavior, the following parameters were used, which
would have the maximum impact on the behavior of the traf-
fic: Arrival process, Utilization of the server, Response time,
Request file sizes, and Response file sizes.
4.2.1 Arrival Process
Figures
4 and 5 show the arrival process at the B2C and
e-commerce sites for an entire day (12am-12am). The
data shows traffic on a normal weekday with an average arrival
rate of 0.65 requests/sec at the front-end web server
for the B2C site and around 1 req/sec arrival rate at one of
the web servers in the B2B site. A visual inspection reveals
the burstiness in the arrival process. The B2C server is a
4P system with an average processor utilization of 6% per
processor and disk utilization of 2% during the period starting
from 9.00am till 6.00pm. The low utilization is typical
of e-commerce sites since they are designed for much higher
load and sustain a very minimal load during normal working
periods. It is the high load periods showing bursts of orders
of magnitude more than normal operating parameters which
cause concern for better capacity planning and performance
analysis of these systems.
Figures
4 and 5 show that the sites have distinct high
and low load periods during the course of a day. For the
B2C site, the busy period starts around 6:00pm in the evening
and ends at around 11:00pm at night. Since the site serves
general consumers, the traffic is heavy during
the after-office periods. Distinctive low periods during
the morning, between 7:30am and 11:30am, can also be observed.
In the case of the B2B site, the traffic concentration
lies mostly during normal office hours, between 9:00am and
8:00pm, which is intuitive. It should be noticed that the
graphs show aggregated arrival traffic for the B2B site and
the averaged arrival process for the B2C site.
The Abry-Veitch (AV) [11] estimator was used for
estimating the Hurst-parameter (H-parameter) [6] for the arrival
time-series. This is known to be a reliable test for workloads
with busy periods showing a non-stationary behavior.
Hurst parameter is also calculated using the R/S plot test [6].
Reliability of this test under low time-scales for e-commerce
traffic is tested by comparing the H-parameters obtained using
the two methods.
The H-parameter is estimated to be 0.662 using the AV
test. This shows that the arrival process at the B2C site is
self-similar in nature. The Hurst parameter is also estimated
to be 0.662 using the R/S plot test for the B2C site, which
matches the estimation made by the AV-estimator. Similar
test was done for the arrival traffic at the B2B site. Using
the AV-estimator the H-parameter was estimated at 0.69,
whereas the R/S plot gave an estimate of 0.70 for the H-
parameter, which is a good approximation.
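A minimal version of the R/S estimate used above; the block sizes and the least-squares fit over all of them are choices of this sketch, not of the paper. For each block size the rescaled range R/S is averaged over blocks, and the Hurst parameter is the slope of log(R/S) against log(block size).

```python
import numpy as np

def hurst_rs(x, block_sizes=(16, 32, 64, 128, 256, 512)):
    """Rough R/S estimate of the Hurst parameter of a time series x."""
    x = np.asarray(x, dtype=float)
    log_n, log_rs = [], []
    for n in block_sizes:
        rs_vals = []
        for start in range(0, len(x) - n + 1, n):
            block = x[start:start + n]
            dev = np.cumsum(block - block.mean())      # cumulative deviations
            r = dev.max() - dev.min()                  # range
            s = block.std(ddof=0)                      # scale
            if s > 0:
                rs_vals.append(r / s)
        if rs_vals:
            log_n.append(np.log(n))
            log_rs.append(np.log(np.mean(rs_vals)))
    slope, _ = np.polyfit(log_n, log_rs, 1)            # H is the slope
    return slope
```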
4.2.2 Processor Utilization
Figures
6 and 7 show the utilization of the front-end web
server for the B2C and B2B sites respectively. As explained
earlier, this data is collected between 9:00am till 5:00pm at
a granularity of 5 secs for the B2C site. For the B2B site the
data represents the activity between 10:00 am in the morning
till 9:30 am the next day morning. The B2C server sustains
a constant load throughout the day, with an average load of
7% on each of the four processors. High and low load periods
can be observed on the B2C server during the course
of the day. This behavior is absent in the B2B server. This
is due to the a-priori knowledge of the transactions and load
from users in the B2B space. B2B sites are customized for
specific traffic patterns and a normal traffic would not affect
the load on the system to a higher degree. Thus the load
on the system appears almost constant even though there is
a variation in the arrival rate at the server. The time-series
obtained from the utilization was also tested for self-similar
behavior. The AV-wavelet based test and the R/S plot test
are used for estimating the H-parameter.

Figure 6: Utilization (4P) of the front-end web server at the B2C site.

Figure 7: Processor (2P) utilization of one of the front-end web servers at the B2B site (5-second buckets).

The estimated H-parameter is 0.755 using the AV estimator, and 0.77 using
the R/S plot test for the B2C site. In the B2B space, the load
on the system did not have a high degree of self-similarity.
The H-parameter is estimated to be 0.66 using both the AV-
estimator and the R/S plot test. Due to a balanced load on
the B2B system throughout the duration, the degree of self-similarity
is very low. The effect of the arrival process is not
seen in the overall load sustained by the B2B server.
A higher H-parameter implies an increased degree of self-
similarity. Utilization is a factor of the response-time and
the arrival process. The inherent burstiness in the arrival
process is already established in the previous section. The
higher H value can be attributed to long-range dependence
in the service process.
4.2.3 Response Time
In
Figure
8, the response time observed by the users over
the entire day period is shown for the B2C site. Previous
studies [5, 12] have concentrated on the study of the heavy-tailed
behavior of web response times. In this work the
response-time distribution is converted into a time-series by
aggregating the response-times seen for non-overlapping intervals
of 5 secs. Even though the times seen are not the
actual response times observed by the user, they can be used
for time-series analysis. Only a multiplicative factor of 1/5
will be required to get the actual response-times. The time-series
obtained is checked for self-similarity and any non-stationary
behavior. The AV test and R/S plot test are used
for estimating the H-parameter.

Figure 8: Aggregate response time at the front-end web server (4P), B2C site (5-second buckets).

As explained earlier, a good estimate of the H-parameter is
obtained using the R/S test only when the time-series is
stationary. So both tests are used for estimating
the H-parameter.
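The aggregation into non-overlapping buckets can be reproduced from a request log with a few lines; the interface below (timestamps plus per-request values) is an assumption about the log format, not the actual schema of the access logs used here.

```python
import numpy as np

def bucket_sum(timestamps, values, width=5.0):
    """Sum per-request `values` (e.g. response times) over non-overlapping
    `width`-second buckets, producing the aggregated time series."""
    t = np.asarray(timestamps, dtype=float)
    v = np.asarray(values, dtype=float)
    edges = np.arange(t.min(), t.max() + width, width)
    idx = np.digitize(t, edges) - 1
    out = np.zeros(len(edges))
    np.add.at(out, idx, v)
    return out
```

Passing `values=np.ones_like(timestamps)` instead counts arrivals per bucket, which is how arrival-rate series like those in Figures 4 and 5 can be formed.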
Response time is one of the very important performance
metrics in the design and analysis of any server system.
High burstiness in the arrival traffic implies saturating server
queues, leading to high response times. Studies have shown
that the 90th percentile response-times can be used for predicting
the mean response-time [2]. This measure cannot
be used in presence of high burstiness in the response-time
distribution. Figure 8 shows response times orders of magnitude
higher during the high load periods in the evening.
Comparing this graph with the arrival process shown in Figure
4, an unmistakable correlation can be found between the
different load periods. Even though the utilization of the
system is not noticeably affected, buffer queue lengths increase
thereby increasing the user perceived response times. Increased
burstiness impacts the overall response time of the
system to a higher extent than the arrival process. This
burstiness in the response time is a factor of the back-end
data retrieval time and the server processing time.
4.2.4 Request/Response File Sizes
The request and response file sizes in web environment [5, 9]
have been studied previously. It was observed that these distributions
show a heavy-tailed behavior [5]. This was considered one of the main reasons for
the heavy-tailed behavior of the web response times. In
the e-commerce environment, it has already been shown that
transfer times have a heavy-tailed behavior. In
this section the behavior of the transfer size distribution is studied.
Figures 9 and 10 show the request and response size
distribution over the observation period at the B2C server.
It can be observed that the distribution of transfer sizes is
fairly constant in the B2C environment. A visual inspection
rules out the possibility of heavy burstiness in the aggregated
time-series obtained from the transfer sizes. The distribution
of request sizes is further investigated for heavy-tailed behavior
using Log-Log Cumulative Distribution (LLCD) plots [5].

Figure 9: Request size distribution over time, B2C site.

Figure 10: Response size distribution over time, B2C site.

Figure 11 shows the log-scale plot of the cumulative
probability function over the different request sizes
observed. The plot appears linear for x > 2.5. A linear-regression
fit to the points for requests of more than 320 bytes
gives a line with slope -4.12. This gives
an estimate of α = 4.12, thereby indicating that the request
size distribution is not heavy-tailed in nature. This result refutes
the previous results about web traffic. In [5] the authors
found that request sizes also follow a heavy-tailed distribution.
Using similar tests, we also infer that the
response file sizes do not follow a heavy-tailed distribution.
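The LLCD-based check can be sketched as follows (the 320-byte cutoff mirrors the text; everything else is an illustrative choice): the empirical complementary CDF is put on log10 axes and a least-squares line is fit to the tail, whose slope estimates $-\alpha$.

```python
import numpy as np

def tail_index_llcd(sizes, cutoff=320.0):
    """Estimate the tail index alpha from a log-log complementary CDF:
    log10 P(X > x) is approximately -alpha * log10 x above the cutoff."""
    x = np.sort(np.asarray(sizes, dtype=float))
    ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)   # empirical P(X > x_i)
    keep = (x >= cutoff) & (ccdf > 0)
    slope, _ = np.polyfit(np.log10(x[keep]), np.log10(ccdf[keep]), 1)
    return -slope
```

An estimate well above 2, such as the 4.12 reported above, indicates a finite-variance distribution rather than a heavy tail in the sense usually used in this literature ($\alpha < 2$).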
4.3 Performance Implications
Previous studies on web traffic and LAN traffic have attributed
the self-similar behavior of network traffic to the
aggregation of long-range dependent ON/OFF processes.
In e-commerce space, the response-times are found to be
heavy-tailed in nature even though the request and response
file sizes are almost a constant. The heavy-tailed behavior
of response-times in web environment was believed to
be caused by the heavy-tailed behavior of the file transfer
sizes in the web environment. In e-commerce environment,
the transfer sizes do not follow a heavy-tailed distribution
as shown earlier in this section. Heavy-tailed behavior of
web transfer sizes are fundamentally caused by the inclusion
of image and video files in the overall traffic. Since
these files are minimized in e-commerce environment (for
reducing the overhead in response times), the behavior of
the transfer sizes becomes somewhat intuitive. The lack of
large image and video files removes the heavy-tailed nature
of e-commerce traffic.
It is observed that the response time still shows a heavy-tailed
behavior in both B2C and B2B space. As explained
earlier this implies that the user perceived response-time can
increase by orders of magnitude under load conditions.

Figure 11: LLCD plot of the request size distribution, B2C site (x-axis: log10 of request file size).

Due
to the critical nature of e-commerce applications and also the
business model (increasing criticality with the increase in
load), it is imperative that the response-times are kept under
normal bounds even in high load conditions. In e-commerce
environment response-time is mainly dependent on the processing
time and the transfer time. Since the file-sizes do
not follow a heavy-tailed distribution, it can be safely assumed
that the transfer time does not contribute to the variation
in the response-time. This shows that the characteristic
of the processing time is affecting the response-time to a
higher extent than the response size. Also the effect of file-
sizes appears to be negligible on the end-end response-times
observed. This result contradicts the behavior of response-times
for normal web traffic where the response-size of files
can be assumed as a good approximation of the response-
time. The difference is that in the web environment the transfer
time consumes the major portion of the response-time, which is
not the case in the e-commerce environment due to the different
composition of requests.
4.4 Back-End Characterization
The most important and sensitive information in e-commerce
servers is kept in the back-end servers. It is the
back-end servers that execute the business logic for the e-commerce
site and are hence the most crucial components
of any e-commerce server. The parameters used for doing
the characterization depend mostly on the configuration of
the site and the purpose of the individual components [13]
in the back-end. The composition of back-end servers is
closely dictated by the business model of the site. So different
parameters might be interesting for different sites. In
this study processor utilization and disk accesses are used
for studying the characteristics of the two sites.
In the B2C site there are four different servers at the back-
end. These are: Main database server, Customer database
server, Image server, and LDAP server. The image server
and LDAP server are not heavily loaded during the observation
period. There is a single burst of traffic to and from
these servers when the data is updated daily. This burst is
also seen in other back-end databases and will be discussed
in detail later in this chapter. The only servers that experience
a sustained load throughout the day are the customer
database and the main database. These two servers are used
for studying the characteristics of the back-end system.
Figure 12: Processor utilization (4P) of the Catalog Server (5-second samples, from 9:33:13).

Figure 13: Processor utilization of the Main D/B server (5-second samples, from 9:33:13), B2B site.
4.4.1 Processor Utilization
In
Figures
12 and 13 the processor utilization of the two
back-end servers in the B2C site is shown. It can be observed
that the back-end server experiences a sustained load
of 10% on average over the entire period. There is a visible
peak of almost 100% utilization of the catalog server.
This will be discussed later in the section. For the Main D/B
server, the utilization remains at around 30% for most of the
observation period. This shows that the load on back-end
servers is higher than on the front-end servers, when compared
with figure 6. Previous studies have speculated that
the load on the back-end servers is more regulated due to the
presence of the front-end server. One of the reasons for this
speculation is the service time of the front-end server. This
either causes a delay or reduces peak of any burst reaching
the back-end servers. This behavior of the back-end servers
is investigated by looking at the time-series obtained from
the utilization of the servers.
H-parameter values of 0.87 and 0.77 were obtained for
the utilization of the main database server and the catalog
server respectively. The burstiness observed at the back-end
servers is thus higher than at the front-end servers. Similar
results have been observed in the B2B space also. The
utilization of the database server of the B2B site is shown
in Figure 14. It can be observed that the load on the system
reaches 100% around the 4000th bucket. This is due
to the update activity which takes place periodically in most
e-commerce sites. The actual time when this takes place is
around 1:00 at night. Similar activity can be seen
in the other back-end servers.
Figure 14: Processor utilization of the B2B Database server (1-second buckets)
Figure 15: File operations per second from the Main DB server (5 secs)
Nothing can be observed at
the front-end servers, as the bulk of the data which needs
any maintenance is present in the back-end servers only.
The back-end server in B2B space is also found to be more
bursty than the front-end server. This contradicts previous
assumptions about burstiness at the back-end servers in web
environment.
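The H-parameter values reported here were obtained with the AV-estimator and the R/S plot
(see the conclusion). As a rough illustration of how such an estimate can be computed from a
utilization time-series, the following Python sketch implements a simple rescaled-range (R/S)
estimator; the function names and the synthetic input are assumptions made for illustration,
not the exact estimator applied to the measured traces.

import numpy as np

def rescaled_range(x):
    # R/S statistic: range of the cumulative mean-adjusted sums over the std. deviation.
    x = np.asarray(x, dtype=float)
    z = np.cumsum(x - x.mean())
    s = x.std()
    return (z.max() - z.min()) / s if s > 0 else np.nan

def hurst_rs(series, min_chunk=8):
    # Estimate H as the slope of log(R/S) versus log(n) over doubling block sizes n.
    series = np.asarray(series, dtype=float)
    sizes, rs_values = [], []
    n = min_chunk
    while n <= len(series) // 2:
        chunks = [series[i:i + n] for i in range(0, len(series) - n + 1, n)]
        sizes.append(n)
        rs_values.append(np.nanmean([rescaled_range(c) for c in chunks]))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_values), 1)
    return slope

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # White noise gives H close to 0.5; a bursty, long-range dependent
    # utilization trace would give H closer to the 0.77-0.87 values reported above.
    print(round(hurst_rs(rng.normal(size=4096)), 2))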
4.4.2 Disk Accesses
The B2C site has four disks for the Main D/B system. Disk
accesses are used for the study instead of disk utilization.
Reliable data could not be obtained for the disk utilization
due to the presence of a cluster of four disks.
Figure 15 shows the distribution of the file request rate
at the Main D/B server. This shows the arrival rate of file
requests seen by the four hard disks.
Figure 16: Average queue length at the Main DB server (5 secs)
Figure 16 shows the
average queue length seen by the hard disks at the Main
D/B server. The average queue length is found to be self-similar
in nature. This would result in a
heavy-tailed behavior in the average response-time of the
hard disk. The reason for the burstiness in the queue length
can be attributed to the arrival of file transfers at the hard
disk. This rate is also found to be bursty in nature, with an
H-parameter of 0.83. The buffer cache does not appear to be effective
since the hard disk is experiencing requests at this level of
burstiness.
In the previous section, the response-time at the front-end
is found to be heavy-tailed in nature even though the request
and response size did not follow this distribution. The burstiness
in the service time at the back-end was attributed to this
behavior. Here it can be seen that the heavy-tailed distribution
of response-time at the back-end is due to the bursty
arrival process to the hard disks, causing the queue length
to be bursty. This high burstiness in queue length will remove
the effect file sizes may have on the transfer times.
This conclusion also supports the previous speculation that
file-sizes were not a good representation of response-times
in e-commerce environment.
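The heavy-tail claims above are typically checked by plotting the complementary CDF of the
response-times on log-log axes, where a heavy tail shows up as a roughly straight line. The
short Python sketch below illustrates this kind of check; the synthetic Pareto sample stands
in for measured response-times and is purely an assumption for illustration.

import numpy as np

def loglog_ccdf(samples):
    # Return (sorted values, empirical P[X > x]) for a log-log CCDF plot.
    x = np.sort(np.asarray(samples, dtype=float))
    ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)
    return x, ccdf

def tail_index(samples, tail_fraction=0.1):
    # Crude Hill-style estimate of the tail index from the largest samples
    # (smaller index means a heavier tail).
    x = np.sort(np.asarray(samples, dtype=float))
    k = max(2, int(len(x) * tail_fraction))
    tail = x[-k:]
    return 1.0 / np.mean(np.log(tail / tail[0]))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    response_times = rng.pareto(1.2, size=20000) + 1.0  # heavy-tailed stand-in data
    xs, cc = loglog_ccdf(response_times)
    print("estimated tail index:", round(tail_index(response_times), 2))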
5 Conclusion
Aggregated traffic arriving at B2B and B2C e-commerce
servers is characterized in this paper. Access logs from the
web servers were collected for application-level information,
Microsoft performance logs were collected for system-level
information, and processor counters were collected for architectural
information. This data was used
to understand the load behavior of the traffic for a normal
weekday. Only a specific set of parameters (arrival process,
utilization, response-time, transfer sizes, etc.), those which
would impact the system to the greatest extent, was used
for characterization of the workload.
Self-similar nature of the traffic was established using
Hurst-parameter as a measure of degree of self-similarity.
Two different tests were used for measuring the Hurst-
parameter: AV-estimator and the R/S plot. It was observed
that the load behavior of the two sites was complementary
in nature with traffic load shifting from one type of e-commerce
site to the other during the later part of the day.
Unlike previous speculation, the back-end server was found
to be more bursty than the front-end server; this was attributed to
the fractal nature of the service time at the back-end. At both
sites, the response-times were found to be heavy-tailed
in nature, complying with the results found in the web environ-
ment. But in the B2C environment, highly bursty arrival of
file requests was seen at the disks. It was found that this
arrival process is causing high queuing delays at the disk reducing
the impact of disk transfer time as compared to the
queuing time. This increased the burstiness in the overall
response-time seen at the front-end server.
This work provides an understanding of the complexity
of the traffic arriving at e-commerce sites while providing a
preliminary workload characterization.
--R
"Predicting the performance of an e-commerce server: Those mean percentiles,"
"An admission control scheme for predictable server response time for web ac- cesses,"
"Cisco and microsoft e-commerce framework architec- ture."
"Self-similarity in world-wide traffic : Evidence and possible causes,"
Introduction to computer system performance eval- uation
"Server capacity planning for web traffic workload,"
"Resource management policies for e-commerce servers,"
"Web server workload characterization: The search for invariants,"
"Generating representative web workloads for network and server performance evaluation,"
"A wavelet based joint estimator for the parameters of lrd,"
"On multimedia networks: Self-similar traffic and network performance,"
Capacity Planning for Web Performance: Metrics
--TR
--CTR
Lance Titchkosky , Martin Arlitt , Carey Williamson, A performance comparison of dynamic Web technologies, ACM SIGMETRICS Performance Evaluation Review, v.31 n.3, p.2-11, December | business-to-consumers B2C;self-similarity and web-servers;bussines-to-business B2B;traffic characterization;e-commerce servers |
608045 | Verifying lossy channel systems has nonprimitive recursive complexity. | Lossy channel systems are systems of finite state automata that communicate via unreliable unbounded fifo channels. It is known that reachability, termination and a few other verification problems are decidable for these systems. In this article we show that these problems cannot be solved in primitive recursive time. | Introduction
Channel systems, also called Finite State Communicating Machines, are systems of finite state
automata that communicate via asynchronous unbounded fifo channels [Boc78, BZ83]. Figure
1 displays an example, where the labels c!x and c?x mean that message x (a letter) is sent
to (respectively read from) channel c.
Figure 1: A channel system with two automata and two channels
Channel systems are a natural model for asynchronous
communication protocols and constitute the semantical basis for ISO protocol specification
languages such as SDL and Estelle. Channel systems are Turing powerful, and no verification
method for them can be general and fully algorithmic.
A few years ago, Abdulla and Jonsson identified lossy channel systems as a very interesting
model: in lossy channel systems messages can be lost while they are in transit, without
any notification. These systems are very close to the completely specified protocols independently
introduced by Finkel, and for which he showed the decidability of termination.
Abdulla and Jonsson showed that reachability, safety properties over traces, and eventuality
properties over states are decidable for lossy channel systems. The decidability results
of [Fin94, CFP96, AJ96b] are fundamental since lossy systems are the natural model for
fault-tolerant protocols where the communication channels are not supposed to be reliable
(see [AKP97, ABJ98, AAB99] for applications).
For lossy channel systems, the aforementioned decidability results lead to algorithms whose
termination relies on Higman's Lemma (see [ACJT00, FS01] for more examples of this
phenomenon). No complexity bound is known and, e.g., Abdulla and Jonsson stated in [AJ96b]
that they could not evaluate the cost of their algorithm.
In this article we show that all the above-mentioned decidable problems have nonprimitive
recursive complexity, i.e., cannot be solved by algorithms with running time bounded by a
primitive recursive function of their input size. This puts these problems among the hardest
decidable problems.
Our proof relies on a simple construction showing how lossy channel systems can weakly
compute some fast growing number-theoretic functions A_2, A_3, ... related to Ackermann's
function, and their inverses A_n^{-1}. By "weakly computing f" we mean that, starting
from x, all values between 0 and f(x) can be obtained. This notion was used by Rabin in his
proof that equality of the reachability sets of Petri nets is undecidable, a proof based on the
weak computability of multivariate polynomials (see [Hac76]). Petri nets can weakly compute
the A_n functions (see [MM81]) but they cannot weakly compute their inverses A_n^{-1} as lossy
channel systems can do.
There exist other families of systems that can weakly compute both A_n and A_n^{-1}: the
lossy counter machines of [May00], or the reset nets of [DFS98, DJS99]. Our construction can
easily be adapted to show that, for these systems too, decidable problems like termination,
control-state reachability, etc., are nonprimitive recursive.
Finally, let us observe that there do not exist many uncontrived problems that have been
shown decidable but not primitive recursive. In the field of verification, we are only aware
of one instance: the "finite equivalence problem for Petri nets" 1 introduced by Mayr and
Meyer [MM81]. This problem is, given two Petri nets, to decide whether they both have the
same set of reachable markings and this set is finite (equivalence is undecidable without the
finiteness assumption). It can be argued that the verification (termination or reachability) of
lossy channel systems is a less contrived problem.
1 See [Jan01] for a more general proof that all finite equivalence problems for Petri nets are nonprimitive recursive.
2 Channel systems, from perfect to lossy
A channel system usually combines several finite-state automata that communicate through
several channels. Here, and without loss of generality, we assume our systems only have one
automaton that uses its several channels as fifo buffers.
Formally, a channel system is a tuple S = (Q, C, Sigma, Delta) where Q = {q1, q2, ...} is a finite set of
control states, C = {c1, ..., ck} is a finite set of channels, Sigma = {a, b, ...} is a finite alphabet
of messages, and Delta, a subset of Q x C x {?, !} x Sigma x Q, is a finite set of transition rules (see below).
A configuration of S is a tuple sigma = (q, w1, ..., wk) denoting that control is currently in
state q, while channels c1 to ck contain words w1, ..., wk (from Sigma*).
The transition rules in Delta state how S can move from a configuration to another. Formally,
S has a "perfect" step sigma ->perf sigma' if and only if (1) sigma is some (q, w1, ..., wk), (2) sigma'
is some (q', w'1, ..., w'k) with w'j = wj for all j different from i, and (3.1) there is a rule
(q, ci, !, a, q') in Delta and w'i = wi.a (a
has been written to the tail of ci) or (3.2) there is a rule (q, ci, ?, a, q') in Delta and wi = a.w'i (a
has been read from the head of ci). These steps are called perfect because no message is lost.
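To make the step semantics concrete, here is a small Python sketch that enumerates perfect
successors of a configuration for rules of the form (q, c, op, a, q'). The concrete data
representation (strings for channel contents, a dictionary per configuration) is an illustrative
assumption, not taken from [BZ83] or from any particular tool.

def perfect_steps(config, rules):
    # config = (q, {channel: word}); rules contain (q, channel, op, letter, q') with op in {'!', '?'}.
    q, channels = config
    for (q0, c, op, a, q1) in rules:
        if q0 != q:
            continue
        w = channels[c]
        if op == '!':                        # write a to the tail of c
            new = dict(channels); new[c] = w + a
            yield (q1, new)
        elif op == '?' and w.startswith(a):  # read a from the head of c
            new = dict(channels); new[c] = w[1:]
            yield (q1, new)

if __name__ == "__main__":
    rules = [('p', 'c1', '!', 'a', 'q'), ('q', 'c1', '?', 'a', 'p')]
    start = ('p', {'c1': '', 'c2': ''})
    for succ in perfect_steps(start, rules):
        print(succ)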
It is well known that, assuming perfect steps, channel systems can faithfully simulate
Turing machines in quadratic time [BZ83] (a single channel is enough to replace a Turing
machine work tape; reading and writing in the middle of the channel requires rotating the
content of the channel for positioning reasons, hence the quadratic overhead). Thus all
interesting verification problems are undecidable for systems with perfect channels, even when
restricted to single-channel systems.
2.1 Lossy systems
The most elegant and convenient way to model lossy channel systems is to see them as channel
systems with an altered notion of steps [AJ96b].
We write u <= v when u is a subword of v, i.e. u can be obtained by deleting any number
(including 0) of letters from v. E.g. abba <= abracadabra, as indicated by the underlining. The
subword ordering extends to configurations: (q, w1, ..., wk) <= (q', w'1, ..., w'k) iff q = q' and
wi <= w'i for all i. By Higman's Lemma [Hig52], this gives a well-quasi-ordering:
Lemma 2.1 Every infinite sequence sigma_0, sigma_1, sigma_2, ... of configurations contains an infinite increasing
subsequence sigma_{i0} <= sigma_{i1} <= sigma_{i2} <= ... (with i0 < i1 < i2 < ...).
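A one-line way to see the subword ordering at work: the following Python sketch checks whether
u is a subword of v with the standard greedy single scan of v. It is an illustrative helper
written for this text, not code from the paper.

def is_subword(u, v):
    # True iff u can be obtained from v by deleting letters.
    it = iter(v)
    return all(ch in it for ch in u)

if __name__ == "__main__":
    print(is_subword("abba", "abracadabra"))  # True
    print(is_subword("abc", "acb"))           # False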
When sigma' <= sigma we write sigma ~> sigma' and say that S may evolve from sigma to sigma' by losing messages.
The steps of a lossy channel system are all sigma -> sigma' such that sigma ~> tau ->perf tau' ~> sigma' for some
configurations tau, tau' (i.e. losses may occur before and after a perfect step is performed).
Note that a perfect step is a special case of a lossy step. A run is a sequence sigma_0 -> sigma_1 -> sigma_2 -> ...
of chained lossy steps. A perfect run is a run that uses perfect steps only. We use sigma ->* sigma'
and sigma ->*perf sigma' to denote the existence of a finite run (resp. perfect run) that goes from sigma to sigma'.
We are interested in the following two problems:
Termination: Given a channel system S and an initial configuration sigma_0, are all runs from sigma_0 finite?
Reachability: Given a channel system S and two configurations sigma_0 and sigma_f, is there a run
from sigma_0 to sigma_f?
Theorem 2.2 [Fin94, AJ96b]. Termination and reachability are decidable for lossy channel
systems.
In the remainder of this note we show
Theorem 2.3 Termination and reachability for lossy channel systems have nonprimitive recursive
complexity.
Theorem 2.3 also applies to the other verification problems that are known decidable for
lossy channel systems. Indeed, termination is an instance of inevitability (shown decidable
in [AJ96b]). Reachability is easily reduced to control-state reachability (shown decidable
in [AJ96b]). Finally, termination can be reduced to simulation with a finite-state system 2
(shown decidable in [AK95, ACJT00]). Thus we are entitled to claim that "verifying lossy
channel systems has nonprimitive recursive complexity". Note that there exist many undecidable
problems for lossy channel systems [AK95, CFP96, AJ96a, May00, ABPJ00, Sch01].
3 The main construction
3.1 Ackermann's function
Let A_2, A_3, A_4, ... be the following sequence of functions over the natural numbers:
A_2(k) = 2k,   (1)
A_{n+1}(k) = A_n(A_n(... A_n(1) ...))  (k applications of A_n).   (2)
Thus A_2 is followed by
A_3(k) = 2^k,   (3)
A_4(k) = 2^(2^(...^2))  (a tower of k 2's),   (4)
and so on. The A_n's are monotonic and expansive in the following sense: for any n >= 2 and
k <= k': A_n(k) <= A_n(k') and k <= A_n(k).   (5)
We define inverse functions by
A_n^{-1}(m) = k  iff  A_n(k) = m.   (6)
Observe that the A_n^{-1}'s are partial functions. Another way to understand these functions is
to notice that
A_{n+1}^{-1}(m) = k  iff  A_n^{-1}(A_n^{-1}(... A_n^{-1}(m) ...)) = 1  (k applications of A_n^{-1}).   (7)
There exist many versions of Ackermann's function. One possible definition is Ack(n) =
A_n(2). It is well-known that Ack(n) dominates any primitive recursive function of n. Thus it
follows from classical complexity-theoretic results that halting problems for Turing machines
running in time or space bounded by Ack(n) (n being the size of the input) cannot be decided
in primitive recursive time or space.
2 S terminates iff it is not simulation-equivalent with a simple loop.
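As a quick sanity check of the growth rates discussed above, the following Python sketch
computes small values of functions of this shape. It assumes the base case A_2(k) = 2k and
k-fold iteration starting from 1, which matches the tower-of-2's description, but it should be
read as an illustration of the growth behavior rather than as the paper's exact definition.

import sys
sys.setrecursionlimit(10000)

def A(n, k):
    # Assumed definition: A_2(k) = 2k; A_{n+1}(k) = A_n applied k times to 1.
    if n == 2:
        return 2 * k
    x = 1
    for _ in range(k):
        x = A(n - 1, x)
    return x

def A_inv(n, m):
    # Partial inverse: the k with A_n(k) = m, or None if no such k exists.
    k = 0
    while A(n, k) < m:
        k += 1
    return k if A(n, k) == m else None

if __name__ == "__main__":
    print([A(3, k) for k in range(5)])   # 1, 2, 4, 8, 16
    print(A(4, 3))                       # 16, a small tower of 2's
    print(A_inv(3, 16))                  # 4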
3.2 Weakly computing A n with expanders
We construct a family E_2, E_3, ... of expanders, channel systems that weakly compute A_2(k),
A_3(k), ... As illustrated in Fig. 2, E_n, the nth expander, uses n channels: c_1 is the "output
Figure 2: Interface for expander E_n (channels c_n, c_{n-1}, ..., c_1)
channel" (in which the system will write the result A_n(k)), c_n is the "input channel" (from
which the argument k is read), and channels c_2 to c_{n-1} are used to store auxiliary results.
E_n has one starting and one ending state, called s_n and f_n respectively.
These systems use a simple encoding for numbers: a natural number k is encoded as a string,
written dke, over the alphabet {b, 1, e}. Formally, dke = b 1^k e is made of k letters "1" surrounded by
one "b"egin and one "e"nd marker. For example, the channels in Fig. 2 contain respectively
d0e, ..., d0e, and d4e.
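The number encoding used by the expanders and folders is easy to mirror in code. The sketch
below writes and reads the b 1^k e strings used on the channels; it is an illustrative helper,
with function names chosen here rather than taken from the paper.

def encode(k):
    # Encode the natural number k as the channel word b 1^k e (written dke above).
    return "b" + "1" * k + "e"

def decode(word):
    # Return k if word is a valid encoding, otherwise None
    # (e.g. after a lost 'b' or 'e' marker the word is no longer decodable).
    if len(word) >= 2 and word[0] == "b" and word[-1] == "e" and set(word[1:-1]) <= {"1"}:
        return len(word) - 2
    return None

if __name__ == "__main__":
    print(encode(4))          # b1111e
    print(decode("b1111e"))   # 4
    print(decode("b111"))     # None: the end marker was lost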
Before explaining the construction of the expanders, we describe the simple transferring
devices T_2, T_3, ... This illustrates the way encodings dke of numbers are used.
The T_n's, defined in Fig. 3, start in state t_n and end in state x_n after they have transferred the
contents of channel c_1 into channel c_n (assuming c_1 contains at least d1e). In Fig. 3 (and in
future constructions) we sometimes omit depicting all intermediate states when their names
are not required in the proof.
Figure 3: T_n, channel system for transferring c_1 into c_n (n > 1)
Formally, the lossy behaviors of T_n can be characterized by:
Proposition 3.1 For all n >= 2,
a 1
Proof. Omitted.
Note that the design of T n ensures that it blocks when c 1 contains d0e. This property is
used in the proof of Lemma 3.4.
We now move to the expanders themselves. The internals of E n are given in Fig. 4.
Figure 4: Expanders E_n
For n > 2, E_n uses E_{n-1} and T_{n-1} as subsystems. This is because E_n implements
equation (2): every time a 1 is consumed from c_n, E_{n-1} is run and the result is transferred back
(by T_{n-1}) so that E_{n-1} can be applied again.
It is easy to convince oneself that the perfect (non-lossy) behavior of E_n is to compute
A_n in the following formal sense:
(s_n, d0e, ..., d0e, dke) ->*perf (f_n, w_1, ..., w_n)  iff  w_1 = dA_n(k)e and w_2 = ... = w_n = d0e.   (9)
Indeed, for n > 2 a perfect run has the following form (where configurations are displayed in
vector form):
(s_n, d0e, ..., d0e, dke) ->perf ... ->perf (f_n, dA_n(k)e, d0e, ..., d0e).
d0e
When perfect behavior is not assumed, E_n still computes A_n, this time in a weak sense,
according to the following statement:
Proposition 3.2 For all n >= 2,
(s_n, d0e, ..., d0e, dke) ->* (f_n, da_1e, da_2e, ..., da_ne)  iff  a_1 <= A_n(k) and a_2 = ... = a_n = 0.   (10)
Proof. The "if" direction is an easy consequence of (9). A detailed proof of the "only if" direction
can be found in the Appendix, but the underlying idea is simple: First, if some 1s are lost
at any time during the computation, the final result will end up being smaller because of the
monotonicity properties (5). Then, if a b or an e marker is lost, the system can never recover
it and will fail to reach a configuration where all channels contain encodings of numbers.
Note that the difference between (9) and (10) does not only come from the replacement
of a "= A_n(k)" by a "<= A_n(k)": (9) guarantees that the w_i's are encodings of numbers, while (10)
assumes this.
3.3 Weakly computing A_n^{-1} with folders
Folder systems F_2, F_3, ... are channel systems that weakly compute the A_n^{-1}'s. Here channel
c_1 is the input channel and c_n is the output, while channels c_2 to c_{n-1} store auxiliary results.
The definition of the F_n's, given in Fig. 5, is based on (7). It uses transferring systems T'_n:
these systems are variants of the T_n's and move the contents of channel c_n into channel c_1
(instead of the other way around).
When possibly lossy behaviors are considered, F_n weakly computes A_n^{-1} in the following
formal sense:
Figure 5: Folders F_n
Proposition 3.3 For all n >= 2,
Proof. We prove the \)" direction in the Appendix and omit the easier \(" direction.
One additional property of the construction will be useful in the following:
Lemma 3.4 E_n and F_n have no infinite run, regardless of the initial configuration.
Proof. We first deal with the E_n's by induction over n. E_2 terminates since its loop consumes
from c_2. For n > 2, a run of E_n cannot visit q_n infinitely often (this would consume infinitely
from c_n) and, by ind. hyp., cannot contain an infinite subrun of E_{n-1} (obviously, the T_{n-1}
part must terminate too).
For the F_n's, we first observe (again using induction on n) that, if c_1 contains at least one
1, then a run from s'_n to f'_n removes strictly more 1's than it writes back. Finally, traversing
T'_{n-1} must move some 1's to c_1 (or lose them).
4 The hardness results
Expander and folder systems can now be used to prove Theorem 2.3.
4.1 Hardness for reachability
Let's consider a Turing Machine (a TM) M that is started on a blank work tape of size m
(for some m) and that never goes beyond the allocated workspace. One can build a channel
system S that simulates M using as workspace channel c_1, initially filled with b 1^m e. We do
not describe further the construction of S since it follows the standard simulation of TM's by
channel systems (from [BZ83]) 3. Since M never goes beyond the allocated workspace, the
3 And since TM's are not really required and could be replaced by perfect channel systems that preserve
the size of the channel contents.
transition rules of S always write exactly as many messages as they read, so that if no loss
occurs the channel always contains the same number of letters. The resulting system has two
types of runs: perfect runs where M is simulated faithfully, and lossy runs that do not really
simulate M but where messages have been irremediably lost.
Now, in order to know whether M accepts in space m, there only remains to provide S
with enough workspace, and to look at runs where no message is lost. This is exactly what
is done by S^n_M, depicted in Fig. 6, where expander E_n provides a potentially large dme in
channel c_1 and folder F_n is used to check that no message has been lost.
Figure 6: S^n_M, simulating Turing machine M with huge workspace (M is simulated between
states start and accept, using c_1 as bounded workspace)
Thus any run of S^n_M of the form (s_n, ...) ->* (f'_n, ...) is perfect and
must visit both (start, dAck(n)e, ...) and (accept, dAck(n)e, ...). Hence
Proposition 4.1 (s_n, ...) ->* (f'_n, ...) if and only if M accepts in space
Ack(n).
Therefore, since S^n_M has size O(n + |M|), reachability for lossy channel systems is at least as
hard as termination for TM's running in Ackermann space. Hence
Corollary 4.2 Reachability for lossy channel systems has nonprimitive recursive complexity.
4.2 Hardness for termination
The second hardness result uses a slight adaptation of our previous construction, and relies
on the following simulation (Fig. 7).
Here S'^n_M fills two channels with dAck(n)e: c_1 used as before as working space, and c_0
used as a countdown that ensures termination of the simulation of M. Every time one step
of M is simulated, S'^n_M consumes one 1 from c_0. When the accepting state of M is reached,
S'^n_M moves to s'_n where it uses F_n to check that c_1 does contain dAck(n)e (i.e. the simulation
was faithful). If the check succeeds, S'^n_M can enter a loop. Therefore, a run of S'^n_M terminates
when the simulation is not perfect, or when M does not accept in at most Ack(n) steps.
Proposition 4.3 S'^n_M has an infinite run from (s_n, ...) if and only if M accepts in time
Ack(n).
Figure 7: S'^n_M, another simulation of M with huge workspace (M is simulated using c_1 as
bounded workspace and c_0 as bounded time; the gadget between f_n and start duplicates c_1 in c_0)
Proof. Clearly, S'^n_M can only reach the final loop if the simulation is faithful, and halts
in at most Ack(n) steps. There remains to show that the unfaithful, lossy, behaviors are
terminating. Lemma 3.4 deals with the E_n and F_n part of S'^n_M, the duplication gadget (between
f_n and start) obviously terminates, and we solve the problem for the simulation part
by programming it in such a way that the rotation of the tape (necessary for simulating a
TM) cannot induce non-termination. One way 4 to achieve this is to use two copies (one
positive and one negative) of the TM alphabet: in "+" mode, the simulation reads +-letters
and writes back their --twins. Only when an actual TM step is performed does S'^n_M switch
from "+" mode to "-" mode and vice versa. More details can be found in Section 5 where
the same trick is used.
This shows that termination for lossy channel systems is at least as hard as termination
for TM's running in Ackermann time. Hence
Corollary 4.4 Termination for lossy channel systems has nonprimitive recursive complexity.
5 Systems with only one channel
Our construction used several channels for clarity, not out of necessity, and our result still
holds when we restrict ourselves to lossy channel systems with only one channel. This is one
more application of the slogan \lossy systems with k channels can be encoded into lossy systems
with one channel". The encoding given in [AJ96a, Section 4.5] preserves the existence
of runs that visit a given control state innitely often. Below we give another encoding that
further preserves termination and reachability. It uses standard techniques (e.g. from the
study of TM's with k tapes) and the only original aspect is the lossy behavior of our systems.
Consider a system S that uses channels c_1, ..., c_k. We simulate S by a
system S' that uses one single channel c. Without loss of generality we
assume a different subalphabet is used with every channel of S (i.e. Sigma is partitioned in disjoint
alphabets Sigma_1, ..., Sigma_k). The encoding uses a larger alphabet Sigma' where k markers #_1, ..., #_k
have been added, and where every letter comes in two copies (a positive and a negative one).
Formally, Sigma' contains the markers and a pair (x, +), (x, -), shortly x+ and x-, for every x in Sigma. For
x in Sigma_i, an occurrence of some x+ or x- in c means "one x in c_i" (and the polarity is only used
for bookkeeping purposes).
4 An alternative solution would use c_0 as a countdown for channel system steps rather than TM steps, but
Prop. 4.3 would have to be reworded in a clumsy way.
A k-tuple (w_1, ..., w_k) in Sigma_1* x ... x Sigma_k* is encoded as #_1 w_1 #_2 w_2 ... #_k w_k, where the
same polarity is used to label all letters. For example, (ab, epsilon, ddc) is coded as #_1 a+ b+ #_2 #_3 d+ d+ c+
under positive polarity.
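The k-channels-to-one encoding just described can be sketched directly. The Python snippet
below builds the single-channel word from a tuple of channel contents, using "#i" strings as
markers and a "+"/"-" suffix for polarity; this concrete representation is an assumption made
for illustration only.

def encode_channels(words, polarity="+"):
    # Encode a k-tuple of channel contents as one word:
    # #1 w1 #2 w2 ... #k wk, with every letter tagged by the given polarity.
    parts = []
    for i, w in enumerate(words, start=1):
        parts.append(f"#{i}")
        parts.extend(f"{letter}{polarity}" for letter in w)
    return parts

def flip(symbol):
    # Swap the polarity tag of a single encoded letter (markers unchanged).
    if symbol.startswith("#"):
        return symbol
    return symbol[:-1] + ("-" if symbol.endswith("+") else "+")

if __name__ == "__main__":
    word = encode_channels(["ab", "", "ddc"])
    print(word)                    # ['#1', 'a+', 'b+', '#2', '#3', 'd+', 'd+', 'c+']
    print([flip(x) for x in word])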
Fig. 8 shows how two example rules from S (left-hand side) are encoded in S' (right-hand
side).
Figure 8: From S to S': encoding k channels into one (example rules with c!x and c?x)
On this figure, four loops (marked by *) provide for the rotation of the contents of c
(with a change of polarity): they are shorthands for several loops as indicated by the "for all
x" comment. A state q from S gives rise to two copies q+ and q- in S': q+ reads positive
letters and writes negative letters, thus preventing non-termination induced by the rotation
loops, q- does the converse, and S' changes polarity each time a step of S has been simulated.
Now, if W and V are encodings of (w_1, ..., w_k) and (v_1, ..., v_k), then
(q, w_1, ..., w_k) ->* (q', v_1, ..., v_k) in S  iff  (q+, W) ->* (q'+, V) or (q+, W) ->* (q'-, V) in S'.   (11)
This extends to lossy behaviors:
S terminates from (q, w_1, ..., w_k)  iff  S' terminates from (q+, W).   (12)
Note that these equivalences only hold for W and V that are correct encodings of k-tuples,
and for q, q' that are original states from Q. S' has behaviors that do not correspond to
behaviors of S in the sense of (12), e.g. when it gets blocked in new states, or when it loses
one of the #_i markers.
Corollary 5.1 Reachability and termination for lossy single-channel systems have nonprimitive
recursive complexity.
6 Conclusion
There exist several constructions in the literature where a problem P is shown undecidable
for lossy channel systems by simulating a Turing machine in such a way that the faithfulness
of the simulation can be ensured, or checked, or rewarded in some way. Our construction
uses similar tricks since it first builds a nonprimitive recursive number of messages, and later
checks that none has been lost. Still, there are a number of new aspects in our construction,
and this explains why it is the first complexity result for decidable problems on lossy channel
systems.
Acknowledgements
We are grateful to Alain Finkel who brought this topic to our attention, to Petr Jancar who
suggested major improvements on an earlier draft, and to the anonymous referee who helped
us improve section 5.
A
Appendix
Proof of Prop. 3.2
The difficulty with the "only if" direction of Prop. 3.2 is that we have to consider lossy behaviors
that need not respect the logic of the E n systems we designed. However, when we restrict our
attention to behaviors that do not lose the b and e markers, managing the problem becomes
feasible.
We start with the following lemma:
Lemma A.1 If a run hs
is such that every w i
contains one b and one e, then every w i has the form b1 e, i.e. encodes a number.
The same holds for a run ht
Proof. Since our systems always write a b or a e after they consumed one, saying that every
contains one b and one e means that losses did not concern the markers. Therefore, it
only remains to prove that the pattern b1 e is respected, even when some 1's have been lost.
has to consume all of da 1 e and da 2 e and what it writes in c 1 and c 2 encodes a number
even after losses. The same reasoning applies to the channels c 1 and c n of T n , while the other
channels are untouched (and losses there respect the b1 e pattern).
For proceed by induction over n. For c n , observe that all of a n is
consumed and replaced by d0e. For the other channels (c 1 to c n 1 ), they all contain a number
when the run rst reaches q n and, since the lemma holds for
this remains the case every time the run revisits q n , until it eventually reaches f n . Then all
channels contain encodings of numbers.
We are now ready to prove that
a 1 A n (k); and
by induction over n. The base case is left to the reader: a simple inspection of E 2
shows it weakly computes A 2
For n > 2 we consider a run
and isolate the congurations where (14) visits q n and f n 1 by writing it under the form
Since the b and e markers are not lost in (14), we can state that all w i
n have the formk i eb (resp. 1
eb). Since the transition leaving q n for t n 1 consumes one 1 from c n , one sees
that
implying m k. Finally a can only be reached by consuming e from c n ) and
this concludes the proof for the part that concerns c n .
When we consider the other channels (c 1 to c n 1 ) the proof of Lemma A.1 shows that,
i all are encodings of numbers.
Furthermore, they satisfy
| {z }
times
(1)e and w j
2:
as we prove by induction on j. The base case, (W 0 ) is a consequence of the assumption (14).
Then one shows that (W j ) entails (V j+1 ) using Prop. 3.1. Finally, one proves that
using
One concludes the proof of (H n ) by observing that m k and (Wm ) entail the right hand
side of (H n ) because of the monotonicity and expansion properties of A n stated in (5).
Appendix
Proof of Prop. 3.3
For the \)" direction, we proceed as with the proof of Prop. 3.2, and start with a result
mimicking Lemma A.1:
Lemma B.1 If a run hs 0
is such that every w i
contains one b and one e, then every w i encodes a number.
Proof. Omitted.
We now prove
by induction over n. The base is easy to see. For n > 2, we consider a run
isolate the congurations where it visits
writing it under the form
We start with the contents of c n : the w i
n have the form eb1 k i and, resp., eb1 k 0
. The
transition from q 0
n to s 0
one 1 to c n , so that, for
l a n we deduce l a n .
When we consider the contents of the other channels, the w j
's and v j
's encode numbers for
Using (H 0
shows, by induction on i, that w j
so that a
writing
so that, by monotonicity of A n 1 ,
| {z }
l times
(because nally leaving q 0
consumes d1e from c n 1
entail that A n (a n ) m, completing the proof.
--R
Symbolic verification of lossy channel systems.
Reasoning about probabilistic lossy channel systems.
Undecidable verification problems for programs with unreliable channels.
Verifying programs with unreliable channels.
Decidability of simulation and bisimulation between lossy channel systems and finite state systems.
An improved search strategy for lossy channel systems.
Finite state description of communication protocols.
Reset nets between decidability and undecidability.
Decidability of the termination problem for completely specified protocols.
Well structured transition systems everywhere!
The equality problem for vector addition systems is undecidable.
Ordering by divisibility in abstract algebras.
Undecidable problems in unreliable computations.
The complexity of the finite containment problem for Petri nets.
Bisimulation and other undecidable equivalences for lossy channel systems.
--TR
Unreliable channels are easier to verify than perfect channels
Undecidable verification problems for programs with unreliable channels
The Complexity of the Finite Containment Problem for Petri Nets
On Communicating Finite-State Machines
Algorithmic analysis of programs with well quasi-ordered domains
Nonprimitive recursive complexity and undecidability for Petri net equivalence
Well-structured transition systems everywhere!
Bisimulation and Other Undecidable Equivalences for Lossy Channel Systems
An Improved Search Strategy for Lossy Channel Systems
Boundedness of Reset P/T Nets
Reset Nets Between Decidability and Undecidability
Undecidable Problems in Unreliable Computations
Symbolic Verification of Lossy Channel Systems
Reasoning about Probabilistic Lossy Channel Systems
Decidability of Simulation and Bisimulation between Lossy Channel Systems and Finite State Systems (Extended Abstract)
On-the-Fly Analysis of Systems with Unbounded, Lossy FIFO Channels
--CTR
Giorgio Delzanno, Constraint-based automatic verification of abstract models of multithreaded programs, Theory and Practice of Logic Programming, v.7 n.1-2, p.67-91, January 2007
Alexander Rabinovich, Quantitative analysis of probabilistic lossy channel systems, Information and Computation, v.204 n.5, p.713-740, May 2006
P. A. Abdulla , N. Bertrand , A. Rabinovich , Ph. Schnoebelen, Verification of probabilistic systems with faulty communication, Information and Computation, v.202 n.2, p.141-165, 1 November, 2005
Blaise Genest , Dietrich Kuske , Anca Muscholl, A Kleene theorem and model checking algorithms for existentially bounded communicating automata, Information and Computation, v.204 n.6, p.920-956, June 2006
Roberto M. Amadio , Charles Meyssonnier, On decidability of the control reachability problem in the asynchronous -calculus, Nordic Journal of Computing, v.9 n.2, p.70-101, Summer 2002
Grard Cc , Alain Finkel, Verification of programs with half-duplex communication, Information and Computation, v.202 n.2, p.166-190, 1 November, 2005
Antonn Kuera , Philippe Schnoebelen, A general approach to comparing infinite-state systems with their finite-state specifications, Theoretical Computer Science, v.358 n.2, p.315-333, 7 August 2006
Antonn Kuera , Petr Janar, Equivalence-checking on infinite-state systems: Techniques and results, Theory and Practice of Logic Programming, v.6 n.3, p.227-264, May 2006 | verification of infinite-state systems;communication protocols;program correctness;formal methods |
608119 | Combining Classifiers with Meta Decision Trees. | The paper introduces meta decision trees (MDTs), a novel method for combining multiple classifiers. Instead of giving a prediction, MDT leaves specify which classifier should be used to obtain a prediction. We present an algorithm for learning MDTs based on the C4.5 algorithm for learning ordinary decision trees (ODTs). An extensive experimental evaluation of the new algorithm is performed on twenty-one data sets, combining classifiers generated by five learning algorithms: two algorithms for learning decision trees, a rule learning algorithm, a nearest neighbor algorithm and a naive Bayes algorithm. In terms of performance, stacking with MDTs combines classifiers better than voting and stacking with ODTs. In addition, the MDTs are much more concise than the ODTs and are thus a step towards comprehensible combination of multiple classifiers. MDTs also perform better than several other approaches to stacking. | Introduction
The task of constructing ensembles of classiers [8] can be broken down into two sub-tasks.
We rst have to generate a diverse set of base-level classiers. Once the base-level classiers
have been generated, the issue of how to combine their predictions arises.
Several approaches to generating base-level classiers are possible. One approach is to
generate classiers by applying dierent learning algorithms (with heterogeneous model
representations) to a single data set (see, e.g., Merz [14]). Another possibility is to apply
a single learning algorithm with dierent parameters settings to a single data set. Finally,
methods like bagging [5] and boosting [9] generate multiple classifiers by applying a single
learning algorithm to different versions of a given data set. Two different methods for manipulating
the data set are used: random sampling with replacement (also called bootstrap
sampling) in bagging and re-weighting of the misclassied training examples in boosting.
Techniques for combining the predictions obtained from the multiple base-level classiers
can be clustered in three combining frameworks: voting (used in bagging and boost-
ing), stacked generalization or stacking [22] and cascading [10]. In voting, each base-level
classier gives a vote for its prediction. The prediction receiving the most votes is the
nal prediction. In stacking, a learning algorithm is used to learn how to combine the
predictions of the base-level classiers. The induced meta-level classier is then used to
obtain the nal prediction from the predictions of the base-level classiers. Cascading
is an iterative process of combining classiers: at each iteration, the training data set is
extended with the predictions obtained in the previous iteration.
The work presented here focuses on combining the predictions of base-level classiers
induced by applying dierent learning algorithms to a single data set. It adopts the stacking
framework, where we have to learn how to combine the base-level classiers. To this end,
it introduces the notion of meta decision trees (MDTs), proposes an algorithm for learning
them and evaluates MDTs in comparison to other methods for combining classiers.
Meta decision trees (MDTs) are a novel method for combining multiple classiers. The
dierence between meta and ordinary decision trees (ODTs) is that MDT leaves specify
which base-level classier should be used, instead of predicting the class value directly. The
attributes used by MDTs are derived from the class probability distributions predicted by
the base-level classiers for a given example. We have developed MLC4.5, a modication of
C4.5 [17], for inducing meta decision trees. MDTs and MLC4.5 are described in Section 3.
The performance of MDTs is evaluated on a collection of twenty-one data sets. MDTs
are used to combine classiers generated by ve base-level learning algorithms: two tree-
learning algorithms C4.5 [17] and LTree [11], the rule-learning algorithm CN2 [7], the
k-nearest neighbor (k-NN) algorithm [20] and a modication of the naive Bayes algorithm
[12]. In the experiments, we compare the performance of stacking with MDTs to the
performance of stacking with ODTs. We also compare MDTs with two voting schemes
and two other stacking approaches. Finally, we compare MDTs to boosting and bagging
of decision trees as state of the art methods for constructing ensembles of classiers.
Section 4 reports on the experimental methodology and results. The experimental
results are analyzed and discussed in Section 5. The presented work is put in the context of
previous work on combining multiple classiers in Section 6. Section 7 presents conclusions
based on the empirical evaluation along with directions for further work.
2 Combining Multiple Classifiers
In this paper, we focus on combining multiple classifiers generated by using different learning
algorithms on a single data set. In the first phase, depicted on the left hand side of Figure 1,
a set of N base-level classifiers C = {C_1, C_2, ..., C_N} is generated by applying the
learning algorithms A_1, A_2, ..., A_N to a single training data set L.
Figure 1: Constructing and using an ensemble of classifiers. Left hand side: generation of
the base-level classifiers C_1, ..., C_N by applying N different learning algorithms A_1, ..., A_N
to a single training data set L. Right hand side: classification of a new example x using
the base-level classifiers in C and the combining method CML.
We assume that each base-level classifier from C predicts a probability distribution over
the possible class values. Thus, the prediction of the base-level classifier C when applied
to example x is a probability distribution vector:
p_C(x) = (p_C(c_1|x), p_C(c_2|x), ..., p_C(c_m|x)),
where {c_1, c_2, ..., c_m} is the set of possible class values and p_C(c_i|x) denotes the probability
that example x belongs to class c_i as estimated (and predicted) by classifier C. The class
c_j with the highest class probability p_C(c_j|x) is predicted by classifier C.
The classification of a new example x at the meta-level is depicted on the right hand
side of Figure 1. First, the N predictions {p_C1(x), p_C2(x), ..., p_CN(x)} of the base-level
classifiers in C on x are generated. The obtained predictions are then combined using
the combining method CML. Different combining methods are used in different combining
frameworks. In the following subsections, the combining frameworks of voting and stacking
are presented.
2.1 Voting
In the voting framework for combining classifiers, the predictions of the base-level classifiers
are combined according to a static voting scheme, which does not change with the training data
set L. The voting scheme remains the same for all different training sets and sets of learning
algorithms (or base-level classifiers).
The simplest voting scheme is the plurality vote. According to this voting scheme, each
base-level classifier casts a vote for its prediction. The example is classified in the class
that collects the most votes.
There is a refinement of the plurality vote algorithm for the case where class probability
distributions are predicted by the base-level classifiers [8]. Let p_C(x) be the class probability
distribution predicted by the base-level classifier C on example x. The probability
distribution vectors returned by the base-level classifiers can be summed to obtain the class
probability distribution of the meta-level voting classifier:
p_CML(x) is proportional to the sum of p_C(x) over all base-level classifiers C in C.
The predicted class for x is the class c_j with the highest class probability p_CML(c_j|x).
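A direct implementation of this refined voting scheme is straightforward. The Python sketch
below sums the predicted class distributions and returns the class with the largest combined
score; dictionaries keyed by class value are an implementation choice made here for
illustration, not something prescribed by the text.

def probability_vote(distributions):
    # Combine base-level predictions by summing class probability distributions
    # (each a dict class -> probability) and picking the class with the highest score.
    combined = {}
    for dist in distributions:
        for cls, p in dist.items():
            combined[cls] = combined.get(cls, 0.0) + p
    total = sum(combined.values())
    combined = {cls: p / total for cls, p in combined.items()}  # normalize
    return max(combined, key=combined.get), combined

if __name__ == "__main__":
    preds = [{"0": 0.875, "1": 0.125},
             {"0": 0.400, "1": 0.600},
             {"0": 0.700, "1": 0.300}]
    winner, dist = probability_vote(preds)
    print(winner, dist)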
2.2 Stacking
In contrast to voting, where CML is static, stacking induces a combiner method CML
from the meta-level training set, based on L, in addition to the base-level classiers. CML
is induced by using a learning algorithm at the meta-level: the meta-level examples are
constructed from the predictions of the base-level classiers on the examples in L and the
correct classications of the latter. As the combiner is intended to combine the predictions
of the base-level classiers (induced on the entire training set L) for unseen examples,
special care has to be taken when constructing the meta-level data set. To this end, the
cross-validation procedure presented in Table 1 is applied.
Table 1: The algorithm for building the meta-level data set used to induce the combiner
CML in the stacking framework.
function build_combiner(L, {A_1, ..., A_N}, AML, m)
  {L_1, ..., L_m} := stratified_partition(L, m)
  for i := 1 to m do
    for j := 1 to N do
      Let C_j,i be the classifier obtained by applying A_j to L \ L_i
      Let CV_j,i be class values predicted by C_j,i on examples in L_i
      Let CD_j,i be class distributions predicted by C_j,i on examples in L_i
    endfor
    LML_i := meta_level_attributes(L_i, {CV_j,i, CD_j,i : j = 1, ..., N})
  endfor
  LML := union of LML_1, ..., LML_m
  Apply AML to LML in order to induce the combiner CML
  return CML
endfunction
First, the training set L is partitioned into m disjoint sets
equal size. The partitioning is stratied in the sense that each set L i roughly preserves the
class probability distribution in L. In order to obtain the base-level predictions on unseen
examples, the learning algorithm A j is used to train base-level classier C j;i on the training
set L n L i . The trained classier C j;i is then used to obtain the predictions for examples
in L i . The predictions of the base-level classiers include the predicted class values CV j;i
as well as the class probability distributions CD j;i . These are then used to calculate the
meta-level attributes for the examples in L i .
The meta-level attributes are calculated for all N learning algorithms and joined together
into set of meta-level examples. This set is one of the m parts of the meta-level
data set LML . Repeating this procedure m times (once for each set L i ), we obtain the
whole meta-level data set. Finally, the learning algorithm AML is applied to it in order to
induce the combiner CML .
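For readers who prefer code to pseudo-code, the sketch below mirrors this cross-validation
construction of the meta-level data set using scikit-learn estimators (objects with fit and
predict_proba). The estimator interface and the make_meta_attributes placeholder are
assumptions made for illustration; only the overall procedure follows Table 1.

import numpy as np
from sklearn.model_selection import StratifiedKFold

def build_meta_dataset(X, y, base_learners, make_meta_attributes, m=10):
    # For each fold, train every base learner on the other folds and record
    # its predicted class distributions on the held-out fold.
    meta_X, meta_y = [], []
    skf = StratifiedKFold(n_splits=m, shuffle=True, random_state=0)
    for train_idx, test_idx in skf.split(X, y):
        fold_preds = []
        for make_learner in base_learners:
            clf = make_learner()
            clf.fit(X[train_idx], y[train_idx])
            fold_preds.append(clf.predict_proba(X[test_idx]))
        for row, true_label in enumerate(y[test_idx]):
            dists = [p[row] for p in fold_preds]
            meta_X.append(make_meta_attributes(dists))
            meta_y.append(true_label)
    return np.array(meta_X), np.array(meta_y)

if __name__ == "__main__":
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB

    data = load_iris()
    attrs = lambda dists: [d.max() for d in dists]   # e.g. maxprob per classifier
    MX, My = build_meta_dataset(data.data, data.target,
                                [DecisionTreeClassifier, GaussianNB], attrs, m=5)
    print(MX.shape, My.shape)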
The framework for combining multiple classiers used in this paper is based on the
combiner methodology described in [6] and the stacking framework of [22]. In these two ap-
proaches, only the class values predicted by the base-level classiers are used as meta-level
attributes. Therefore, the meta level attributes procedure used in these frameworks
is trivial: it returns only the class values predicted by the base-level classiers. In our
approach, we use the class probability distributions predicted by the base-level classiers
in addition to the predicted class values for calculating the set of meta-level attributes.
The meta-level attributes used in our study are discussed in detail in Section 3.
3 Meta Decision Trees
In this section, we first introduce meta decision trees (MDTs). We then discuss the possible
sets of meta-level attributes used to induce MDTs. Finally, we present an algorithm for
inducing MDTs, named MLC4.5.
3.1 What are Meta Decision Trees
The structure of a meta decision tree is identical to the structure of an ordinary decision
tree. A decision (inner) node specifies a test to be carried out on a single attribute value
and each outcome of the test has its own branch leading to the appropriate subtree. In a
leaf node, an MDT predicts which classifier is to be used for classification of an example,
instead of predicting the class value of the example directly (as an ODT would do).
The dierence between ordinary and meta decision trees is illustrated with the example
presented in Tables 2 and 3. First, the predictions of the base-level classiers (Table 2a)
are obtained on the given data set. These include predicted class probability distributions
as well as class values themselves. In the meta-level data set M (Table 2b), the meta-level
attributes C 1 and C 2 are the class value predictions of two base-level classiers C 1
and C 2 for a given example. The two additional meta-level attributes Conf 1 and Conf 2
measure the condence of the predictions of C 1 and C 2 for a given example. The highest
class probability, predicted by a base-level classier, is used as a measure of its prediction
condence.
Table
2: Building the meta-level data set. a) Predictions of the base-level classiers. b)
The meta-level data set M .
a) Predictions of the base-level classiers
Base-level attributes (x) p C1 (0jx) p C1 (1jx) pred. p C2 (0jx) p C2 (1jx) pred.
... 0.875 0.125 0 0.875 0.125 0
... ... ... ... ... ... ...
b) Meta-level data set M
The meta decision tree induced using the meta-level data set M is given in Table 3a).
The MDT is interpreted as follows: if the confidence Conf1 of the base-level classifier C1 is
high, then C1 should be used for classifying the example, otherwise the base-level classifier
C2 should be used. The ordinary decision tree induced using the same meta-level data
set M (given in Table 3b) is much less comprehensible, despite the fact that it reflects
the same rule for choosing among the base-level predictions. Note that both the MDT
and the ODT need the predictions of the base-level classiers in order to make their own
predictions.
Table
3: The dierence between ordinary and meta decision trees. a) The meta decision
tree induced from the meta-level data set M (by MLC4.5). b) The ODT induced from the
same meta-level data set M (by C4.5). c) The MDT written as a logic program.
a) The MDT induced from M
Conf1 <= 0.625: C2
Conf1 > 0.625: C1
b) The ODT induced from M
C1 = 0:
|   Conf1 > 0.625: 0
|   Conf1 <= 0.625:
|   |   C2 = 0: 0
|   |   C2 = 1: 1
C1 = 1:
|   Conf1 > 0.625: 1
|   Conf1 <= 0.625:
|   |   C2 = 0: 0
|   |   C2 = 1: 1
c) The MDT written as a logic program
combine(Conf1, C1, Conf2, C2, C) :- Conf1 =< 0.625, C = C2.
combine(Conf1, C1, Conf2, C2, C) :- Conf1 > 0.625, C = C1.
The comprehensibility of the MDT from Table 3a) is entirely due to the extended
expressiveness of the MDT leaves. Both the MDT and the ODT in Table 3a) and b) are
induced from the propositional data set M . While the ODT induced from M is purely
propositional, the MDT is not. A (rst order) logic program equivalent to the MDT
is presented in Table 3c). The predicate combine(Conf 1 , C 1 , Conf 2 , C 2 , C) is used to
combine the predictions of the base-level classiers C 1 and C 2 into class C according to
the values of the attributes (variables) Conf 1 and Conf 2 . Each clause of the program
corresponds to one leaf node of the MDT and includes a non-propositional class value
assignment (C = C2 in the first and C = C1 in the second clause). In the propositional
framework, the only possible assignments are C = 0 and C = 1, one for each class value.
There is another way of interpreting meta decision trees. A meta decision tree selects an
appropriate classier for a given example in the domain. Consider the subset of examples
falling in one leaf of the MDT. It identies a subset of the data where one of the base-level
classiers performs better than the others. Thus, the MDT identify subsets that are
relative areas of expertise of the base-level classiers. An area of expertise of a base-level
classier is relative in the sense that its predictive performance in that area is better as
compared to the performances of the other base-level classiers. This is dierent from an
area of expertise of an individual base-level classier [15], which is a subset of the data
where the predictions of a single base-level classier are correct.
Note that in the process of inducing meta decision trees two types of attributes are used.
Ordinary attributes are used in the decision (inner) nodes of the MDT (e.g., attributes
Conf 1 and Conf 2 in the example meta-level data set M ). The role of these attributes is
identical to the role of attributes used for inducing ordinary decision trees. Class attributes
(e.g., C 1 and C 2 in M ), on the other hand, are used in the leaf nodes only. Each base-level
classier has its class attribute: the values of this attribute are the predictions of the
base-level classier. Thus, the class attribute assigned to the leaf node of the MDT decides
which base-level classier will be used for prediction. When inducing ODTs for combining
classiers, the class attributes are used in the same way as ordinary attributes.
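To make the role of class attributes concrete, here is a small Python sketch of how an induced
MDT is applied: internal nodes test ordinary meta-level attributes, and the leaf only names
which base-level classifier's prediction to return. The dictionary-based tree representation is
an assumption made for illustration.

def classify_with_mdt(node, meta_attrs, base_predictions):
    # Walk an MDT given ordinary meta-level attribute values and the base-level
    # predictions; the leaf selects which prediction to return.
    while isinstance(node, dict):          # internal node: attribute, threshold, subtrees
        node = node["le"] if meta_attrs[node["attr"]] <= node["thr"] else node["gt"]
    return base_predictions[node]          # leaf: name of a base-level classifier

if __name__ == "__main__":
    # The MDT of Table 3a: use C2 when Conf1 <= 0.625, otherwise use C1.
    mdt = {"attr": "Conf1", "thr": 0.625, "le": "C2", "gt": "C1"}
    meta_attrs = {"Conf1": 0.5, "Conf2": 0.9}
    base_predictions = {"C1": "0", "C2": "1"}
    print(classify_with_mdt(mdt, meta_attrs, base_predictions))  # prediction of C2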
The partitioning of the data set into relative areas of expertise is based on the values of
the ordinary meta-level attributes used to induce MDTs. In existing studies about areas
of expertise of individual classiers [15], the original base-level attributes from the domain
at hand are used. We use a different set of ordinary attributes for inducing MDTs. These
are properties of the class probability distributions predicted by the base-level classifiers
and reflect the certainty and confidence of the predictions. However, the original base-level
attributes can also be used to induce MDTs. Details about each of the two sets of
meta-level attributes are given in the following subsection.
3.2 Meta-Level Attributes
As meta-level attributes, we calculate properties of the class probability distributions (CDP)
predicted by the base-level classifiers that reflect the certainty and confidence of
the predictions.
First, maxprob(x, C) is the highest class probability (i.e. the probability of the predicted
class) predicted by the base-level classifier C for example x:
maxprob(x, C) = max over c_i of p_C(c_i|x).
Next, entropy(x, C) is the entropy of the class probability distribution predicted by
the classifier C for example x:
entropy(x, C) = - sum over c_i of p_C(c_i|x) log p_C(c_i|x).
Finally, weight(x, C) is the fraction of the training examples used by the classifier C
to estimate the class distribution for example x. For decision trees, it is the weight of
the examples in the leaf node used to classify the example. For rules, it is the weight of
the examples covered by the rule(s) which have been used to classify the example. This
property has not been used for the nearest neighbor and naive Bayes classifiers, as it does
not apply to them in a straightforward fashion.
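A small Python sketch of these three properties, computed from a predicted class distribution
and, where available, the weight of the covered training examples; the function signature is
chosen here for illustration and is not part of the original implementation.

import math

def cdp_attributes(distribution, covered_weight=None, total_weight=None):
    # Return (maxprob, entropy, weight) for one base-level prediction.
    # distribution: list of predicted class probabilities for one example.
    maxprob = max(distribution)
    entropy = -sum(p * math.log(p, 2) for p in distribution if p > 0)
    weight = None
    if covered_weight is not None and total_weight:
        weight = covered_weight / total_weight   # fraction of training examples used
    return maxprob, entropy, weight

if __name__ == "__main__":
    print(cdp_attributes([0.875, 0.125], covered_weight=8, total_weight=2000))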
The entropy and the maximum probability of a probability distribution reflect the certainty
of the classifier in the predicted class value. If the probability distribution returned
is highly spread, the maximum probability will be low and the entropy will be high, indicating
that the classier is not very certain in its prediction of the class value. On the other
hand, if the probability distribution returned is highly focused, the maximum probability is
high and the entropy low, thus indicating that the classier is certain in the predicted class
value. The weight quanties how reliable is the predicted class probability distribution.
Intuitively, the weight corresponds to the number of training examples used to estimate
the probability distribution: the higher the weight, the more reliable the estimate.
An example MDT, induced on the image domain from the UCI repository [3], is given in
Table
4. The leaf denoted by an asterisk (*) species that the C4.5 classier is to be used
to classify an example, if (1) the maximum probability in the class probability distribution
predicted by k-NN is smaller than 0.77; (2) the fraction of the examples in the leaf of the
tree used for prediction is larger than 0.4% of all the examples in the training set;
and (3) the entropy of the class distribution predicted by C4.5 is less then 0.14. In sum, if
the k-NN classier is not very condent in its prediction (1) and the C4.5 classier is very
Table
4: A meta decision tree induced on the image domain using class distribution properties
as ordinary attributes.
knn maxprob <= 0.77147:
| c45 weight <= 0.00385: KNN
| c45 weight > 0.00385:
| | c45 entropy <= 0.14144: C4.5 (*)
| | c45 entropy > 0.14144: LTREE
condent in its prediction (3 and 2), the leaf recommends using the C4.5 prediction; this
is consistent with common-sense knowledge in the domain of classier combination.
Another set of ordinary attributes used for inducing meta decision trees is the set of
original domain (base-level) attributes (BLA). In this case, the relative areas of expertise
of the base-level classiers are described in terms of the original domain attributes as in
the example MDT in Table 5.
Table
5: A meta decision tree induced on the image domain using base-level attributes as
ordinary attributes.
short-line-density-5 <= 0:
| short-line-density-2 <= 0: KNN
| short-line-density-2 > 0: LTREE
short-line-density-5 > 0:
| short-line-density-5 <= 0.111111: LTREE
| short-line-density-5 > 0.111111: C45 (*)
The leaf denoted by an asterisk (*) in Table 5 species that C4.5 should be used to
classify examples with short-line-density-5 values larger than 0.11. MDTs based on
the base-level ordinary attributes can provide new insight into the applicability of the base-level
classiers to the domain of use. However, only a human expert from the domain of
use can interpret an MDT induced using these attributes. It cannot be interpreted directly
from the point of view of classier combination.
Note here another important property of MDTs induced using the CDP set of meta-level
attributes. They are domain independent in the sense that the same language for
expressing meta decision trees is used in all domains once we x the set of base-level
classiers to be used. This means that a MDT induced on one domain can be used in
any other domain for combining the same set of base-level classiers (although it may not
perform very well). In part, this is due to the fact that the CDP set of meta-level attributes
is domain independent. It depends only on the set of base-level classiers C. However, an
ODT built from the same set of meta-level attributes is still domain dependent for two
reasons. First, it uses tests on the class values predicted by the base-level classiers (e.g.,
the tests in the root node of the ODT from Table 3b). Second, an ODT
predicts the class value itself, which is clearly domain dependent.
In sum, there are three reasons for the domain independence of MDTs: (1) the CDP
set of meta-level attributes; (2) not using class attributes in the decision (inner) nodes and
(3) predicting the base-level classier to be used instead of predicting the class value itself.
3.3 MLC4.5 - a Modification of C4.5 for Learning MDTs
In this subsection, we present MLC4.5 (1), an algorithm for learning MDTs based on Quinlan's
C4.5 [17] system for inducing ordinary decision trees. MLC4.5 takes as input a meta-level data set as
generated by the algorithm in Table 1. Note that this data set consists of ordinary and
class attributes. There are four differences between MLC4.5 and C4.5:
1. Only ordinary attributes are used in internal nodes;
2. Assignments of the form class = C_i(x) (where C_i is a class attribute) are made by MLC4.5
in leaf nodes, as opposed to assignments of the form class = c_i (where c_i is a class value);
3. The goodness-of-split for internal nodes is calculated differently (as described below);
4. MLC4.5 does not post-prune the induced MDTs.
The rest of the MLC4.5 algorithm is identical to the original C4.5 algorithm. Below we
describe C4.5's and MLC4.5's measures for selecting attributes in internal nodes.
(1) A patch that can be used to transform the source code of C4.5 into MLC4.5 is available at
http://ai.ijs.si/bernard/mdts/
C4.5 is a greedy divide and conquer algorithm for building classification trees [17]. At
each step, the best split according to the gain (or gain ratio) criterion is chosen from the set
of all possible splits for all attributes. According to this criterion, the split is chosen that
maximizes the decrease of the impurity of the subsets obtained after the split as compared
to the impurity of the current subset of examples. The impurity criterion is based on the
entropy of the class probability distribution of the examples in the current subset S of
training examples:
info(S) = - sum over c_i of relfreq(c_i, S) log relfreq(c_i, S),
where relfreq(c_i, S) denotes the relative frequency of examples in S that belong to class c_i. The
gain criterion selects the split that maximizes the decrement of the info measure.
In MLC4.5, we are interested in the accuracies of each of the base-level classifiers C
from C on the examples in S, i.e., the proportion of examples in S that have a class equal
to the class attribute C. The newly introduced measure, used in MLC4.5, is defined as:
info_ML(S) = 1 - max over C in C of accuracy(C, S),
where accuracy(C, S) denotes the relative frequency of examples in S that are correctly
classified by the base-level classifier C. The vector of accuracies does not have probability
distribution properties (its elements do not sum to 1), so the entropy cannot be calculated.
This is the reason for replacing the entropy based measure with an accuracy based one.
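The accuracy-based split measure can be mirrored in a few lines of Python. The sketch below
computes info_ML for a subset of meta-level examples and scores a candidate split as the
decrease of this measure; the weighted-average form of the gain is an assumption made here
for illustration, and the data layout is not taken from MLC4.5 itself.

def info_ml(examples):
    # examples: list of (true_class, {classifier_name: predicted_class}).
    # info_ML(S) = 1 - accuracy of the best base-level classifier on S.
    if not examples:
        return 0.0
    names = examples[0][1].keys()
    best = max(sum(preds[n] == true for true, preds in examples) / len(examples)
               for n in names)
    return 1.0 - best

def split_gain(examples, condition):
    # Decrease of info_ML obtained by splitting the examples on a boolean condition.
    left = [e for e in examples if condition(e)]
    right = [e for e in examples if not condition(e)]
    weighted = (len(left) * info_ml(left) + len(right) * info_ml(right)) / len(examples)
    return info_ml(examples) - weighted

if __name__ == "__main__":
    data = [("1", {"C1": "0", "C2": "1"}), ("0", {"C1": "0", "C2": "1"}),
            ("1", {"C1": "1", "C2": "1"}), ("0", {"C1": "0", "C2": "0"})]
    print(info_ml(data))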
As in C4.5, the splitting process stops when at least one of the following two criteria
is satisfied: (1) the accuracy of one of the classifiers on the current subset is 100% (leading
to info_ML(S) = 0) or (2) a user defined minimal number of examples is reached in the
current subset. In each case, a leaf node is constructed. The classifier with the
maximal accuracy is predicted by the leaf node of the MDT.
In order to compare MDTs with ODTs in a principled fashion, we also developed a
intermediate version of C4.5 (called AC4.5) that induces ODTs using the accuracy based
info A measure:
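The following Python sketch illustrates how the entropy-based and accuracy-based impurity measures differ on the same subset of meta-level examples. It assumes the info and info_ML definitions given above; the data structures (a list of true classes and a dictionary of per-classifier correctness flags) are illustrative, not the actual MLC4.5 representation.

    import math
    from collections import Counter

    def info(classes):
        # Entropy-based impurity used by C4.5 on a list of class labels.
        n = len(classes)
        return -sum((c / n) * math.log2(c / n) for c in Counter(classes).values())

    def info_ml(correct):
        # Accuracy-based impurity used by MLC4.5. 'correct' maps each
        # base-level classifier name to a list of 0/1 flags
        # (1 = this classifier predicts the example correctly).
        best_accuracy = max(sum(flags) / len(flags) for flags in correct.values())
        return 1.0 - best_accuracy

    # Toy subset S with four meta-level examples.
    classes = ["a", "a", "b", "b"]
    correct = {"C4.5": [1, 1, 0, 1], "k-NN": [0, 1, 1, 1]}
    print(info(classes))     # 1.0  (classes evenly split)
    print(info_ml(correct))  # 0.25 (best classifier is 75% accurate on S)

A split is then chosen that maximizes the decrease of info (for C4.5/AC4.5) or of info_ML (for MLC4.5) in the resulting subsets.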
4 Experimental Methodology and Results
The main goal of the experiments we performed was to evaluate the performance of meta
decision trees, especially in comparison to other methods for combining classifiers, such as
voting and stacking with ordinary decision trees, as well as other methods for constructing
ensembles of classifiers, such as boosting and bagging. We also investigate the use of
different meta-level attributes in MDTs. We performed experiments on a collection of
twenty-one data sets from the UCI Repository of Machine Learning Databases and Domain
Theories [3]. These data sets have been widely used in other comparative studies.
In the remainder of this section, we first describe how classification error rates were
estimated and compared. We then list all the base-level and meta-level learning algorithms
used in this study. Finally, we describe a measure of the diversity of the base-level classifiers
that we use in comparing the performance of meta-level learning algorithms.
4.1 Estimating and Comparing Classification Error Rates
In all the experiments presented here, classification errors are estimated using 10-fold
stratified cross validation. Cross validation is repeated ten times using a different random
reordering of the examples in the data set. The same set of re-orderings has been used
for all experiments.
For pair-wise comparison of classification algorithms, we calculated the relative improvement
and the paired t-test, as described below. In order to evaluate the accuracy
improvement achieved in a given domain by using classifier C_1 as compared to using classifier
C_2, we calculate the relative improvement: 1 - error(C_1)/error(C_2). In the analysis
presented in Section 5, we compare the performance of meta decision trees induced
using CDP as ordinary meta-level attributes to other approaches: C_1 will thus refer to
combiners induced by MLC4.5 using CDP. The average relative improvement across all
domains is calculated using the geometric mean of error reduction in individual domains:
1 - (prod_d error_d(C_1)/error_d(C_2))^(1/D), where D is the number of domains.
The classification errors of C_1 and C_2 averaged over the ten runs of 10-fold cross validation
are compared for each data set (error(C_1) and error(C_2) refer to these averages).
The statistical significance of the difference in performance is tested using the paired t-test
(exactly the same folds are used for C_1 and C_2) with a significance level of 95%: +/- to
the right of a figure in the tables with results means that the classifier C_1 is significantly
better/worse than C_2.
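A minimal Python sketch of this evaluation protocol is given below. It assumes per-domain and per-fold error estimates are already available, and it uses scipy's paired t-test as a reasonable stand-in for the significance test described above; the exact implementation used in the paper is not shown here.

    from scipy.stats import ttest_rel

    def relative_improvement(err_c1, err_c2):
        # 1 - error(C1)/error(C2) for a single domain.
        return 1.0 - err_c1 / err_c2

    def average_relative_improvement(errs_c1, errs_c2):
        # Geometric-mean-based average of error reduction across domains.
        ratio = 1.0
        for e1, e2 in zip(errs_c1, errs_c2):
            ratio *= e1 / e2
        return 1.0 - ratio ** (1.0 / len(errs_c1))

    def significantly_different(fold_errs_c1, fold_errs_c2, alpha=0.05):
        # Paired t-test over per-fold errors (same folds for both classifiers).
        _, p_value = ttest_rel(fold_errs_c1, fold_errs_c2)
        return p_value < alpha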
Another aspect of tree induction performance is the simplicity of the induced decision
trees. In the experiments presented here, we used the size of the decision trees, measured
as the number of (internal and leaf) nodes, as a measure of simplicity: the smaller the tree,
the simpler it is.
4.2 Base-Level Algorithms
Five learning algorithms have been used in the base-level experiments: two tree-learning
algorithms, C4.5 [17] and LTree [11], the rule-learning algorithm CN2 [7], the k-nearest
neighbor (k-NN) algorithm [20] and a modification of the naive Bayes algorithm [12].
All algorithms have been used with their default parameter settings. The output of
each base-level classifier for each example in the test set consists of at least two components:
the predicted class and the class probability distribution. All the base-level algorithms
used in this study calculate the class probability distribution for classified examples, but
two of them (k-NN and naive Bayes) do not calculate the weight of the examples used
for classification (see Section 3). The code of the other three of them (C4.5, CN2 and
LTree) was adapted to output the class probability distribution as well as the weight of
the examples used for classification.
The classification errors of the base-level classifiers on the twenty-one data sets are presented
in Table 7 in Appendix A. The smallest overall classification error is achieved using
linear discriminant trees induced with LTree. However, on different data sets, different
base-level classifiers achieve the smallest classification error.
4.3 Meta-Level Algorithms
At the meta-level, we evaluate the performances of eleven different algorithms for constructing
ensembles of classifiers (listed below). Nine of these make use of exactly the same set of
five base-level classifiers induced by the five algorithms from the previous section. In brief,
two perform stacking with ODTs, using the algorithms C4.5 and AC4.5 (see previous section);
three perform stacking with MDTs using the algorithm MLC4.5 and three different
sets of meta-level attributes (CDP, BLA, CDP+BLA); two are voting schemes; Select-Best
chooses the best base-level classifier, and SCANN performs stacking with nearest neighbor
after analyzing dependencies among the base-level classifiers. In addition, boosting and
bagging of decision trees are considered, which create larger ensembles (200 trees).
C-C4.5 uses ordinary decision trees induced with C4.5 for combining base-level classifiers.
C-AC4.5 uses ODTs induced with AC4.5 for combining base-level classifiers.
MDT-CDP uses meta decision trees induced with MLC4.5 on a set of class distribution
properties (CDP) as meta-level attributes.
MDT-BLA uses MDTs induced with MLC4.5 on a set of base-level attributes (BLA) as
meta-level attributes.
MDT-CDP+BLA uses MDTs induced with MLC4.5 on a union of two alternative sets
of meta-level attributes (CDP and BLA).
P-VOTE is a simple plurality vote algorithm (see Section 2.1).
CD-VOTE is a refinement of the plurality vote algorithm for the case where class probability
distributions are predicted by the base-level classifiers (see Section 2.1); a small sketch of both voting schemes is given after this list.
Select-Best selects the base-level classifier that performs best on the training set (as
estimated by 10-fold stratified cross-validation). This is equivalent to building a
single-leaf MDT.
SCANN [14] performs Stacking with Correspondence Analysis and Nearest Neighbours.
Correspondence analysis is used to deal with the highly correlated predictions of
the base-level classifiers: SCANN transforms the original set of potentially highly
correlated meta-level attributes (i.e., predictions of the base-level classifiers) into a
new (smaller) set of uncorrelated meta-level attributes. A nearest neighbor classifier
is then used for classification with the new set of meta-level attributes.
Boosting of decision trees. Two hundred iterations were used for boosting. Decision trees
were induced using J48 2 (C4.5) with default parameter settings for pre- and post-pruning.
The WEKA [21] data mining suite implements the AdaBoost [9] boosting
method with re-weighting of the training examples.
Bagging of decision trees. Two hundred iterations (decision trees) were used for bagging,
using J48 with default settings.
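For concreteness, the two voting schemes can be sketched in Python as follows. This is a generic illustration of plurality voting and of voting weighted by predicted class probability distributions, assuming each base-level classifier supplies a predicted class and a class distribution; it is not the actual implementation referred to in Section 2.1.

    from collections import defaultdict

    def p_vote(predictions):
        # Plurality vote: 'predictions' is a list of predicted class labels.
        counts = defaultdict(int)
        for label in predictions:
            counts[label] += 1
        return max(counts, key=counts.get)

    def cd_vote(distributions):
        # Class-distribution vote: 'distributions' is a list of dicts mapping
        # class labels to predicted probabilities, one per base-level classifier.
        totals = defaultdict(float)
        for dist in distributions:
            for label, prob in dist.items():
                totals[label] += prob
        return max(totals, key=totals.get)

    # Three base-level classifiers, two classes: the confident minority wins under cd_vote.
    print(p_vote(["a", "b", "b"]))                                                # 'b'
    print(cd_vote([{"a": 0.9, "b": 0.1}, {"a": 0.4, "b": 0.6}, {"a": 0.45, "b": 0.55}]))  # 'a'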
A detailed report on the performance of the above methods can be found in Appendix A.
Their classification errors can be found in Table 9. The sizes of (ordinary and meta) decision
trees induced with different meta-level combining algorithms are given in Table 8. Finally,
a comparison of the classification errors of each method to those of MDT-CDP (in terms of
average relative accuracy improvement and number of significant wins and losses) is given
in Table 10. A summary of this detailed report is given in Table 6.
4.4 Diversity of Base-Level Classifiers
Empirical studies performed in [1, 2] show that the classification error of meta-level learning
methods, as well as the improvement of accuracy achieved using them, is highly correlated
to the degree of diversity of the predictions of the base-level classifiers. The measure of the
diversity of two classifiers used in these studies is error correlation. The smaller the error
correlation, the greater the diversity of the base-level classifiers.
2 The experiments with bagging and boosting have been performed using the WEKA data mining suite,
which includes J48, a Java re-implementation of C4.5. The differences between the J48 results and
the C4.5 results are negligible: an average of 0.01% with a maximum relative difference of 4%.
Error correlation is defined by [1, 2] as the probability that both classifiers make the
same error. This definition of error correlation is not "normalized": its maximum value
is the lower of the two classification errors. An alternative definition of error correlation,
proposed in [11], is used in this paper. Error correlation is defined as the conditional
probability that both classifiers make the same error, given that one of them makes an
error:
phi(C_i, C_j) = p( C_i(x) = C_j(x) and C_i(x) != c(x) | C_i(x) != c(x) or C_j(x) != c(x) ),
where C_i(x) and C_j(x) are predictions of classifiers C_i and C_j for a given example x and
c(x) is the true class of x. The error correlation for a set of multiple classifiers C is defined
as the average of the pairwise error correlations:
phi(C) = 2 / (|C| * (|C| - 1)) * sum_{i < j} phi(C_i, C_j).
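A short Python sketch of this diversity measure, under the definition given above, is the following; the data layout (parallel lists of predictions and true classes) is illustrative only.

    from itertools import combinations

    def error_correlation(preds_i, preds_j, truth):
        # Conditional probability that two classifiers make the same error,
        # given that at least one of them makes an error.
        same_error = at_least_one_error = 0
        for yi, yj, y in zip(preds_i, preds_j, truth):
            if yi != y or yj != y:
                at_least_one_error += 1
                if yi == yj and yi != y:
                    same_error += 1
        return same_error / at_least_one_error if at_least_one_error else 0.0

    def average_error_correlation(all_preds, truth):
        # Average pairwise error correlation over a set of classifiers.
        pairs = list(combinations(all_preds, 2))
        return sum(error_correlation(pi, pj, truth) for pi, pj in pairs) / len(pairs)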
[Figure 2 shows two scatter plots, with one point per data set: the relative improvement over LTree and over k-NN (base-level) plotted against the degree of error correlation between the base-level classifiers; significant and insignificant differences are marked, and a linear regression line is fitted.]
Figure 2: Relative accuracy improvement achieved with MDTs when compared to two
base-level classifiers (LTree and k-NN) in dependence of the error correlation among the
five base-level classifiers.
The graphs in Figure 2 confirm these results for meta decision trees. The relative accuracy
improvement achieved with MDTs over LTree and k-NN (the two base-level classifiers
with highest overall accuracies) decreases as the error correlation of the base-level classifiers
increases. The linear regression line interpolated through the points confirms this
trend, which shows that the performance improvement achieved with MDTs is correlated to
the diversity of errors of the base-level classifiers.
Table 6: Summary performance of the meta-level learning algorithms as compared to MDTs
induced using class distribution properties (MDT-CDP) as meta-level attributes: average
relative accuracy improvement (in %), number of significant wins/losses and average tree
size (where applicable). Details in Tables 10 and 8 in Appendix A.

Meta-level algorithm   Ave. rel. acc. imp. (in %)   Sig. wins/losses   Ave. tree size
MDT-BLA                14.47                        8/0                66.94
MDT-CDP+BLA            13.91                        8/0                73.73
Select-Best             7.81                        5/2                 1
SCANN                  13.95                        9/6                NA
Boosting                9.36                        13/6               NA
Bagging                22.26                        10/4               NA
5 Analysis of the Experimental Results
The results of the experiments are summarized in Table 6. In brief, the following main
conclusions can be drawn from these results:
1. The properties of the class probability distributions predicted by the base-level classifiers
(CDP) are better meta-level attributes for inducing MDTs than the base-level
attributes (BLA). Using BLA in addition to CDP worsens the performance.
2. Meta decision trees (MDTs) induced using CDP outperform ordinary decision trees
and voting for combining classifiers.
3. MDTs perform slightly better than the SCANN and Select-Best methods.
4. The performance improvement achieved with MDTs is correlated to the diversity of
errors of the base-level classifiers: the higher the diversity, the better the relative
performance as compared to other methods.
5. Using MDTs to combine classifiers induced with different learning algorithms outperforms
ensemble learning methods based on bagging and boosting of decision trees.
Below we look into these claims and the experimental results in some more detail.
5.1 MDTs with Different Sets of Meta-Level Attributes
We analyzed the dependence of MDT performance on the set of ordinary meta-level
attributes used to induce them. We used three sets of attributes: the properties of the
class distributions predicted by the base-level classifiers (CDP), the original (base-level)
domain attributes (BLA) and their union (CDP+BLA).
The average relative improvement achieved by MDTs induced using CDP over MDTs
induced using BLA and CDP+BLA is about 14%, with CDP being significantly better in 8
domains (see Table 6 and Table 10 in Appendix A). MDTs induced using CDP are also several
times smaller on average than MDTs induced using BLA and CDP+BLA (see Table 8).
These results show that the CDP set of meta-level attributes is better than the BLA set.
Furthermore, using BLA in addition to CDP decreases performance. In the remainder of
this section, we only consider MDTs induced using CDP.
An analysis of the results for ordinary decision trees induced using CDP, BLA and
CDP+BLA (only the results for CDP are actually presented in the paper) shows that the
claim also holds for ODTs. This result is especially important because it highlights the
importance of using the class probability distributions predicted by the base-level classifiers
for identifying their (relative) areas of expertise. So far, base-level attributes from the
original domain have typically been used to identify the areas of expertise of base-level
classifiers.
5.2 Meta Decision Trees vs. Ordinary Decision Trees
To compare combining classifiers with MDTs and ODTs, we first look at the relative
improvement of using MLC4.5 over C-C4.5 (see Table 6, column C-C4.5 of Table 10 in
Appendix A and the left hand side of Figure 3).
MLC4.5 performs significantly better in 15 and significantly worse in 2 data sets. There
is a 4% overall decrease of accuracy (this is a geometric mean), but this is entirely due to
the result in the tic-tac-toe domain, where all combining methods perform very well. If we
exclude the tic-tac-toe domain, a 7% overall relative increase is obtained. We can thus say
that MLC4.5 performs slightly better in terms of accuracy. However, the MDTs are much
smaller, the size reduction factor being over 16 (see Table 8), despite the fact that ODTs
induced with C4.5 are post-pruned and MDTs are not.
[Figure 3 shows two bar charts of the relative improvement (in %) of MLC4.5 per data set: over C-C4.5 on the left hand side and over C-AC4.5 on the right hand side, with significant and insignificant differences marked.]
Figure 3: Relative improvement of the accuracy when using MDTs induced with MLC4.5
when compared to the accuracy of ODTs induced with AC4.5 and C4.5.
To get a clearer picture of the performance differences due to the extended expressive
power of MDT leaves (as compared to ODT leaves), we compare MLC4.5 and C-AC4.5 (see
Table 6, column C-AC4.5 in Table 10 and the right hand side of Figure 3). Both MLC4.5 and
AC4.5 use the same learning algorithm. The only difference between them is the type of
trees they induce: MLC4.5 induces meta decision trees and AC4.5 induces ordinary ones.
The comparison clearly shows that MDTs outperform ODTs for combining classifiers. The
overall relative accuracy improvement is about 8% and MLC4.5 is significantly better than
C-AC4.5 in 12 out of 21 data sets and is significantly worse in only one (ionosphere).
Consider also the graph on the right hand side of Figure 3. MDTs perform better than
ODTs in all but two domains, with the performance gains being much larger than the
losses.
Furthermore, the MDTs are, on average, more than 34 times smaller than the ODTs
induced with AC4.5 (see Table 8). The reduction of the tree size improves the comprehensibility
of meta decision trees. For example, we were able to interpret and comment on the
MDT in Table 4.
In sum, meta decision trees perform better than ordinary decision trees for combining
classifiers: MDTs are more accurate and much more concise. The comparison of MLC4.5
and AC4.5 shows that the performance improvement is due to the extended expressive
power of MDT leaves.
5.3 Meta Decision Trees vs. Voting
Combining classifiers with MDTs is significantly better than plurality vote in 10 domains
and significantly worse in 6. However, the significant improvements are much higher than
the significant drops of accuracy, giving an overall accuracy improvement of 22%. Since
CD-VOTE performs slightly better than plurality vote, a smaller overall improvement of
20% is achieved with MDTs over CD-VOTE. MLC4.5 is significantly better in 10 data sets and significantly
worse in 5. These results show that MDTs outperform the voting schemes for combining
classifiers (see Table 6 and Table 10 in Appendix A).
[Figure 4 shows two scatter plots, one point per data set: the relative improvement of MDTs over each of the two voting schemes plotted against the degree of error correlation between the base-level classifiers, with significant and insignificant differences marked and a linear regression line fitted.]
Figure 4: Relative improvement of the accuracy of MDTs over two voting schemes in
dependence of the degree of error correlation between the base-level classifiers.
We explored the dependence of the accuracy improvement by MDTs over voting on the
diversity of the base-level classifiers. The graphs in Figure 4 show that MDTs can make
better use of the diversity of errors of the base-level classifiers than the voting schemes.
Namely, for the domains with low error correlation (and therefore higher diversity) of the
base-level classifiers, the relative improvement of MDTs over the voting schemes is higher.
However, the slope of the linear regression line is smaller than the one for the improvement
over the base-level classifiers. Still, the trend clearly shows that MDTs make better use of
the error diversity of the base-level predictions than voting.
5.4 Meta Decision Trees vs. Select-Best
Combining classifiers with MDTs is significantly better than Select-Best in 5 domains and
significantly worse in 2, giving an overall accuracy improvement of almost 8% (see Table 6
and Table 10 in Appendix A).
[Figure 5 shows a scatter plot, one point per data set: the relative improvement of MDTs over Select-Best plotted against the degree of error correlation between the base-level classifiers, with significant and insignificant differences marked and a linear regression line fitted.]
Figure 5: Relative improvement of the accuracy of MDTs over the Select-Best method in
dependence of the degree of error correlation between the base-level classifiers.
The results of the dependence analysis of the accuracy improvement by MDTs over Select-Best
on the diversity of the base-level classifiers are given in Figure 5. MDTs can make
slightly better use of the diversity of errors of the base-level classifiers than Select-Best.
The slope of the linear regression line is smaller than the one for the improvement over the
voting methods.
5.5 Meta Decision Trees vs. SCANN
Combining classifiers with MDTs is significantly better than SCANN in 9 domains and
significantly worse in 6 (see Table 6 and Table 10 in Appendix A). However, the significant
improvements are much higher than the significant drops of accuracy, giving an overall
accuracy improvement of almost 14%. These results show that MDTs outperform the
SCANN method for combining classifiers.
[Figure 6 shows a scatter plot, one point per data set: the relative improvement of MDTs over SCANN plotted against the degree of error correlation between the base-level classifiers, with significant and insignificant differences marked and a linear regression line fitted.]
Figure 6: Relative improvement of the accuracy of MDTs over the SCANN method in dependence
of the degree of error correlation between the base-level classifiers.
We explored the dependence of the accuracy improvement by MDTs over SCANN on the
diversity of the base-level classifiers. The graph in Figure 6 shows that MDTs can make
slightly better use of the diversity of errors of the base-level classifiers than SCANN. The
slope of the linear regression line is smaller than the one for the improvement over the
voting methods.
5.6 Meta Decision Trees vs. Boosting and Bagging
Finally, we compare the performance of MDTs with the performance of two state of the
art ensemble learning methods: bagging and boosting of decision trees.
MLC4.5 performs significantly better than boosting in 13 and significantly better than
bagging in 10 out of the 21 data sets. MLC4.5 performed significantly worse than boosting
in 6 domains and significantly worse than bagging in 4 domains only. The overall relative
improvement of performance is 9% over boosting and 22% over bagging (see Table 6 and
Table 10 in Appendix A).
It is clear that MDTs outperform bagging and boosting of decision trees. This comparison
is not fair in the sense that MDTs use base-level classifiers induced by decision trees
and four other learning methods, while boosting and bagging use only decision trees as a
base-level learning algorithm. However, it does show that our approach to constructing
ensembles of classifiers is competitive with existing state of the art approaches.
6 Related Work
An overview of methods for constructing ensembles of classifiers can be found in [8]. Several
meta-level learning studies are closely related to our work.
Let us first mention the study of using SCANN [14] for combining base-level classifiers.
As mentioned above, SCANN performs stacking by using correspondence analysis of the
classifications of the base-level classifiers. The author shows that SCANN outperforms the
plurality vote scheme, also in the case when the base-level classifiers are highly correlated.
SCANN does not use any class probability distribution properties of the predictions by the
base-level classifiers (although the possibility of extending the method in that direction
is mentioned). Therefore, no comparison with the CD voting scheme is included in that
study. Our study shows that MDTs using CDP attributes are slightly better than SCANN
in terms of performance. Also, the concept induced by SCANN at the meta-level is not
directly interpretable and can not be used for identifying the relative areas of expertise of
the base-level classifiers.
In cascading, base-level classifiers are induced using the examples in the current node of
the decision tree (or at each step of the divide and conquer algorithm for building decision
trees). New attributes, based on the class probability distributions predicted by the base-level
classifiers, are generated and added to the set of the original attributes in the domain.
The base-level classifiers used in this study are naive Bayes and Linear Discriminant. The
integration of these two base-level classifiers within decision trees is much tighter than in
our combining/stacking framework. The similarity to our approach is that class probability
distributions are used.
A version of stacked generalization, using the class probability distributions predicted
by the base-level classifiers, is implemented in the data mining suite WEKA [21]. However,
class probability distributions are used there directly and not through their properties, such
as maximal probability and entropy. This makes them domain dependent in the sense
discussed in Section 3. The indirect use of class probability distributions through their
properties makes MDTs domain independent.
Ordinary decision trees have already been used for combining multiple classifiers in
[6]. However, the emphasis of that study is more on partitioning techniques for massive
data sets and combining multiple classifiers trained on different subsets of massive data
sets. Our study focuses on combining multiple classifiers generated on the same data set.
Therefore, the obtained results are not directly comparable to theirs.
Combining classifiers by identifying their areas of expertise has already been explored
in [15] and [13]. In those studies, a description of the area of expertise, in the form of an
ordinary decision tree called an arbiter, is induced for each individual base-level classifier.
For a single data set, as many arbiters are needed as there are base-level classifiers. When
combining multiple classifiers, a voting scheme is used to combine the decisions of the
arbiters. However, a single MDT, identifying the relative areas of expertise of all base-level
classifiers at once, is much more comprehensible. Another improvement presented in our
study is the possibility to use the certainty and confidence of the base-level predictions for
identifying the classifiers' areas of expertise, and not only the original (base-level) attributes
of the data set.
The present study is also related to our previous work on the topic of meta-level learning
[18]. There we introduced an inductive logic programming [16] (ILP) framework for learning
the relation between data set characteristics and the performance of different (base-level)
classifiers. A more expressive (non-propositional) formulation is used to represent
the meta-level examples (data set characteristics), e.g., properties of individual attributes.
The induced meta-level concepts are also non-propositional. While MDT leaves are more
expressive than ODT leaves, the language of MDTs is still much less expressive than the
language of logic programs used in ILP.
7 Conclusions and Further Work
We have presented a new technique for combining classifiers based on meta decision trees
(MDTs). MDTs make the language of decision trees more suitable for combining classifiers:
they select the most appropriate base-level classifier for a given example. Each leaf of the
MDT represents a part of the data set which is a relative area of expertise of the base-level
classifier in that leaf. The relative areas of expertise can be identified on the basis of the
values of the original (base-level) attributes (BLA) of the data set, but also on the basis
of the properties of the class probability distributions (CDP) predicted by the base-level
classifiers. The latter reflect the certainty and confidence of the class value predictions by
the individual base-level classifiers.
The extensive empirical evaluation shows that MDTs induced from CDPs perform much
better and are much more concise than MDTs induced from BLAs. Due to the extended
expressiveness of MDT leaves, they also outperform ordinary decision trees (ODTs), both
in terms of accuracy and conciseness. MDTs are usually so small that they can easily be
interpreted: we regard this as a step towards a comprehensible model of combining classifiers
by explicitly identifying their relative areas of expertise. In contrast, most existing
work uses non-symbolic learning methods (e.g., neural networks) to combine classifiers [14].
MDTs can use the diversity of the base-level classifiers better than voting: they outperform
voting schemes in terms of accuracy, especially in domains with high diversity of
the errors made by the base-level classifiers. MDTs also perform slightly better than the
SCANN method for combining classifiers and the Select-Best method, which simply takes
the best single classifier. Finally, MDTs induced from CDPs perform better than boosting
and bagging of decision trees and are thus competitive with state of the art methods for
learning ensembles.
MDTs built by using CDPs are domain independent and are, in principle, transferable
across domains once we fix the set of base-level learning algorithms. This is meant in the sense that
an MDT built on one data set can be used on any other data set (since it uses the same set
of attributes). There are several potential benefits of the domain independence of MDTs.
First, machine learning experts can use MDTs for domain independent analysis of the relative
areas of expertise of different base-level classifiers, without having knowledge about the
particular domain of use. Furthermore, an MDT induced on one data set can be used for
combining classifiers induced by the same set of base-level learning algorithms on other
data sets. Finally, MDTs can be induced using data sets that contain examples originating
from different domains.
Exploring the above options already gives us some topics for further work. Combining
data from different domains for learning MDTs is an especially interesting avenue for
further work that would bring together the present study with meta-level learning work on
selecting appropriate classifiers for a given domain [4]. In this case, attributes describing
individual data set properties can be added to the class distribution properties in the meta-level
learning data set. Preliminary investigations along these lines have already been made
[19].
There are several other obvious directions for further work. For ordinary decision trees,
it is already known that post-pruning gives better results than pre-pruning. Preliminary
experiments show that pre-pruning degrades the classification accuracy of MDTs. Thus,
one of the priorities for further work is the development of a post-pruning method for meta
decision trees and its implementation in MLC4.5.
An interesting aspect of our work is that we use class-distribution properties for meta-level
learning. Most of the work on combining classifiers only uses the predicted classes and
not the corresponding probability distributions. It would be interesting to use other learning
algorithms (neural networks, Bayesian classification and SCANN) to combine classifiers
based on the probability distributions returned by them. A comparison of combining classifiers
using class predictions only vs. class predictions along with class probability
distributions would also be worthwhile.
The consistency of meta decision trees with common sense classifier-combination
knowledge, as briefly discussed in Section 3, opens another question for further research.
The process of inducing meta-level classifiers can be biased to produce only meta-level
classifiers consistent with existing knowledge. This can be achieved using a strong language
bias within MLC4.5 or, probably more easily, within a framework of meta decision rules,
where rule templates could be used.
Acknowledgments
The work reported was supported in part by the Slovenian Ministry of Education, Science
and Sport and by the EU-funded project Data Mining and Decision Support for
Business Competitiveness: A European Virtual Enterprise (IST-1999-11495). We thank
João Gama for many insightful and inspirational discussions about combining multiple
classifiers. Many thanks to Marko Bohanec, Thomas Dietterich, Nada Lavrač and three
anonymous reviewers for their comments on earlier versions of the manuscript.
--R
On explaining degree of error reduction due to combining multiple decision trees.
reduction through learning multiple descriptions.
UCI repository of machine learning databases.
Analysis of Results.
Bagging Predictors.
On the Accuracy of Meta-learning for Scalable Data Mining
Rule induction with CN2: Some recent improvements.
Experiments with a New Boosting Algorithm.
Combining Classi
Discriminant trees.
A Linear-Bayes Classifier
Integrating multiple classi
Using Correspondence Analysis to Combine Classi
Exploiting Multiple Existing Models and Learning Algorithms.
Learning logical de
A study of distance-based machine learning algorithms
Data Mining: Practical Machine Learning Tools and Techniques with Java Implementations.
Stacked Generalization.
--TR
--CTR
Michael Gamon, Sentiment classification on customer feedback data: noisy data, large feature vectors, and the role of linguistic analysis, Proceedings of the 20th international conference on Computational Linguistics, p.841-es, August 23-27, 2004, Geneva, Switzerland
Sašo Džeroski , Bernard Ženko, Is Combining Classifiers with Stacking Better than Selecting the Best One?, Machine Learning, v.54 n.3, p.255-273, March 2004
Efstathios Stamatatos , Gerhard Widmer, Automatic identification of music performers with learning ensembles, Artificial Intelligence, v.165 n.1, p.37-56, June 2005
Christophe Giraud-Carrier , Ricardo Vilalta , Pavel Brazdil, Introduction to the Special Issue on Meta-Learning, Machine Learning, v.54 n.3, p.187-193, March 2004
João Gama, Functional Trees, Machine Learning, v.55 n.3, p.219-250, June 2004
Pavel B. Brazdil , Carlos Soares , Joaquim Pinto Da Costa, Ranking Learning Algorithms: Using IBL and Meta-Learning on Accuracy and Time Results, Machine Learning, v.50 n.3, p.251-277, March
Nicolás García-Pedrajas , Domingo Ortiz-Boyer, A cooperative constructive method for neural networks for pattern recognition, Pattern Recognition, v.40 n.1, p.80-98, January, 2007
S. B. Kotsiantis , I. D. Zaharakis , P. E. Pintelas, Machine learning: a review of classification and combining techniques, Artificial Intelligence Review, v.26 n.3, p.159-190, November 2006 | stacking;meta-level learning;decision trees;ensembles of classifiers;combining classifiers |
608175 | Polynomial Formal Verification of Multipliers. | Not long ago, completely automatical formal verification of multipliers was not feasible, even for small input word sizes. However, with Multiplicative Binary Moment Diagrams (*BMD), which is a new data structure for representing arithmetic functions over Boolean variables, methods were proposed by which verification of multipliers with input word sizes of up to 256 Bits is now feasible. Unfortunately, only experimental data has been provided for these verification methods until now.In this paper, we give a formal proof that logic verification with *BMDs is polynomially bounded in both, space and time, when applied to the class of Wallace-tree like multipliers. Using this knowledge online detection of design errors becomes feasible during a verification run. | Introduction
Verifying that an implementation of a combinational circuit
meets its specification is an important step in the design
process. Often this is done by applying a set of test-input
patterns to the circuit. With these patterns a simulation is
performed to ensure oneself in the correct behavior of the
circuit.
In the last few years new methods have been proposed
for a formal verification of circuits concerning the logic
function represented by the circuit, e.g. [1]. These methods
are based on Binary Decision Diagrams (BDDs). But
all these methods fail to verify some interesting combinational
circuits, such as multipliers, since all BDDs for the
multiplication function are exponential in size [5].
Recently, a new type of decision diagrams the Multiplicative
Binary Moment Diagram (*BMD) [2] has been
introduced. *BMDs are well suited for representing linear
arithmetic functions with Boolean domain. *BMDs have
been successfully used to verify the behavior of combinational
arithmetic circuits such as multipliers with input word
sizes of up to 256 bits [3].
Motivated by these results, several extensions of this
approach have been discussed [7], [6]. The verification
method of [3] is based on partitioning the circuit into components
with easy word-level specifications. First it is
shown, that the bit-level implementation of a component
Research supported by DFG grant Mo 645/2-1 and BE 1176/8-2
module implements correctly its word-level specification.
Then the composition of word-level specifications according
to the interconnection structure of the whole circuit is
derived. This composition is an algebraic expression for
which a *BMD is generated. Then the resulting *BMD
is compared with the one generated from the overall circuit
specification. A problem arising with this methodology
is the need of high-level specifications of component
modules. Taking this problem into account, Hamaguchi
et.al. have proposed another method for verifying arithmetic
circuits. They called their method verification by backward
construction [9]. This method does not need any high-level
information. In [9] only experimental data has been provided
to show the feasibility of their method for some specific
multiplier circuits.
In this paper, we give a formal proof that verification
by backward construction is polynomially bounded with respect
to the input word size. We do not consider a specific
multiplier circuit, but the class of Wallace-tree like multipli-
ers, e.g. [11], [10]. Additionally, we consider not only the
costs of intermediate results, but also the costs, that arise
during the synthesis operation on the *BMDs. This leads to
the possibility to interrupt the verification process in presence
of a design or implementation failure.
Furthermore we give classes of variable orders and we
give classes of depth-first and breadth-first orderings of the
circuit with that our tight bounds can be reached.
2. Multiplicative Binary Moment Diagrams
In this section, we give a short introduction how to represent
integer-valued functions by the proposed decision diagram
type *BMD. Details can be found in [2], [3].
*BMDs have been motivated by the necessity to represent
functions over Boolean variables having non-Boolean
ranges, i.e. functions f mapping
a Boolean vector onto integer numbers or onto rational
respectively. Here, we restrict ourselves to integer
functions.
For this class of functions the Boole-Shannon expansion
can be generalized with -
multiplication and addition re-
spectively. f - x is called the constant moment and f -
is called the linear moment of f with respect to variable
x (\Gamma denotes integer subtraction). Equation (1) is the
moment decomposition of function f with respect to variable
x.
*BMDs are rooted, directed, acyclic graphs with nonterminal
and terminal vertices. Each nonterminal vertex has
exactly two successors. The edges pointing to the successors
are named low- and high-edge and the successors themselves
are named low- and high-successor, respectively. The
nonterminal vertices are labeled with Boolean variables.
The terminal vertices have no successor and are labeled
with integer values. The low- (high-) edge of a nonterminal
node points to the constant (linear) moment of the
function represented by this node. The *BMD data structure
is reduced and ordered in the common way. Addition-
ally, a *BMD makes use of a common factor in the constant
and linear moment. It extracts this factor and places
it as a so called edge-weight on the incoming edge to the
node. This could lead to smaller representations. For ex-
ample, a *BMD with root vertex v labeled with x and
weight m on the incoming edge to v represents the function
In the following, we
denote the node v, labeled with a variable x simply as node
x, if the context is clear.
In this paper, we will consider integer edge-weights
and integer values of terminal vertices since we are interested
in integer linear functions only. To make the *BMD-
representation canonical, a further restriction on the edge-weights
is necessary. The edge-weights on the low- and
high-edge of any nonterminal vertex must have greatest
common divisor 1. Additionally, weight 0 appears only as a
terminal value, and if either outgoing edge of a node points
to this terminal node, the weight of the other outgoing edge
is 1. Note that because of reduction a terminal node with
value 0 can not appear at a high-edge.
2.1. Basic Operations on *BMDs
We now present in detail an algorithm, which is the basis
for verification by backward construction. The method of
backward construction is based on substituting variables in
a *BMD for a function f by a *BMD for another function
g. According to Equation (1), this substitution is based on
the
x .
A sketch of the substitution algorithm can be seen in Figure
1. It starts with a *BMD(f) and a *BMD(g), which is
a notation for the *BMD representing the function f and
g, respectively. Additionally, the variable x that is to be
substituted, is handed to the algorithm. If the top variable
of *BMD(f) is not the variable to be substituted, the
algorithm calls itself recursively with the low- and high-
successor. If the variable x is found, an integer multiplication
and an integer addition is performed (line 3). Note that
both operations have exponential worst case behavior [8].
In the following we show, that under specific conditions,
the worst case will not occur. The key to this is a *BMD
with a specific structure, which we call Sum Of weighted
Variables (SOV). The *BMD for the unsigned integer encoding
a is an example of a *BMD
in SOV-structure. Its outline is given in Figure 2. Such
subst( *BMD(f), x/*BMD(g) ) {
1    check for terminal cases;
2    if (var(f) == x)
3        return add( low(f), mult( high(f), *BMD(g) ) );
4    E1 = subst( low(f), x/*BMD(g) );
5    E2 = subst( high(f), x/*BMD(g) );
6    E3 = MakeNode( var(f) );
7    E3 = mult( E3, E2 );
8    E4 = add( E1, E3 );
9    return E4;
}
Figure 1. Substitution algorithm
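To make the effect of the substitution step concrete, the following Python sketch performs the same backward-construction substitution on a simple word-level polynomial representation (a dictionary mapping sets of Boolean variables to integer coefficients) rather than on *BMDs. The representation and function names are illustrative only; the sketch ignores the sharing, edge weights and normalization that make *BMDs efficient.

    def poly_mult(p, q):
        # Multiply two multilinear integer polynomials over Boolean variables
        # (monomials are frozensets of variable names, so x*x = x).
        out = {}
        for m1, c1 in p.items():
            for m2, c2 in q.items():
                m = m1 | m2
                out[m] = out.get(m, 0) + c1 * c2
        return {m: c for m, c in out.items() if c != 0}

    def poly_add(p, q):
        out = dict(p)
        for m, c in q.items():
            out[m] = out.get(m, 0) + c
        return {m: c for m, c in out.items() if c != 0}

    def substitute(p, x, g):
        # Replace variable x in polynomial p by polynomial g:
        # p = p_without_x + x * (linear moment of p w.r.t. x).
        keep = {m: c for m, c in p.items() if x not in m}
        rest = {frozenset(m - {x}): c for m, c in p.items() if x in m}
        return poly_add(keep, poly_mult(rest, g))

    # Toy example: f = x0 + 2*x1 (SOV form); replace x1 by the partial product a0*b1.
    f = {frozenset(["x0"]): 1, frozenset(["x1"]): 2}
    g = {frozenset(["a0", "b1"]): 1}
    print(substitute(f, "x1", g))   # x0 + 2*a0*b1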
Figure
2. SOV-structure
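The SOV form used as the starting point of backward construction is simply the weighted sum of the primary output variables. A minimal, self-contained Python sketch that builds this weighted-variable encoding for an unsigned output word is given below; the function name sov and the dictionary encoding are illustrative stand-ins for the actual *BMD data structure.

    def sov(output_vars):
        # Sum-of-weighted-variables form of an unsigned output word:
        # sum_i 2**i * y_i, represented as {variable: weight}.
        return {y: 2 ** i for i, y in enumerate(output_vars)}

    print(sov(["y0", "y1", "y2", "y3"]))   # {'y0': 1, 'y1': 2, 'y2': 4, 'y3': 8}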
a *BMD has the properties that there is exactly one nonterminal
node for each variable and the high-edge of each non-terminal
node is labeled with weight 1 pointing directly to
a terminal node labeled with the value of the corresponding
variable. Because of these properties the substitute algorithm
simplifies considerably: Let us assume f is a *BMD
in SOV, g is a *BMD not in SOV and the variable x is at an
arbitrary position in the middle of f . In a first phase lines
and 4 are in the middle of f . In a first phase lines 2 and
4 are executed continually until x is reached. Line 3 calls
a multiplication and an addition. The multiplication returns
its result immediately since high(f) is a terminal node. The
complexity of the addition depends on the *BMDs f and g.
But again, the SOV-structure reduces the number of execution
steps, since f -
y is a terminal node for all variables y.
Later on in the paper g will be a *BMD of known size, so
we can give a more detailed analysis. Returning from the recursive
calls of line 4, line 5 does no recursive calls because
of high(f). For the same reason line 7 ends immediately
and the *BMD E4 in line 8 has a depth of 1.
For these reasons, maintaining the SOV-structure as long
as possible reduces the number of execution steps of the
substitute algorithm. Since substitution is the basic operation
in the method of verification by backward construction
the overall number of execution steps reduces considerably,
too. In addition, the SOV-structure simplifies the analysis
of the complexity of the method extremely.
3. The Class of Wallace-Tree like Multipliers
Let be two n-bit
numbers binary representation. (If n
is not a power of 2 extend n to a power of 2 and set
the superfluous inputs to zero.) The multiplication of a
and b is then equivalent to summing up n partial products
shifted by j positions to the left
for We assume in the following, that we
can divide the circuit into two parts. One part calculates the
partial product bits a i 1. The other
part adds these partial products bits to the final result using
(only) fulladder-cells. How these cells are arranged, e.g. as
a 3to2 or as a 4to2 reduction, as a binary tree or as a linear
list, has no influence on the proof.
For a more detailed description of representatives of
Wallace-tree like multipliers please see [10], [4].
4. Complexity of Backward Construction
In this section we analyze the complexity of the method
of verification by backward construction, as introduced in
[9]. In Subsection 4.1 we explain the principles of the
method. In Subsection 4.2 the backward construction of the
*BMDs for the adder part of the circuit is analyzed. In Sub-section
4.3 we take the partial product bits into account and
discuss the resulting complexity. In the last subsection we
put together these parts to get the final overall complexity.
4.1. The Method of Backward Construction
In general, the method of verification by backward construction
works as follows: First, to each primary output a
distinct variable is assigned. In the next step the *BMD for
the output word is constructed by weighting and summing
up all *BMDs of these variables according to the given output
encoding. (We assume an unsigned integer encoding of
the outputs.) Note, that we obtain the SOV-structure for the
resulting *BMD. Then, a cut is placed, crossing all primary
outputs of the circuit. The cut is moved towards the primary
inputs, such that the output lines of each gate move onto the
cut according to some reverse topological order of the gates.
While moving the cut towards the primary inputs of the cir-
cuit, the *BMD is constructed as follows: For the next line
of the circuit crossing the cut, which is also an output line
of some gate of the circuit, find its corresponding variable
in the *BMD constructed so far. Substitute this variable by
the (output) function of the gate, delete the output line from
the cut, and add all input lines of the gate to the cut. Fi-
nally, the cut has been moved to the outputs of AND gates,
computing the partial product bits. The very final step is
the substitution of corresponding variables in the obtained
*BMD by the partial product bits. After this step, we obtain
the *BMD representing the multiplication of a and b if
the circuit is correct. As mentioned above, the cut is moved
according to a reverse topological order of the gates. In
general, there exists more than one such order. As we shall
see, our proofs are not based on the knowledge of a specific
reverse topological order, as long as we substitute the variables
representing the fulladder outputs before the variables
representing the initial partial product bits. Nevertheless,
we can give one specific class of variable orders, for that
our statements hold even if we apply a different topological
order.
4.2. Constructing the *BMD for the Adder
Part
In this subsection we analyze the costs of constructing
the *BMD for the adder-part of the multiplier. The
first lemma shows, that the substitution of the sum- and
carry-output of the same fulladder in a *BMD in SOV
maintains the SOV-structure. This means, that SOV is invariant
against these substitutions. For that reason, we
have to analyze the substitution costs for a single fullad-
der (Lemma 4.2) to get the overall costs (Theorem 4.1 and
Theorem 4.2).
Lemma 4.1 Let F be a *BMD in SOV and let X denotes
the set of variable of F . Let x be two variables in
representing the sum- and carry-output of the same
fulladder. The inputs to the fulladder are represented by
. Independent of the order on the variable
sets of the *BMDs involved in the substitution process, it
holds that substituting x F by the *BMDs for the
functions
and:
1. F 0 is in SOV.
2. The terminal value of the high-successors of the nodes
for is the same as that for the high-
successor of the node for x i in F .
Proof: The proof is based on a merely functional argu-
ment. For the sum- and carry-output we get the following
functions:
\Gamma2x l xm
by expressing the boolean operations \Phi and - by integer
addition, subtraction and multiplication, i.e., x \Phi
Let the variables x in the *BMD F represent the
sum- and carry-output of the same fulladder. These variables
have weights w and 2w, respectively. The substitution
of x
yields the following function f 0
as can easily be verified. The rest of the variables in f are
not affected since x k ; x l ; xm 62 X and the weighted variables
only once in f . Therefore the *BMD
representing function f 0 , must be in SOV again, independent
of the variable order. Furthermore all three high-
edges of the nodes for x k ; x l ; xm in F 0 point to a terminal
node with value w. This completes the proof. 2
Lemma 4.1 we can be sure, that the SOV-structure
of the *BMD is maintained, if we have fulladders as basic
cells and if we substitute the variables corresponding to both
outputs of a particular fulladder directly one after another.
This is independent of the chosen variable order and the
chosen reverse topological order of the DAG representing
the adder part of the multiplier. Additionally, at each point
of the substitution process, we know exactly the size of the
x
x
Figure
3. Representation of substituting x i by
Su and x j by Ca.
x
x
Figure
4. *BMD after substitution of variable
*BMD: After having processed the next fulladder, the size
is increased by one.
We proceed now with calculating the costs by counting
the calls of the substitute algorithm.
Lemma 4.2 Let F be a *BMD in SOV with size
defined as in Lemma 4.1. Substituting
is bounded by O(jF with respect to time, independent of
the variable order or order of substitutions.
Proof: We denote the *BMDs for
by Su and Ca. We consider first the
substitution of x i by Su. The substitution process can
be visualized in Figure 3. (Note that the order between
does not matter because the sum- and carry-
function are totally symmetric. Therefore, the nonterminal
node labels are omitted.) The substitute-algorithm calls itself
recursively until it reaches the node in F labeled with
variable x i . Obviously, the number of recursive subst-calls
is bounded by O(jF j). At the node labeled with x i , the
operation F low(x has to be carried out,
the *BMDs to which the
low- and high-edge of node x i point. Since F is in SOV,
F high(x i ) is a terminal node and the call to mult ends im-
mediately. Since F low(x i ) is in SOV, too, and Su is of constant
size, the addition is bounded by O(jF j). For all other
recursive subst-calls, there will be only a constant number
of mult- and add-calls because of the SOV-structure, as
can easily be verified. We get an overall bound of O(jF
for the substitution of x i by Su. The *BMD after this
substitution can be seen in Figure 4 for a variable order
For the substitution of x j by Ca, analogous arguments
hold, except for the fact, that the SOV-structure of F is destroyed
by Su (see Figure 4). First of all, this leads to some
additional recursive subst-calls. But this will be only a constant
because Su has constant depth. Furthermore,
we get some additional calls to add and mult during some
of the recursive subst-calls, when the nodes labeled with
variables x k ; x l ; xm meet each other. But this number is
bounded by a constant value too, since Su and Ca have
constant depth.
Therefore, since the resulting *BMD is of size jF j
according to Lemma 4.1, we get a time bound of O(jF
the substitution of x i by Su and x j by Ca independent of
the variable order on X and (X n fx g.
The proof for first substituting x j by Ca and then x i by Su
is analogous. 2
Lemma 4.1 and 4.2 leads directly to the space bound for
the construction of the *BMD for the adder part.
Theorem 4.1 Constructing the *BMD for the adder part of
the multiplication circuit using substitution is bounded by
respect to space. This is independent of the
chosen reverse topological order for the fulladder-cells.
Proof: The *BMD F 0 with which we start has size O(n)
as follows since there is one nonterminal node for each variable
representing a primary output. The resulting *BMD
constructed for the adder-part has size O(n 2 One nonterminal
node for each variable representing an initial partial
product bit. The exact size depends on the chosen realization
of the multiplier. With Lemma 4.1 and 4.2 we have the
space bound O(n 2 ). 2
After analyzing the space requirements for the substitu-
tions, we now consider the time requirements.
Theorem 4.2 Constructing the *BMD for the adder-part
using substitution is bounded by O(n 4 ) with respect to time,
independent of the chosen reverse topological order for the
fulladder-cells.
Proof: For the proof we first count the number of ful-
ladder elements. Depending on the realization, the exact
number of these elements differs. Asymptotically there are
cells, forming the adder-part of any
meaningful multiplier. Therefore the number of execution
steps has an upper bound of
the size of the initial *BMD according to the proof of Theorem
4.1. This sum is figured out as follows:
with
4.3. Substitution of the Partial Products
Up to this point we analyzed the complexity of the
method of verification by backward construction for the part
of the multiplier circuit that adds the initial partial product
bits to obtain the result of the multiplication. In the sequence
we analyze the costs of the final step, substituting
the partial product bits into the *BMD. By doing so, we
Figure
5. *BMD after substituting some partial
product bits.
Figure
6. Final *BMD for the multiplication.
will destroy the SOV-structure of the *BMD. Our starting
point is the *BMD constructed up to the outputs of the AND
gates. It has size m since there are n 2 partial
product bits. In fact, it holds
with m from the proof of Theorem 4.2 and with #FA denoting
the number of fulladder cells.
We assume in the following a fixed order among the aand
b-word, assigned to the primary inputs of the circuit. In
fact, the variable order within the a i and b j is of no interest
as long as all variables forming one input word are before
the variables forming the second input word. Otherwise the
final *BMD may be larger than 2n [2].
We define a low-path of a *BMD F as the path in F from
the root to a leaf, consisting of only low-edges. Note, that
there is only one such path in a *BMD.
We now show, that the intermediate *BMDs have a structure
like that in Figure 5. (The small box on an edge denotes
the multiplicative factor.) All nodes with a terminal
high-successor in the upper diagonal line, the low-path,
are marked with a variable x k , assigned to a fulladder input
line. These lines are also output lines of AND gates.
Nodes with a nonterminal high-successor are labeled with
a variable a i (also in the low-path). The high-successors of
them are labeled with a variable b j . These are located in the
lower diagonal line. The final *BMD is structured as shown
in
Figure
6.
Theorem 4.3 Let F be the *BMD constructed for the adder
part. Substitution of the variables of F by the *BMDs for
the initial partial product bits a i \Delta b j is bounded by O(n 2 )
with respect to space and O(n 4 ) with respect to time independent
of the variable order in F and independent of the
chosen reverse topological order for the AND gates.
Proof: Let F 0 denote an intermediate *BMD generated after
the substitution of some initial partial product bits. If the
node labeled with variable x has not yet been substituted, it
must be on the low-path of F 0 . Now we consider the substitution
of node x by the *BMD for an initial partial product
bit a i Obviously, the number
of recursive calls to the substitute algorithm (Figure 1) is
bounded by O(jF 0 j). During the last of the recursive subst
calls, i.e., at node x, the following operations have to be
carried out:
The call to mult ends immediately, since x has a terminal
high-successor because of the SOV-structure of F at the beginning
of the substitution process. For the add calls we
have to distinguish two different cases.
1. No nodes have been substituted by initial partial products
bits with variable a i until now. Since node x (one non-terminal
node) has to be substituted by \LambdaB MD(a i
nonterminal nodes) the size of F 0 increases by 1 if no node
for variable b j can be shared, and it remains unchanged otherwise
If variable a i comes after the predecessor of variable x in
the variable order, \LambdaB MD(a i reaches its final position
in *BMD F 0 by calls to add during the final substitute call,
i.e.line 3 of Figure 1. The number of these calls is obviously
bounded by O(jF 0 j).
If variable a i comes before the predecessor of variable x
in the variable order, a i reaches its final position in *BMD
F 0 by calls to add during resolving previous substitute calls
(line 8 of Figure 1). The number of these add calls is con-
stant, as one can easily make sure. Furthermore, there is
one call to MakeNode and one call to mult for each of the
previous subst calls (lines 6 and 7 of Figure 1).
2. There exists already a node for variable a i , i.e., some
nodes of F have already been substituted by initial partial
product bits a i
so far. If there exists also a
node for b j , e.g. created during the substitution of an initial
partial product bit a l \Delta b j , the size of F 0 during the substitution
may decreases by 1, if the node b j can be shared, or
remains unchanged, otherwise.
The only difference to the first case is, that there are an
additional number of at most n add calls, because of the
position of b j in the variable order among the b i 1
(the worst case occurs, if b
we have a bound on the total number
of calls of O(jF
Cases 1 and 2 together give a bound on the total number
of algorithm calls of O(|F'|) for the substitution of node x
by *BMD(a_i · b_j). Since we have at most an increase
of n in the size of the starting *BMD F (the first n initial
partial product bits of the substitution process all have different
a-variables) and the size of F is O(n^2), the size of
the intermediate *BMDs during substitution is bounded by
O(n^2). This proves the first part of the theorem. Furthermore,
since the number of substitutions is bounded by
O(n^2) and the size of the starting *BMD F is O(n^2), we
get an overall time bound of O(n^2) · O(n^2) = O(n^4).
This proves the second part of the theorem and we have
completed the proof. □
4.4. Complexity of Backward Construction
With Theorems 4.1, 4.2 and 4.3 we conclude that the
method of backward construction, applied to the class of
Wallace-tree like multipliers, is bounded by O(n^2) with respect
to space and by O(n^4) with respect to time. These
bounds are (largely) independent of the variable order chosen
during the single substitution steps. Additionally, these
bounds also do not depend on the order of substitutions, as
long as we first substitute all fulladder cells and afterwards
the initial partial product bits.
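As an illustration of the counting argument behind these bounds, the following
self-contained sketch (our own simplification, with an illustrative starting size
rather than exact node counts) tracks an upper bound on the intermediate *BMD
size while the n^2 partial product bits are substituted after the adder part:

```python
# Illustrative only: the starting size n*n stands for the O(n^2)-size *BMD F of
# the adder part, and each partial-product substitution is assumed (as in the
# proof of Theorem 4.3) to enlarge the diagram by at most one node, and only
# when it introduces a fresh a-variable.

def intermediate_size_bound(n, start_size=None):
    """Upper bound on the *BMD size while substituting the n^2 partial products."""
    if start_size is None:
        start_size = n * n
    size = start_size
    fresh_a_vars = set()
    worst = size
    for i in range(n):            # partial product bit a_i * b_j
        for j in range(n):
            if i not in fresh_a_vars:
                fresh_a_vars.add(i)   # only the first bit with a new a-variable
                size += 1             # can enlarge the diagram (by at most 1)
            worst = max(worst, size)
    return worst                      # stays within start_size + n = O(n^2)

if __name__ == "__main__":
    for n in (4, 8, 16):
        print(n, intermediate_size_bound(n))
```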
If we allow one further restriction on the variable ordering,
we become totally free in the substitution ordering
across both circuit parts. This means there is a class of
variable orders with which we can substitute an initial partial
product bit before having processed all fulladder cells.
Nevertheless, the complexity remains polynomial. We only
have to respect the reverse topological order over the whole
multiplier circuit.
We define an (x, a, b) variable order to be any variable
order which has first all x-variables, second all a-variables
and third all b-variables. The x-variables denote
the fulladder in- and outputs, and the a- and b-variables denote
the variables of the input words to the multiplier. Then
we can give the following theorem:
Theorem 4.4 Given an (x, a, b) variable order, the method
of backward construction, applied to the class of Wallace-tree
like multipliers, is bounded by O(n^2) with respect to
space and O(n^4) with respect to time, independent of the
reverse topological order on the circuit.
Proof: The *BMD decomposes into two parts. The upper
part consists of x-variables only and is in SOV (apart from the
fact that the last high-edge points to a nonterminal node
labeled with an a-variable). The lower part consists of a- and
b-variables. We have to consider two cases:
1. When substituting Su (Ca), we first have to find the substituted
variable. It will be found in the upper part of
the *BMD. Depending on the variable ordering within
the x_k, the add operation of the substitution continues
downward in the *BMD. At most, it reaches the
edge pointing to a_0 (respectively any a_i, whichever
is the 'smallest' a-variable so far). There, the recursive
calls terminate, since x_k < a_i for all k, i. Theorems
4.1 and 4.2 can be applied for the costs, considering
only the size of the upper part of the *BMD.
2. When substituting a partial product, Theorem 4.3 can be applied.
One open problem is the complexity if we allow initial
partial product bits to be substituted before all fulladder
cells are processed, but with a different variable order than
that of Theorem 4.4, i.e., not all x-variables at the
beginning of the variable order. We expect that the complexity
still remains polynomial, since the structure of the
resulting *BMD is similar to the one used here.
5. Conclusions and Future Research
In this paper we analyzed the complexity of the method
of verification by backward construction applied to the class
of Wallace-tree like multipliers. We gave a formal proof of
polynomial upper bounds on run-time and space requirements
with respect to the input word size for that method.
Note that, until now, only experimental data had been given
to show the feasibility of the method.
One conclusion of our results is that we can detect design
errors early by watching the *BMD, if these errors
result in incorrect *BMD sizes. Assume we have processed
only gates in the adder part of the circuit. After processing
the i-th gate, the *BMD has a size that is known in advance;
if the actual *BMD does not have that size, there is an error. Assume
now that we have also considered some initial partial product
bits of the circuit. Then the size of the *BMD depends on
which initial partial product bits have been substituted. Without
using an accurate size bound, the *BMD must not be
larger than |F'| + #FA + n or smaller than 2n, at any time.
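A minimal sketch of such a size check follows; the monitor function and its
parameters are ours, and the exact expected sizes in the adder phase, which
would come from Theorems 4.1 and 4.2, are only stubbed:

```python
# Hypothetical monitor (not from the paper) for the early error detection idea:
# during the adder phase the size is predictable exactly; once partial products
# are involved, only the coarse window 2n <= size <= |F'| + #FA + n is checked.

def check_size(observed_size, phase, i, n, num_fulladders, f_size,
               expected_size_after_adder_gate=None):
    """Return True if the observed *BMD size is still plausible."""
    if phase == "adder":
        if expected_size_after_adder_gate is not None:
            # exact check: after the i-th adder gate the size is known in advance
            return observed_size == expected_size_after_adder_gate(i)
        return True
    # partial-product phase: coarse window from the bound quoted above
    return 2 * n <= observed_size <= f_size + num_fulladders + n
```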
Future research directions are to remove the restrictions
on the variable order mentioned above, and to take a look
at integer dividers, which could not be verified by backward
construction so far.
--R
HSIS: A BDD-Based Environment for Formal Verification
Verification of Arithmetic Functions with Binary Moment Diagrams.
Verification of Arithmetic Circuits with Binary Moment Diagrams.
An Easily Testable Optimal-Time VLSI- Multiplier
On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Application to Integer Multiplication.
Hybrid Decision Diagrams - Overcoming the Limitations of MTBDDs and BMDs
Note on the Complexity of Binary Moment Diagram Representations.
Efficient Construction of Binary Moment Diagrams for Verifying Arithmetic Cir- cuits
Recursive Implementation of Optimal Time VLSI Integer Multipliers.
A suggestion for a fast multiplier.
--TR
Graph-based algorithms for Boolean function manipulation
On the Complexity of VLSI Implementations and Graph Representations of Boolean Functions with Application to Integer Multiplication
Sequential circuit verification using symbolic model checking
Symbolic Boolean manipulation with ordered binary-decision diagrams
HSIS
Verification of arithmetic circuits with binary moment diagrams
Efficient construction of binary moment diagrams for verifying arithmetic circuits
Hybrid decision diagrams
PHDD
Logic Synthesis and Verification Algorithms
K*BMDs
Verification of Arithmetic Functions with Binary Moment Diagrams | integer multipliers;formal verification;equivalence checking;backward construction;multiplicative binary moment diagrams |
608176 | A Compared Study of Two Correctness Proofs for the Standardized Algorithm of ABR Conformance. | The ABR conformance protocol is a real-time program that controls dataflow rates on ATM networks. A crucial part of this protocol is the dynamical computation of the expected rate of data cells. We present here a modelling of the corresponding program with its environment, using the notion of (parametric) timed automata. A fundamental property of the service provided by the protocol to the user is expressed in this framework and proved by two different methods. The first proof relies on inductive invariants, and was originally verified using theorem-proving assistant COQ. The second proof is based on reachability analysis, and was obtained using model-checker HYTECH. We explain and compare these two proofs in the unified framework of timed automata. | Introduction
Over the last few years, an extensive amount of research has been devoted to the
formal verification of real-time concurrent systems. Basically, formal proof methods
belong to two different fields: theorem proving and model-checking. For all
these methods, a first crucial phase is to build a formal description of the system
under study. With theorem proving, the description consists of a set of formulas,
and the verification is done using logical inference rules. With model-checking,
the description is a graph and the verification is performed using systematic
search in the graph. The theorem-proving methods apply to more general prob-
lems, but often need human interaction while model-checking methods are more
mechanical, but apply to a restricted class of systems. These methods thus appear
as complementary ones, and several authors advocated for the need to
combine them together [17, 21, 24]. This is now an exciting and ambitious trend
of research, but still a very challenging issue. We believe that a preliminary useful
step towards this objective is to evaluate comparatively the respective merits
and shortcomings of such methods, not only at a general abstract level, but on
difficult practical examples. From such a comparison, one may hopefully draw
some general lessons for combining methods in the most appropriate way. Besides,
the comparison may be interesting per se, as it may contribute to a better
understanding of the specific problem treated. We illustrate here the latter point
by performing a compared analysis of two correctness proofs obtained separately
for a sophisticated real-life protocol.
(Partially supported by Action FORMA (French Programme DSP-STTC/CNRS/MENRT) and RNRT Project Calife.)
More precisely, we propose a comparison between two proofs of the Available
Bit Rate (ABR) conformance algorithm, a protocol designed at France Telecom
[15] in the context of network communications with Asynchronous Transfer
Mode (ATM). The first proof [19, 18] was obtained in the theorem proving frame-work
using Floyd-Hoare method of inductive invariants and the proof assistant
Coq [7]. The second proof [8] was based on the method of reachability analysis,
and used the model-checking tool HyTech [14]. In order to compare these methods
more easily, we formulate them in the unified framework of "p-automata"
[8], a variant of parametric timed automata [5]. The choice of such a framework
(motivation, differences with other models, specific problems) is discussed at the
end of the paper (section 7).
Context and motivation of the case study. ATM is a flexible packet-switching
network architecture, where several communications can be multiplexed
over a same physical link, thus providing better performances than traditional
circuit-switching networks. Several types of ATM connexions, called ATM
Transfer Capabilities (ATC), are possible at the same time, according to the
dataflow rate asked (and paid) for by the service user. Each mode may be seen
as a generic contract between the user and the network. On one side, the network
must guarantee the negociated quality of service (QoS), defined by a number of
characteristics like maximum cell loss or transfer delay. On the other side, data
packets (cells) sent by a user must conform to the negociated traffic parameters.
Among other ATCs, Deterministic Bit Rate connexions operate with a constant
rate, while Statistical Bit Rate connexions may use a high rate, but only for limited periods of time.
In some of the most recently defined ATCs, like Available Bit Rate (ABR),
the allowed cell rate (Acr) may vary during the same session, depending on the
current congestion state of the network. Such ATCs are designed for irregular
sources, that need high cell rates from time to time, but may reduce their cell
rate when the network is busy. A servo-mechanism is then proposed in order to
let the user know whether he can send data or not. This mechanism has to be
well defined, in order to have a clear traffic contract between user and network.
The conformance of cells sent by the user is checked using an algorithm called
GCRA (generic control of cell rate algorithm). In this way, the network is protected
against user misbehaviors and keeps enough resources for delivering the
required QoS to well behaved users. In fact, a new ATC cannot be accepted (as
an international standard) without an efficient conformance control algorithm,
and some evidence that this algorithm has the intended behavior.
In this paper, we study the particular case of ABR, for which a simple "ideal"
algorithm of conformance control can be given. This algorithm is very inefficient,
in terms of memory space, and only approximation algorithms can be implemented
in practice. The correctness proof for these approximation algorithms
consists in showing that their outputs are never smaller than the outputs computed
by the ideal algorithm. More precisely, we focus on an algorithm, called B',
due to Christophe Rabadan at France-Telecom, now part of the I.371.1 standard
[15]. We describe this algorithm and prove its correctness with respect to
the ideal algorithm, using the two methods mentioned above.
The plan of the paper is as follows: section 2 gives an informal description of
the problem of ABR conformance control and section 3 presents an incremental
algorithm used henceforth as a specification. Section 4 describes a general modelling
framework, called "p-automata", and expresses the two different proof
methods in this context; the description of ABR algorithms as p-automata is
given in section 5. Verification with the two proof methods is done in section 6.
A discussion on constraints linked to the modelling with p-automata follows in
section 7, then a comparison between the two methods is given in section 8. We
conclude in section 9.
2 Overview of ABR
Fig. 1. Conformance control (figure: data cells and resource management cells
exchanged between user and network; DGCRA and the update of ACR at the interface)
An abstract view of the ABR protocol is given in Figure 1. The conformance
control algorithm for ABR has two parts. The first one is quite simple and is not
addressed here. It consists of an algorithm called DGCRA (dynamic GCRA),
which is an adaptation of the public algorithm for checking conformance of
cells: it just checks that the rate of data cells emitted by the user is not higher
than a value which is approximately Acr, the allowed cell rate. Excess cells may
be discarded by DGCRA. In the case of ABR, the rate Acr depends on time:
its current value has to be known each time a new data cell comes from the
user. Thus, the complexity lies in the second part: the computation of Acr(t)
("update" in Figure 1), where t represents an arbitrary time in the future, at
which some data cell may arrive. The value of Acr(t) depends on successive values
carried by resource management (RM) cells transmitted from the network to the
user 1 . Each such value corresponds to some rate Acr, that should be reached as
soon as possible.
2.1 Definition of ideal rate Acr(t)
We consider the sequence of values (R_i) carried by RM cells, ordered
by their arrival times (r_i). By a slight abuse of notation, the cell
carrying R_i will be called R_i. The value Acr(t) depends only on cells R_i
whose arrival time r_i occurs before t. Intuitively, Acr(t) should be the last value
R_i received at time t, i.e. R_n with n = last(t), where last(u) = max{i | r_i ≤ u}.
Unfortunately, because of electric propagation time and various transmission
mechanisms, the user is aware of this expected value only after a delay. Taking
into consideration the user's reaction time τ observed by the control device, that
is, the overall round trip time between the control device and the user, Acr(t)
should then be R_n with n = last(t − τ). But τ may vary in turn. ITU-T considers
that a lower bound τ3 and an upper bound τ2 for τ are established during the
negotiation phase of each ABR connection. Hence, a cell arriving from the user
at time t on DGCRA may legitimately have been emitted using any rate R_i
such that i is between last(t − τ2) and last(t − τ3). A rate less than or equal to
any of these values, or, equivalently, less than or equal to the maximum of them,
should then be allowed. Therefore, Acr(t) is taken as the maximum of these R_i.
Formally, if r_m, . . . , r_n are the successive arrival times of RM
cells such that m = last(t − τ2) and n = last(t − τ3) (in other words, m and n
depend on t), and if R_m, . . . , R_n are the corresponding rate values, then
the expected rate is Acr(t) = max{R_m, . . . , R_n}.
The case where m = n is obtained when no new RM cell has arrived between
t − τ2 and t − τ3.
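A direct, naive reading of this definition can be written down as follows (a
sketch of the specification only, not of any implementable conformance
algorithm; `cells` is a time-sorted list of (arrival time, rate) pairs, and the
default initial rate r0 is our assumption):

```python
def acr(t, cells, tau2, tau3, r0=0.0):
    """Ideal rate Acr(t): max rate among R_m, ..., R_n as defined above."""
    before = [rate for (r, rate) in cells if r <= t - tau2]
    window = [rate for (r, rate) in cells if t - tau2 < r <= t - tau3]
    base = before[-1] if before else r0      # R_m, the last value before t - tau2
    return max([base] + window)              # max of R_m, ..., R_n

# Example: with tau2 = 3, tau3 = 1 and cells (time, rate) = (0,5), (2,9), (4,7),
# acr(5.0, [(0, 5), (2, 9), (4, 7)], 3, 1) returns 9.
```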
A program of conformance control based on this specification would need
to compute, at each instant s, the maximum of the rate values of all the RM
cells received during the interval [s − τ2, s − τ3], which may be several hundreds on an
ATM network with large bandwidth. The ITU-T committee considered that an exact
computation of Acr(t) along these lines is not feasible at reasonable cost with
current technologies.
(1) Actually, RM cells are sent by the user, but only their transmission from the network
to the user is relevant here; details are available in [22].
2.2 Algorithm B': efficient computation of an approximation of Acr(t)
A more realistic algorithm, called B', due to C. Rabadan, has been proposed by
France-Telecom and adopted in I.371.1. It requires the storage of no more than
two RM cell rates at a time, and dynamically computes an approximation value
A of Acr(t). Two auxiliary variables, Efi and Ela, are used in the program for
storing these two RM cell rates, and two dates, tfi and tla, are associated with
Efi and Ela respectively. When the current time s reaches tfi, A is updated with
the value Efi, and only one RM cell rate is kept in memory (Efi:=Ela, tfi:=tla).
When a new RM cell R_k arrives at time r_k, the auxiliary variables Efi, Ela, tfi and tla
are updated according to several cases, depending on the position of r_k w.r.t.
tfi and tla, and of R_k w.r.t. Efi and Ela. The full description of B' is given in
section 5.3 (cf. pseudocode in appendix A).
Correctness of B'. Before being accepted as an international standard (norm
ITU I-371.1), algorithm B' had to be proved correct with respect to Acr(t): it
was necessary to ensure that the flow control of data cells by comparison with A
rather than Acr(t) is never disadvantageous to the user. This means that when
some data cell arrives at time t, A is an upper approximation of Acr(t).
3 Incremental computation of the rate Acr(t)
As explained above, the correctness of algorithm B' mainly relies on a comparison
between the output value A of B' and the ideal rate Acr(t). The initial
specification of Acr(t) given in section 2.1 turns out to be inadequate for automated
(and even manual) manipulation. Therefore, it is convenient to express
the computation of Acr(t) under an algorithmic form as close as possible to B',
where, in particular, updates are performed upon reception of RM cells. Monin
and Klay were the first to formulate such an incremental computation using a
higher-order functional point of view [19]. We recall their algorithm, then adopt
a slightly different view, which is more suited to the modelling framework described
subsequently.
3.1 A higher-order functional view: algorithm F
We now consider an algorithm, called F, that stores, at current time s, an
estimation E of Acr, and updates it at each arrival of an RM cell. More precisely,
when receiving a new cell R_k at time r_k, algorithm F computes the new function
E' from the former function E, depending on the situation of r_k with respect to
the argument t of E:
E'(t) = R_k if r_k + τ2 ≤ t,
E'(t) = max(E(t), R_k) if r_k + τ3 ≤ t < r_k + τ2,
E'(t) = E(t) if t < r_k + τ3.
It can be shown that the value E(t) at current time s is equal to the ideal
rate Acr(t), as defined in section 2.1, for each t such that t ≤ s + τ3. A
proof of this statement can be found in [19], although a much shorter one can
be deduced from the justification in section 3.2 below.
Algorithm F can thus be seen as a higher order functional program which
computes the (first-order) functions E. It might be implemented using the general
notion of "closures". However, the functions E are constant over time intervals
[a, b[, where a and b are of the form r_i + τ3 or r_i + τ2 and such that there is no value
of this form between a and b. Hence, they can be encoded using well chosen
finite lists of pairs (t, e), where t is a time and e is the rate E(t). Such lists
can be seen as schedulers for the expected rate: when current time s becomes
equal to t, the expected rate becomes e. The length of these schedulers may be
several hundreds on networks with large bandwidth (entailing a high frequency
of events R_k), even if only the relevant pairs (t, e) are kept. In contrast, algorithm
B' can be considered as using a scheduler of length at most two (containing the pairs
(tfi, Efi) and (tla, Ela)).
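The scheduler encoding can be sketched as follows; this is our illustrative
reading of algorithm F, not the code of [19], and the schedule is assumed to be
non-empty, starting with an entry that covers the initial time:

```python
def receive(schedule, r_k, R_k, tau2, tau3):
    """schedule: sorted list of (t, e); E(x) is the e of the last pair with t <= x."""
    def value_at(x):
        val = schedule[0][1]
        for (t, e) in schedule:
            if t <= x:
                val = e
        return val
    # before r_k + tau3 the function is unchanged
    kept = [(t, e) for (t, e) in schedule if t < r_k + tau3]
    # on [r_k + tau3, r_k + tau2) the rate is raised to at least R_k
    raised = [(t, max(e, R_k)) for (t, e) in schedule
              if r_k + tau3 <= t < r_k + tau2]
    return (kept
            + [(r_k + tau3, max(value_at(r_k + tau3), R_k))] + raised
            + [(r_k + tau2, R_k)])        # from r_k + tau2 on, E becomes R_k
```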
3.2 A parametric view: algorithm I_t
Another way of looking at F is to consider t as a parameter (written t) whose
value is fixed but unknown, and represents a target observation time. Function F
becomes an "ideal algorithm" I_t, which updates a value E(t), henceforth written
E_t, upon reception of RM cells as follows. When a new cell R_k arrives at time r_k = s:
E_t := R_k if s ≤ t − τ2,
E_t := max(E_t, R_k) if t − τ2 < s ≤ t − τ3,
E_t is left unchanged if s > t − τ3.
It is now almost immediate that the value E_t computed by I_t is equal to the
ideal rate Acr(t), as defined in section 2.1, when s = t. Indeed, as s increases,
the index of the last cell taken into account successively runs through the values
m, . . . , n of section 2.1, and E_t takes accordingly the values R_m,
max(R_m, R_{m+1}), . . . , max(R_m, . . . , R_n).
In particular, when s = t, we have E_t = max(R_m, . . . , R_n) = Acr(t).
The correctness property of B' with respect to Acr(t) can now be reformulated
as follows, where A is the output value of B' and E_t the value computed by
algorithm I_t when s = t:
A ≥ E_t.
This property is referred to as U_t and should be proved for all values of the parameter
t. Henceforth, the parameter t is left implicit and we simply write U instead
of U_t. Accordingly, we write E instead of E_t and I instead of I_t. From this point
on, algorithm I plays the rôle of specification.
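For reference, the parametric update can be transcribed directly into
executable form; this is a sketch under the paper's assumption 0 < τ3 < τ2, and
the case labels anticipate the names [I1]-[I3] used later in section 5.2:

```python
def I(s, R, E, t, tau2, tau3):
    """New value of E_t after an RM cell of rate R is received at current time s."""
    if s <= t - tau2:
        return R            # [I1]: the cell resets the estimate
    if s <= t - tau3:
        return max(E, R)    # [I2a]/[I2b]: the cell may raise the estimate
    return E                # [I3]: too late to influence Acr(t)

# Property U then reads: at observation time s = t, the value A maintained by
# algorithm B' must satisfy A >= E.
```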
4 Modelling Framework and Proof Methods
It should be clear from the informal definition of the control conformance algorithm
above, that the ability to reason about real time is essential. The
expressions A and E, involved in property U , denote quantities that evolve as
time goes, and should be considered as functions of the current time s. In order
to express and prove property U , we need a formal framework. The model of
p-automata which was chosen in this paper, turns out to be sufficient for our
purposes of formal description and verification of the considered system. We describe
this model hereafter as well as two proof methods for verifying properties
in this context.
4.1 p-automata
The model of parametric timed automata, called p-automata for short, is an
extension of timed automata [4] with parameters. A minor difference with the
classical parametric model of Alur-Henzinger-Vardi [5] is that we have only one
clock variable s and several "discrete" variables w_1, . . . , w_n while, in [5], there are
several clocks and no discrete variable. One can retrieve (a close variant of) Alur-
Henzinger-Vardi parametric timed automata by changing each discrete variable w_i
into s − w_i (see [12]). Alternatively, p-automata can be viewed as particular cases
of linear hybrid automata [2, 3, 20], and our presentation is inspired from [14].
The main elements of a p-automaton are a finite set L of locations, transitions
between these locations and a family of real-valued variables. In the figures, as
usual, locations are represented as circles and transitions as labeled arrows.
Variables and constraints. The variables of a p-automaton are: a tuple p
of parameters, a tuple w of discrete variables and a universal clock s. These
real-valued variables differ only in the way they evolve when time increases.
Parameter values are fixed by an initial constraint and never evolve later on.
Discrete variable values do not evolve either, but they may be changed through
instantaneous updates. A universal clock is a variable whose value increases
uniformly with time.
A (parametric) term is an expression of the form w + Σ_i α_i p_i + c
or Σ_i α_i p_i + c, where the α_i and c are
constants in Z. With the usual convention, an empty set of indices corresponds
to a term without parameters. A convex constraint is a conjunction of (strict
or large) inequalities between terms. A p-constraint is a disjunction of convex
constraints. An update relation is a conjunction of inequalities between a variable
and a term. It is written w' ⋈ term, where ⋈ is a comparison operator, w' is a
primed copy of a discrete variable and term is a parametric term. As usual, x' = x
is implicit if x' does not appear in the update relation.
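For concreteness, here are instances of these notions, chosen by us from the
model built in section 5:

```latex
\text{parametric terms: } \; tfi, \quad t - \tau_2, \quad s + \tau_3 \\
\text{a convex constraint: } \; (s \le t) \wedge (s \ge t - \tau_2) \\
\text{an update relation: } \; (A' = Efi) \wedge (tfi' = tla) \wedge (R' > 0)
```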
Locations and transitions. With each location ℓ ∈ L is associated a convex
constraint I_ℓ called the location invariant. Intuitively, the automaton control may reside
in location ℓ only while its invariant value is true, so invariants are used to enforce
progress of the executions. When omitted, the default invariant of a location is
the constant true.
A transition in a p-automaton is of the form ⟨ℓ, φ, a, ρ, ℓ'⟩, where a is the
label of the transition, ℓ the origin location, ℓ' the target location, φ a guard and
ρ an update relation. The guard is a convex constraint. Guards may additionally
contain the special expression asap. In our model, a location ℓ is called urgent
if all transitions with origin ℓ contain asap in the guard (no time is allowed
to pass in such a location). Otherwise, it is called stable. A sequence of transitions
is called complete if it is of the form ⟨ℓ, φ_0, a_0, ρ_0, ℓ_1⟩, . . . , ⟨ℓ_n, φ_n, a_n, ρ_n, ℓ'⟩,
where ℓ and ℓ' are stable locations and all intermediate
locations ℓ_i are urgent locations. With the usual convention, the case n = 0
corresponds to a single transition between stable locations.
Executions. The executions of a p-automaton are described in terms of a transition
system. A (symbolic) state q is defined by a formula (λ = ℓ) ∧ ψ(s, p, w),
where λ is a variable ranging over the set of locations, and ψ is a p-constraint.
We are primarily interested in states for which ψ implies the location invariant
I_ℓ. Such states are called admissible states. A p-zone, represented by a formula
Π(λ, s, p, w), is a finite disjunction of states. Alternatively it can be regarded
as a finite set of states. The initial state is q_init = (λ = ℓ_init) ∧ ψ_init(s, p, w), for
some p-constraint ψ_init and initial location ℓ_init. The initial location is assumed to
be stable. Since parameter values are fixed from the initial state, we often omit
the tuple p of parameters from the formulas. For a p-automaton, two kinds of
moves, called action moves and delay moves, are possible from an admissible state
q = (λ = ℓ) ∧ ψ(s, w):
- An action move uses a transition of the form ⟨ℓ, φ, a, ρ, ℓ'⟩, reaching an admissible
state q' = (λ = ℓ') ∧ ψ'(s, w'), where ψ'(s, w') is
equivalent to ∃w (ψ(s, w) ∧ φ(s, w) ∧ ρ(w, w')). Informally, discrete variables are modified
according to the update relation ρ and the automaton switches to the target
location ℓ'. This notion of action move generalizes in a natural way to the
notion of action move through a complete sequence of transitions. Note that
action moves are instantaneous: the value s of the clock does not evolve.
- A delay move corresponds to spending time in a location ℓ. This is possible
only if ℓ is stable and if the invariant I_ℓ remains true. The resulting admissible
state is obtained from q by increasing the clock value s by some delay d ≥ 0
(nothing else is changed during this time).
A successor of a state q is a state obtained either by a delay or an action
move. For a subset Q of states, Post*(Q) is the set of iterated successors of the
states in Q. It can be easily proved that the class of p-zones is closed under
the Post operation. As a consequence, Post*(Q) is a p-zone if Q is a p-zone
and the computation of Post* terminates. The notion of predecessor is defined in a
similar way, using the operator Pre*.
Synchronization. From two or more p-automata representing components of a
system, it is possible to build a new p-automaton by a synchronized product. Let
A_1 and A_2 be two p-automata with a common universal clock s. The synchronized
product (or parallel composition, see e.g. [14]) A_1 × A_2 is a p-automaton
with s as universal clock and the union of the sets of parameters (resp. discrete
variables) of A_1 and A_2 as set of parameters (resp. discrete variables). Locations
of the product are pairs of locations from A_1 and A_2 respectively; a pair is
stable when both its components are stable. The p-constraints associated with
locations (invariants, initial p-constraint) are obtained by the conjunction of
the components' p-constraints. The automata move independently, except when
transitions from A 1 and A 2 have a common synchronization label. In this case,
both automata perform a synchronous action move, the associated guard (resp.
update relation) being the conjunction of both guards (resp. update relations).
4.2 Proof methods
We now present two proof methods for proving a property Π(λ, s, w) in the
framework of p-automata. This property is assumed to involve only stable locations,
and to hold for all parameter valuations satisfying the initial p-constraint
of the modeled system. The first method, based on the Floyd-Hoare method of
assertions, consists in proving that Π is an inductive invariant of the model (see,
e.g., [27]). The second one, based on model-checking techniques, consists in characterizing
the set of all the reachable states of the system, and checking that no
element violates Π.
Inductive invariants. To prove Π by inductive invariance, one has to prove
that Π holds initially, and is preserved through any move of the system: either
an action or a delay move. Formally, we have to prove:
- For any transition ⟨ℓ, φ, a, ρ, ℓ'⟩ between two stable locations ℓ and ℓ':
Π(ℓ, s, w) ∧ φ(s, w) ∧ ρ(w, w') ⇒ Π(ℓ', s, w').
A similar formula must also be proved for complete sequences of transitions.
- For any stable location ℓ and any delay d ≥ 0:
Π(ℓ, s, w) ∧ I_ℓ(s, w) ∧ I_ℓ(s + d, w) ⇒ Π(ℓ, s + d, w).
Reachability analysis. Since Post*(q_init) represents the set of reachable states
of a p-automaton, property Π holds for the system if and only if Post*(q_init)
is contained in the set Q_Π of states satisfying Π. Equivalently, one can prove
the emptiness of the zone Post*(q_init) ∧ Q_¬Π, where Q_¬Π is the set of states
violating Π. Also note that the same property can be expressed using Pre*, by
requiring the emptiness of q_init ∧ Pre*(Q_¬Π).
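The reachability method can be summarized by the following generic worklist
sketch. It is ours, not HyTech's code: the symbolic operations post_delay,
post_action and intersects on p-zones are taken as parameters and are not
implemented here, and the loop is not guaranteed to terminate in general
(cf. section 8.3):

```python
def check_invariant(q_init, transitions, bad_zone,
                    post_action, post_delay, intersects):
    """Compute Post*(q_init) by a worklist fixpoint and test it against bad_zone."""
    reached = [q_init]
    worklist = [q_init]
    while worklist:
        state = worklist.pop()
        successors = [post_delay(state)] + \
                     [post_action(state, tr) for tr in transitions]
        for succ in successors:
            if succ is not None and succ not in reached:
                reached.append(succ)
                worklist.append(succ)
    # the property holds iff no reachable symbolic state meets the bad zone
    return not any(intersects(state, bad_zone) for state in reached)
```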
5 Description of the system
Algorithms I and B' will be naturally represented by p-automata. However, they
are reactive programs: they react when some external events occur (viz., upon
reception of an RM cell) or when the current time s reaches some value (e.g., tfi).
Thus, in order to formally prove the correctness property U, we need to model, as a
third p-automaton, an appropriate environment viewed as an event generator.
Finally, in the full system obtained as a synchronized product of the three
automata, we explain how to check the correctness property. All these p-automata
share a universal clock, the value of which is the current time s. Without loss
of understanding (context will make it clear), we use the same symbol s for the
clock and for its current value.
5.1 A model of environment and observation
As mentioned above, the p-automaton A_env modeling the environment (see Figure 2)
generates external events such as receptions of RM cells. It also generates a
"snapshot" action taking place at time t. Note that for our purpose of verification
of U, it is enough to consider the snapshot as a final action of the system. The
variables involved are the parameter t (snapshot time) and a discrete variable
R representing the rate value carried by the last received RM cell. In the initial
location Wait, a loop with label newRM simulates the reception of a new RM
cell: the rate R is updated to a nondeterministic positive value (written R' > 0,
as in HyTech [14]). The snapshot action has s = t as a guard, and location Wait
is assigned the invariant s ≤ t in order to "force" the switch to location EndE.
Fig. 2. Automaton A_env modeling arrivals of RM cells and the snapshot
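For illustration, the automaton of Figure 2 can be written out as plain data;
this encoding is ours and is not HyTech syntax, and guards, invariants and
updates are kept as strings in the notation of the text:

```python
A_env = {
    "locations": {
        "Wait": {"invariant": "s <= t", "urgent": False},
        "EndE": {"invariant": "true",   "urgent": False},
    },
    "initial": "Wait",
    "transitions": [
        # receiving an RM cell: R gets a nondeterministic positive value
        {"from": "Wait", "to": "Wait", "label": "newRM",
         "guard": "true",  "update": "R' > 0"},
        # the snapshot fires exactly when the clock reaches the parameter t
        {"from": "Wait", "to": "EndE", "label": "snapshot",
         "guard": "s = t", "update": ""},
    ],
}
```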
5.2 Algorithm I
Algorithm I computes E in an incremental way as shown in the table of section
3.2. Variable E is updated at each reception of an RM cell, until the current
time s becomes equal to t. More precisely, algorithm I involves variable R and
parameter t (in common with A_env) and, in addition:
- the two parameters τ3 and τ2 (representing the lower and upper bounds of the
transit time from the interface to the user and back),
- the "output" variable E (which equates with the ideal rate Acr(t) when s = t).
Initially, E and R are equal. Algorithm I reacts to each arrival of a new RM cell
with rate value R by updating E. There are three cases, according to the position
of its arrival time s with respect to t − τ2 and t − τ3 (see section 3.2):
1. If s ≤ t − τ2, E is updated to the new value R: E' = R.
2. If t − τ2 < s ≤ t − τ3, the new rate becomes E' = max(E, R). To avoid using
the function max, this computation is split into two subcases: E' = E if R ≤ E,
and E' = R otherwise.
3. If s > t − τ3, the rate E is left unchanged: E' = E.
Algorithm I terminates when the snapshot takes place (s = t). In the following,
we will sometimes write the updated output value E' under the "functional"
form I(s, R, E).
Automaton A_I. Algorithm I is naturally modeled as the p-automaton A_I (see
Figure 3). The initial location is Idle, with initial p-constraint E = R. The reception
of an RM cell is modeled as a transition newRM from location Idle to location
UpdE. This transition is followed by an urgent (asap) transition from UpdE back
to Idle, which updates E depending on the position of s w.r.t. t − τ2 and t − τ3,
as explained above. Without loss of understanding, the transitions from UpdE to
Idle are labeled [I1], [I2a], [I2b], [I3], like the corresponding operations.
Observation of the value E corresponds to the transition snapshot from Idle to
the final location EndI.
5.3 Algorithm B': computation of an approximation of Acr(t)
We now give a full description of algorithm B' (cf. pseudo-code in the appendix).
Like I, algorithm B' involves the parameters τ3 and τ2 and the variable R. However, note
that t is not a parameter for B'. It computes A (intended to be an approximation
of E) using five specific auxiliary variables:
- tfi and tla, which play the role of first and last deadline respectively,
- Efi, which is the value taken by A when the current time s reaches tfi,
- Ela, which stores the rate value R carried by the last received RM cell,
- Emx, a convenient additional variable, representing the maximum of Efi and Ela.
Initially, s = tfi = tla, and the other variables are all equal. Algorithm B' reacts
to two types of events: "receiving an RM cell" (which is an event in common
with I), and "reaching tfi" (which is an event specific to B').
Fig. 3. Automaton A_I
Receiving an RM cell. When, at current time s, a new RM cell with value R
arrives, the variables are updated according to the relative positions of s + τ3
and s + τ2 with respect to tfi and tla, and those of R with respect to Emx
and A. There are eight cases, from [1] to [8] (with two subcases for [1]); for
instance:
[3] if s < tfi and Emx ≤ R and tfi ≥ s + τ3 and A ≤ R then ...
[6] if s < tfi and Emx > R and R ≥ Ela then ...
[8] if s ≥ tfi and A > R then ...
(the complete list, together with the corresponding updates, is given in the
pseudocode of appendix A).
Reaching tfi. When the current time s becomes equal to tfi, the approximate
current rate A is updated to Efi while Efi is updated to Ela and tfi
is updated to tla (operation [9]): A := Efi; Efi := Ela; tfi := tla.
When the events "reaching tfi" (s = tfi) and "receiving an RM cell" simultaneously
occur, operation [9] must be performed before operations [1], . . . , [8]
(accounting for the RM cell reception).
Like I, algorithm B' terminates at snapshot time (s = t). If the snapshot occurs
simultaneously with reaching tfi, operation [9] must be performed before
the termination of B'.
Note that the ordering of s, tfi and tla just after operation [9] depends on
the respective positions of tfi and tla at the moment of performing [9]. In case
(s =) tfi = tla, one still has s = tfi = tla just after performing [9], then s becomes
greater than tfi = tla as time increases (until an RM cell occurs or s = t). In case
(s =) tfi < tla when performing [9], one has s < tfi = tla immediately after.
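The state manipulated by B' and the deadline operation [9] can be sketched as
follows; only [9] is written out, since the eight RM-cell cases are specified in
the appendix and are not reproduced here, and the class layout is ours:

```python
class BPrimeState:
    def __init__(self, s, rate):
        # initially s = tfi = tla and all rate variables are equal
        self.tfi = self.tla = s
        self.A = self.Efi = self.Ela = self.Emx = rate

    def reach_tfi(self):
        """Operation [9]: fired when the current time s reaches tfi."""
        self.A = self.Efi      # the scheduled rate becomes the current one
        self.Efi = self.Ela    # only one scheduled rate remains in memory
        self.tfi = self.tla

    def on_rm_cell(self, s, R, tau2, tau3):
        """Placeholder for cases [1]..[8]; see the pseudocode in appendix A."""
        raise NotImplementedError
```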
Automaton A_B'. In order to implement the higher priority of operation [9]
over the other operations in case of simultaneous events, it is convenient to
distinguish the case where s is greater than tfi from the case where s ≤ tfi. To
that end, we introduce two locations Greater and Less. Operation [9] always
occurs at location Less, but the target location depends on whether tfi = tla
(subcase [9a]) or tfi < tla (subcase [9b]).
The p-automaton A_B' is represented in Figure 4 with only the most significant
guards and no update information. As before, the same labels are used for
automaton transitions and the corresponding program operations.
Fig. 4. Approximation automaton A_B' (stable locations Greater and Less, urgent
locations UpdAG and UpdAL; transitions [9a] with guard s = tfi = tla, [9b] with
guard s = tfi < tla, newRM followed by [1], [2], . . . , [6] or by [7], [8], and snapshot)
Initially A_B' is in Greater, with p-constraint s = tfi = tla ∧ A = Efi = Ela = Emx = R.
Location Less has s ≤ tfi as an invariant, in order to force the execution of transition
[9b] (if tfi < tla) or [9a] (if tfi = tla) when s reaches tfi. From Less, transition
[9b] goes back to Less (since, after the update, s < tfi = tla) while transition
[9a] switches to Greater (since s ≥ tfi = tla as time increases). The reception
of an RM cell corresponds to a transition newRM. There are two cases depending
on whether the source location is Less or Greater. From Less (resp.
Greater), transition newRM goes to location UpdAL (resp. UpdAG). This transition
is followed by an urgent transition from UpdAL (resp. UpdAG) back to
Less, which updates the discrete variables according to operations [1], . . . , [6]
(resp. [7], [8]), as explained above. Note that transition newRM from Less
to UpdAL has an additional guard s < tfi in order to prevent an execution of
newRM before [9a] or [9b] when s = tfi (which is forbidden when "reaching
tfi" and newRM occur simultaneously).
Like before, observation is modeled as a transition snapshot from location
Less or Greater to EndB. Also note that transition snapshot from Less to
EndB has guard s < tfi in order to prevent its execution before [9a] or [9b]
when s = tfi (which is forbidden when "reaching tfi" and the snapshot occur
simultaneously).
5.4 Synchronized product and property U
The full system is obtained as the product automaton T = A_env × A_I × A_B' of
the three p-automata above, synchronized by the labels newRM and snapshot.
The action moves occur when the current time reaches tfi or t, or upon reception
of an RM cell (newRM). In this last case, the return to a stable location is
obtained by a complete sequence of transitions: newRM followed by transitions
[I1], [I2a], [I2b] or [I3] in A_I and [1], . . . , [6] or [7], [8] in A_B'.
Recall that property U expresses, in terms of the ideal rate E computed by
I and the approximate value A computed by B': when s = t, A ≥ E. In
our model T, the snapshot action occurs as soon as s = t, and makes the automaton
switch to its final location ℓ1 = (EndE, EndI, EndB). Henceforth we write ℓ+ and ℓ−
respectively for locations (Wait, Idle, Greater) and (Wait, Idle, Less).
Actually, the property A ≥ E does not hold in all locations of T when s = t. This is
due to the necessary completion of all the actions in case of simultaneous events.
Thus, at location ℓ−, when s = t = tfi, one may have A < E just before the treatment
of [9]. However, in location ℓ1 all the appropriate
actions are completed. Property U therefore states as follows:
(λ = ℓ1) ⇒ A ≥ E.
Since location ℓ1 is reached when s = t, and no action then occurs, an equivalent
statement of U requires A ≥ E at the stable locations ℓ+ and ℓ− when s = t,
once all pending operations (in particular [9]) have been performed.
6 Verification of correctness
6.1 Verification with inductive invariants
In order to prove U, we are going to prove that Inv ≡ U ∧ Aux_1 ∧ · · · ∧ Aux_10
is an inductive invariant of the system, where the Aux_i are auxiliary properties
of the system. Some of these auxiliary properties (viz., Aux_3 and others)
involve an additional variable r, which represents the reception date of the last
RM cell. (Such variables, which record some history of the system execution without
affecting it, are called "history" variables [1, 27].) In our model, this can be easily
implemented by introducing a discrete variable r in the environment automaton,
and updating it with the current time value s (r' = s) whenever event newRM occurs.
The enriched automaton A_env is represented in Figure 5.
Fig. 5. Enriched automaton A_env modeling arrivals of RM cells and the snapshot
More precisely, let Aux_1, . . . , Aux_10 denote these auxiliary properties. Inv is
proved by inductive invariance, i.e. by showing
that it holds initially and is preserved through any transition corresponding
to either a complete action move or a delay move. The stable locations are ℓ+,
ℓ− and ℓ1. The action moves starting from and leading to one of these locations
are those associated with the reception of an RM cell, the reaching of tfi, or the
snapshot. In the case of RM cell reception, there are several subcases depending
on the complete sequence of actions in A_B' (newRM followed by [1a], [1b],
[2], . . . , [8]) and, subsidiarily, on the sequences in A_I (newRM followed
by [I1], [I2a], [I2b] or [I3]). We now give in detail the statements involved in the
proof. Variables of Inv are explicitly mentioned by writing Inv(λ, s, w), where
w is the vector (E, Efi, Ela, Emx, A, R, tfi, tla). Note that, provided an
encoding of locations as integers (e.g., 0, 1, 2), all these statements
are linear arithmetic formulas over the reals (involving the variables λ, s, w) and
can be proved just by arithmetic reasoning. This can be done automatically
with an arithmetic theorem prover. Such a proof was actually done using Coq
[18], then was reformulated in the present context, after encoding p-automata in
Coq. Let us recall that, in Coq, the user states definitions and theorems, then
proves the latter by means of scripts made of tactics. Scripts are not proofs,
but produce proofs, which are data (terms) to be checked by the kernel of the
proof assistant. Some tactics used for the ABR were described in [18]. The script
written for the ABR is about 3500 lines long and required about 4 man-months
of work. A crucial part of the human work consisted in identifying the relevant
invariants. Around two hundred subproofs were then automatically produced
and checked. The whole proof check takes 5 minutes on a PC 486 (33 MHz)
under Linux.
In order to give a flavour of the proof structure, we now give some typical
statements to be proved in the case of action and delay moves.
Action moves. As an example, consider the reception of an RM cell where the
subsequent action of A_B' is [1b]. This corresponds to a complete sequence of
transitions from location ℓ− to itself. We have to prove that Inv(ℓ−, s, w), together
with the guards and update relations of this sequence, implies Inv(ℓ−, s, w').
The conclusion is the conjunction of the primed counterparts U', Aux'_1, . . . ,
Aux'_10 of the conjuncts of Inv.
For example, let us show how to prove Aux'_9, i.e. E' ≤ Efi' under the
corresponding assumption on tfi. By Aux_10, we have E ≤ Ela (since tla ≤ t). By Aux_8, we have
Ela ≤ Efi (since tfi = tla). Hence, by transitivity, E ≤ Efi. On the other hand,
E' = E by [I3] (since, in this case, s > t − τ3), which yields the desired
inequality. All the other cases
are proved similarly to this one, by case analysis, use of the transitivity of ≤, <, =,
and regularity of + over these relations.
Delay moves. They take place at the stable locations ℓ+, ℓ− and ℓ1. The corresponding
properties to be proved are the three implications obtained by instantiating the
delay condition of section 4.2 at these locations. These formulas easily reduce to
implications whose conclusion is A ≥ E;
the first (resp. second, third) implication is true because this conclusion
follows from the hypothesis using the Aux_1-conjunct (resp. Aux_2-conjunct, U-
conjunct) of Inv.
6.2 Verification by reachability analysis
In order to mechanically prove property U, we have to compute Post* for the
product automaton T, starting from its initial state q_init = (λ = ℓ+) ∧ ψ_init,
where ψ_init is the p-constraint s=tfi=tla ∧ R=E=A=Efi=Ela=Emx ∧ 0 < τ3 < τ2.
We then have to check that Post*(q_init) does not contain any state where the
property U is violated. Recall that property U can be stated as: (λ = ℓ1) ⇒ A ≥ E.
The state where U does not hold is then q_¬U = (λ = ℓ1) ∧ A < E.
Automata A_env, A_I and A_B' can be directly implemented in HyTech [14],
which automatically computes the synchronized product T. The modelling of
the protocol and property as p-automata and the encoding in HyTech required
about 3 man-months of work. The forward computation of Post*(q_init) terminates
after a finite number of iteration steps, and its intersection with q_¬U is checked to be empty. (This
takes 8 minutes on a SUN station ULTRA-1 with 64 Megabytes of RAM memory.
See Appendix B for a display of the generated p-zone at ℓ1.) This achieves an
automated proof of correctness of B'. Such a proof first appears along these lines
in [8]. Note that HyTech can provide as well a proof by backward reasoning
(using Pre* instead of Post*).
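As a cross-check independent of HyTech (our suggestion, not used in the paper),
each disjunct of the generated p-zone at location ℓ1 could be handed to an
off-the-shelf SMT solver together with the negated property A < E; every call
must return unsat. A sketch with the Z3 Python API, on a made-up disjunct
rather than one taken from Appendix B:

```python
from z3 import Reals, Solver, And, unsat

s, t, tau2, tau3, A, E, Efi, Ela, tfi, tla = \
    Reals("s t tau2 tau3 A E Efi Ela tfi tla")

def disjunct_respects_U(disjunct_constraints):
    solver = Solver()
    solver.add(And(*disjunct_constraints))   # one disjunct of Post*(q_init) at l1
    solver.add(A < E)                        # negation of U at location l1
    return solver.check() == unsat

# illustrative disjunct (made up for the example):
example = [s == t, 0 < tau3, tau3 < tau2, E <= Ela, Ela <= Efi, Efi <= A]
print(disjunct_respects_U(example))          # True: this disjunct implies A >= E
```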
7 Discussion on p-automata
Tools based on timed automata have been successfully used in the recent past for
verifying or correcting real-life protocols (e.g., the Philips Audio Control protocol
[16] and the Bang & Olufsen Audio/Video protocol [13]). Experiences with such
tools are very promising. This observation led us to use here p-automata, a close
variant of timed automata. The differences between p-automata and classical
timed automata are two-fold. A first minor difference is that p-automata use a
form of "updatable" time variables instead of traditional clocks (but see [9] for
a proof of equivalence between the two classes). Second p-automata incorporate
parameters, which are essential in our case study. The choice of Hytech, as an
associated tool, is natural in this context. We now explain some new features
that appear in the proof of the protocol using p-automata, and some difficulties
encountered in the process of building the specification.
7.1 Towards p-automata
We first point out some significant differences between the proof by invariants as
stated by Monin and Klay [19, 18] and the corresponding proof presented here
(see section 6.1).
Representation of time. In the work reported in [19, 18], the formalization of
time aspects, though influenced by timed automata, was performed using ad hoc
devices, appealing to the reader's natural understanding. This problem is settled
here, thanks to p-automata, which include a built-in notion of clock and rely on
a well-understood and widely accepted notion of time. The use of p-automata,
although it introduces an additional level of encoding, thus makes the effort
of specification easier with this respect. Note however that the encoding of p-
automata in Coq does not specify a priori the granularity of time evolution, and
allows for either a continuous or discrete underlying model of time. It is based on
a minimal underlying theory of arithmetic (mainly the transitivity of relations
!; - and the regularity of + over them). This is not the case in the theory of
timed automata and the associated tools like Hytech (or others [6, 11]) where
the time domain is assumed to be continuous (Q or R), and a sophisticated
package for manipulating linear constraints over real arithmetic is used. We will
come back to this difference (see section 8.2).
Higher-order vs. first-order specifications. As recalled in section 3, Monin
and Klay [19] introduced an incremental way of computing the ideal rate Acr(t)
(higher-order algorithm F). We then recast their algorithm F under a parametric
form I. The latter view is probably less natural, but fits better into the first order
framework of p-automata.
Reformulating the proof by invariance. We also rewrote the proof by invariance
of [19] in the context of p-automata, in order to assess the proofs in
a uniform framework. Expressing the auxiliary properties needed in this proof
required to (re)introduce history variable r accounting for the reception time
of the last RM cell. Moreover, these properties only concerned stable locations
of the system: expressions of the form had to be included, and only
complete sequences of actions were considered. Note that the auxiliary property
is different from its counterpart Aux 0
as found in [19]. Actually, Aux 0
8 is false in our model.
We will subsequently explain this discrepancy (see section 8.2).
7.2 Specific modelling problems with p-automata
In the process of constructing p-automata while keeping with the specification,
we had to face some problems, which are listed below.
Modelling the environment as a p-automaton. In addition to the automata
corresponding naturally to I and B 0 , a third automaton was introduced
to model the environment, thus providing a clear separation between external
and internal events.
Introducing urgent locations. The class of update relations in p-automata
(derived from HyTech) does not allow for simultaneous updates. For instance,
choosing (at random) a new identical value of R and E (with an instruction like
E'=R'?0) is forbidden. In order to implement such an update relation, an urgent
intermediate location, such as UpdE depicted in figure 6, had to be introduced.
Idle
asap
update
R
newRM
instead of
Idle
newRM
Fig. 6. Reception and update with two locations instead of one
Introducing two stable locations. In order to implement the higher priority
of operation "reaching tfi" (when occuring simultaneously with other actions of
the system), we were led to create two stable locations (Greater and Less) in the
p-automaton representing algorithm B 0 . Note that we overlooked this priority
requirement in a preliminary implementation embedding only one stable loca-
tion, which entailed a violation of property U . (This was detected subsequently
by running HyTech.)
8 Proof Comparison
We now assess the respective merits and shortcomings of the proof methods by
invariance and reachability analysis within the unified framework of p-automata,
regarding the ABR conformance problem. We also explain how to cross-fertilize
the results of the two methods.
8.1 Automated proof vs. readable proof
It is well-known that a proof in a model-checker is more automatic, but that more
insight in the algorithm is gained by doing the proof with a theorem-prover. Let
us confirm this general opinion in our particular case study.
As already noticed, the reachability proof was done in a fully automatic manner
(via HyTech). This is an outstanding advantage over the proof by inductive
invariance (which required the human discovery of several nontrivial auxiliary
properties) and justifies a posteriori our effort of translating the problem into the
formalism of p-automata. In particular, it becomes easier to validate other ABR
conformance protocols as soon as they are formalized themselves in terms of
p-automata. This is actually what was done recently in the framework of RNRT
project Calife: different variants of B 0 were easily checked (or invalidated) with
HyTech by reachability analysis along the lines described above. It was not possible
to do the same with the inductive invariance approach because several of
the original auxiliary properties became false while others had to be discovered.
Nevertheless, several qualifications must be done about this positive side of
model-checking. Let us first stress that the proof obtained by reachability analy-
sis, merely consists of a long list of constraints (see appendix B) that represents
the whole set of reachable symbolic states. This information is hardly exploitable
by a human: in particular the essential fact that such a list is complete (i.e. "cov-
ers" all the reachable states) is impossible to grasp by hand. In contrast, the
invariance proofs as checked by a theorem prover are more human-oriented. It is
instructive to inspect the case analysis that was automatically performed, and
allows the reader to be convinced of the property accurateness (or a contrario
of some flaws). Besides, the auxiliary properties are very important per se, and
bring important information about algorithm B 0 itself. (Some of these properties
are indeed part of norm ITU I.371.1, and must be henceforth fulfilled by any
new ABR algorithm candidate to normalization.)
We explain now how one can go beyond the limitations of each method, by
using both of them in a fruitful cross-fertilizing way.
8.2 Cross-fertilizing proofs
Checking the output produced by Hytech. A proof produced by Hytech,
i.e. the (finite) list P ost of all the symbolic states reached from the initial one,
can be seen as a fixed-point associated with the set of transitions of the product
automaton. Therefore, one can verify that such a list is "complete" (covers all
the reachable states) by checking that it is invariant through action and delay
moves. This can be done, using the Coq system, exactly as explained in section
6.1. This gives of course an increased confidence in the model-checking proof. In
addition it may give new insight about the conditions on the environment that
were assumed to perform the proof: as noticed in section 7.1, in Coq, we use
a very flexible model for time, assuming only that time increases (but nothing
about its continuity). In fact, the correctness of algorithm B 0 holds even for a
discrete time modelling. Such a feature cannot not be derived from the proof of
Hytech, since it uses a priori an assumption of continuous time evolution.
Checking the invariants used in Coq. In the other way around, one can
check the correctness of the auxiliary properties
simply by asking Hytech if all the reachable states satisfy them (i.e. P ost '
10). The answer is always "yes", which gives us another
proof of the Aux i s. Recall however that Aux 8 differs from its counterpart Aux 0in [19]:
8 is false in the p-automata model
and true in the original model of Monin-Klay (see section 7.1). This discrepancy
originates from the different ways two consecutive RM cells follow each other
in the two models. In our p-automata model, two consecutive RM cells may
arrive simultaneously while this is precluded in the model of Monin-Klay as
reception times of RM cells must form a strictly increasing sequence. The model
presented here is then more general than the original model of Monin-Klay, as it
relaxes some assumption concerning the sequence of RM cells. As a by-product,
this provides us with a better understanding of the conditions under which B 0
behaves correctly.
Finally note that the model of p-automata is flexible enough to incorporate
the assumption of strictly increasing sequences of RM cells, as used in [19]: it
suffices to use explicitly the additional variable r mentioned in section 7.1 (date
of last RM cell reception), and add guard s ? r to the newRM transition in the
environment automaton of figure 5. With such a modification, property Aux 0also becomes true in the model with p-automata.
8.3 Further experiments and foreseen limits
We thus claim that checking properties proved by one tool, using the other one,
is very fruitful. As examplified above, it may reveal possible discrepancies, which
lead in turn to discover implicit modelling assumptions. It may also of course
detect real flaws, which originate from the protocol or its modelling (although it
has not been the case here). In any case this proof confrontation helps the verification
work, and increases the confidence of the human in mechanical proofs.
One can now wonder how general are the remarks we made on this case study,
given the fact that we focused on one problem (the correctness of algorithm
used two specific tools as a theorem-prover (Coq) and a model-checker
(Hytech).
Regarding the tools, we believe that our experience with Coq and Hytech
is not specific, but can be reproduced with equivalent tools as well. We have
concrete indications in this sense. Actually, in the framework of project Calife,
Pierre Cast'eran and Davy Rouillard, from University of Bordeaux, have performed
a proof similar to the proof in Coq, using the model of p-automata
and theorem prover Isabelle [10, 25]. Concerning Hytech, we do not know
any other model-checking tool allowing for parameters but, as mentioned in appendix
C, we did some successful experiments with Gap [12], a tool based on
constraint logic programming, which works as a fixed-point engine very much as
Hytech generates P ost .
Concerning the studied problem, the success of the proof by model-checking
comes from the fact that the computation of the P ost computation with HyTech
had terminated. This can be considered a "lucky" event, since analysis of such
a parametric algorithm is known to be undecidable [5]. This means that computation
of P ost does not always terminate for all p-automata. (This observation
leads us to propose, in Appendix C, an "approximate" version of B 0 , belonging
to a subclass for which P ost is guaranteed to terminate.) Is such a termination
property preserved when considering ABR conformance algorithms other
than The answer is ambivalent. On the one hand, as already mentioned,
our model-checking experience on B 0 was successfully reused on other (relatively
close) algorithms of ABR conformance in the framework of project Calife. On
the other hand, we failed to mechanically check an algorithm of ABR conformance
of a different kind: the generic algorithm of Rabadan-Klay (see e.g. [26]).
This algorithm involves an unbounded list of N scheduled dates (instead of 2,
as in B 0 ), and cannot be modeled with p-automata due to the use of a list data
type. Even for the restricted version where N is bound to a small value, e.g:
3, in which case we get a natural model with p-automata, Hytech runs out of
memory and fails to generate P ost .
The latter experiment recalls us some inherent limits of model-checking: if
the algorithm uses not only a finite set of numeric variables, but also unbounded
data structures (such as lists), then the verification process has to rely essentially
on classical methods of theorem-proving; this is also true when the program can
be modeled as a p-automaton, but the space of reachable symbolic states is too
big to be computed by existing model-checkers.
9 Conclusion
As a recapitulation, we believe that many useful informations about real-time
programs can be obtained without resorting to new integrated tools, when it
is possible to make a joint use of well-established theorem prover and model
checker. In our case, we gained much insight about the algorithm B 0 , and important
confidence in the proofs of correctness produced by Coq and Hytech,
basically by using the unified framework of p-automata, and cross-fertilizing the
two proofs. In particular we saw that algorithm B 0 is robust in the sense that
several underlying assumptions can be relaxed: the nature of time can be discrete
(instead of continuous); the (measured) time interval between two received RM
cells can be null. Moreover, the basic p-automata model underlying B 0 was successfully
reused for proving the correctness of some variants. To our knowledge,
it is the first time that such a comparative study between theorem proving and
model checking has been performed on the same industrial problem. We hope
that this work paves the way for further experiments on real-life examples. In
the framework of project Calife, we are currently developing a two-step methodology
for verifying the quality of new services provided by telecommunication
networks, which exploits the synergy between the two proof methods: the first
step, based on model-checking, yields a p-automaton model endowed with a collection
of invariants it satisfies; in the second step, the p-automaton is recast
in an algorithmic form better suited to the end-user, and verification is done
via a generic proof assistant, with the help of the invariants.
--R
"The existence of refinement mappings"
"The Algorithmic Analysis of Hybrid Systems"
"Hybrid Automata: An Algorithmic Approach to the Specification and Verification of Hybrid Systems"
"Automata for Modeling Real-Time Systems"
"Parametric real-time reasoning"
"UPPAAL - a Tool Suite for Automatic Verification of Real-Time Systems"
"Are Timed Automata Updat- able?"
"The Tool KRONOS"
"A Closed-Form Evaluation for Extended Timed Automata"
"A User Guide to HYTECH"
"Traffic control and congestion control in B- ISDN"
"Model-Checking for Real-Time Systems"
"Beyond model checking"
"Proving a real time algorithm for ATM in Coq"
Correctness Proof of the Standardized Algorithm for ABR Conformance.
"An Approach to the Description and Analysis of Hybrid Systems"
"A Platform for Combining Deductive with Algorithmic Verification"
L'ABR et sa conformité.
"A Closed-Form Evaluation for Datalog Queries with Integer (Gap)- Order Constraints"
"An Integration of Model Checking with Automated Proof Checking"
"Mechanical Verification of a Generic Incremental ABR Conformance Algorithm"
"An Introduction to Assertional Reasoning for Concurrent Sys- tems."
--TR
The existence of refinement mappings
An introduction to assertional reasoning for concurrent systems
A closed-form evaluation for Datalog queries with integer (gap)-order constraints
Parametric real-time reasoning
The algorithmic analysis of hybrid systems
The tool KRONOS
UPPAAL - a tool suite for automatic verification of real-time systems
Automata For Modeling Real-Time Systems
A User Guide to HyTech
Proving a Real Time Algorithm for ATM in Coq
Hybrid Automata
Correctness Proof of the Standardized Algorithm for ABR Conformance
Automated Verification of a Parametric Real-Time Program
Mechanical Verification of an Ideal Incremental ABR Conformance
Are Timed Automata Updatable?
Beyond Model Checking
An Integration of Model Checking with Automated Proof Checking
A Platform for Combining Deductive with Algorithmic Verification
Model-Checking for Real-Time Systems
An Approach to the Description and Analysis of Hybrid Systems
Formal modeling and analysis of an audio/video protocol
--CTR
Patricia Bouyer , Catherine Dufourd , Emmanuel Fleury , Antoine Petit, Updatable timed automata, Theoretical Computer Science, v.321 n.2-3, p.291-345, August 2004 | telecommunication protocols;theorem proving;model checking |
608242 | Controllability of Right-Invariant Systems on Solvable Lie Groups. | We study controllability of right-invariant control systems groups. Necessary and sufficient controllability conditions for Lie groups not coinciding with their derived subgroup are obtained in terms of the root decomposition corresponding to the adjoint operator ad B. As an application, right-invariant systems on metabelian groups and matrix groups, and bilinear systems are considered. | Introduction
Control systems with a Lie group as the state space have been studied in
mathematical control theory since the early 1970s.
R.W. Brockett [1] considered applied problems leading to control systems
on matrix groups and their homogeneous spaces; e.g., a model of DC to DC
conversion and the rigid body control raise control problems on the group of
rotations of the three-space SO(3) and on the group SO(3) \Theta R 3 respectively.
The natural framework for such problems is given by matrix control systems of the form
$\dot{x}(t) = \bigl(A + \sum_{i=1}^{m} u_i(t) B_i\bigr)\, x(t)$, (1)
where $x(t)$ and $A, B_1, \dots, B_m$ are $n \times n$ matrices. The basic rank controllability
test for homogeneous systems was established there:
such systems are controllable iff the Lie algebra generated by the matrices
$B_1, \dots, B_m$ has full dimension. This test was specified for the group of
matrices with positive determinant GL+ (n; R), the group of matrices with
1991 Mathematics Subject Classification. 93B05, 17B20.
Key words and phrases. Controllability, right-invariant systems, bilinear systems, Lie
groups.
This work was partially supported by the Russian Foundation for Fundamental Research,
Projects No. 96-01-00805 and No. 97-1-1a/22.
The author is a recipient of the Russian State Scientific Stipend for 1997.
determinant one SL(n; R), the group of symplectic matrices Sp(n), and the
group of orthogonal matrices with determinant one SO(n). Some controllability
conditions for nonhomogeneous matrix systems were also obtained.
The first systematic mathematical study of control systems on Lie groups
was carried out by V. Jurdjevic and H. J. Sussmann [2]. They noticed that the
passage from the matrix system (1) to the more general right-invariant
system
$\dot{x} = A(x) + \sum_{i=1}^{m} u_i(t) B_i(x)$, $x(t) \in G$, $u_i(t) \in \mathbb{R}$, (2)
where $A, B_1, \dots, B_m$ are right-invariant vector fields on a Lie group G, "in
no essential way affects the nature of the problem." The basic properties
of the attainable set (the semi-group property, path-connectedness, relation
with the associated Lie subalgebras determined by the vector fields $A, B_1, \dots, B_m$) were
established. The rank controllability test was proved for
system (2) in the homogeneous case and in the case of a compact group G.
Sufficient controllability conditions for other cases were also given.
V. Jurdjevic and I. Kupka [4] introduced a systematic tool for studying
controllability on Lie groups. For the control system (2) presented in the
form of the polysystem
$\Gamma = \{A + \sum_{i=1}^{m} u_i B_i \mid u_i \in \mathbb{R}\} \subseteq L$ (3)
(where L is the Lie algebra of the group G) they considered its Lie saturation
LS(\Gamma) - the largest system equivalent to \Gamma. Controllability of the system \Gamma
on G is equivalent to the equality $\mathrm{LS}(\Gamma) = L$, and a general technique for verification of
this equality was proposed. (This technique is outlined in Subsec. 4.2 and
used in Subsecs. 4.3, 4.4 below.) In [4] sufficient controllability conditions
for the single-input systems $\Gamma = \{A + uB \mid u \in \mathbb{R}\}$ were obtained for simple and
semi-simple groups G with the use of this technique. They were given
in terms of the root decomposition of the algebra L corresponding to the
adjoint operator ad B.
In their preceding paper V. Jurdjevic and I. Kupka [3] presented the
enlargement technique for systems on matrix groups G ae GL(n; R) and
obtained sufficient controllability conditions for
GL+ (n; R).
These results for SL(n; R) and GL+ (n; R) were generalized by J.P. Gauthier
and G. Bornard [5].
B. Bonnard, V. Jurdjevic, I. Kupka, and G. Sallet [6] obtained a characterization
of controllability on a Lie group which is a semidirect product
of a vector space and a compact group which acts linearly on the vector
space. The case $G = \mathbb{R}^n \otimes_s \mathrm{SO}(n)$ was applied to the study of Serret-Frenet
moving frames.
The results of [4] for simple and semi-simple Lie groups were generalized
in a series of papers by J.P. Gauthier, I. Kupka, and G. Sallet [7],
R.El Assoudi and J. P. Gauthier [9], [10], F. Silva Leite and P.E. Crouch [8]:
analogous controllability conditions were obtained for classical Lie groups
with the use of the Lie saturation technique and the known structure of real
simple and semi-simple Lie algebras.
In contrast to this "simple" progress, invariant systems on solvable groups
seem not to be studied in the geometric control theory at all until 1993.
Then a complete solution of the controllability problem for simply connected
nilpotent groups G was given by V. Ayala Bravo and L. San Martin [11].
Some results on controllability of (not right-invariant) systems on Lie groups
analogous to linear systems on R n were obtained by V. Ayala Bravo and
J. Tirao [12].
Several results on controllability of right-invariant systems were obtained
within the framework of the Lie semigroups theory [13], [14]: for nilpotent
groups by J. Hilgert, K. H. Hofmann, and J. D. Lawson [15], for reductive
groups by J. Hilgert [16]. For Lie groups G with cocompact radical,
J. D. Lawson [17] proved that controllability of a system \Gamma ae L follows from
nonexistence of a half-space in L bounded by a Lie subalgebra and containing
\Gamma; if G is additionally simply connected, this condition is also necessary
for controllability. This result generalizes controllability conditions for compact
groups [2], nilpotent groups [15], and for semidirect products of vector
groups and compact groups [6].
In [18] the author characterized controllability of hypersurface right-invariant
systems, i.e., of systems \Gamma of the form (3) with the codimension one Lie
subalgebra generated by the vector fields $B_1, \dots, B_m$. This gave a necessary
controllability condition for simply connected groups - the hypersurface
principle; see its formulation for single-input systems \Gamma in Proposition 2
below. In its turn, the hypersurface principle was applied to obtain
a controllability test for simply connected solvable Lie groups G
with Lie algebra L satisfying the additional condition: for all $X \in L$ the
adjoint operator ad X has real spectrum.
The aim of this paper is to give convenient controllability conditions of
single-input systems \Gamma for a wide class of Lie groups including solvable ones;
more precisely, for Lie groups not coinciding with their derived subgroups.
The structure of this paper is as follows.
We state the problem and introduce the notation in Sec. 2.
In Sec. 3 we give the necessary controllability condition for simply connected
groups G not coinciding with their derived subgroup G (1) (Theorem 1
and Corollary 1). These propositions are proved in Subsec. 3.3 after the
preparatory work in Subsec. 3.2. The main tools are the rank controllability
condition (Proposition 1) and the hypersurface principle (Proposition 2).
Sec. 4 is devoted to sufficient controllability conditions for the groups
G 6= G (1) . We present the main sufficient results in Subsec. 4.1. Then
we recall the Lie saturation technique in Subsec. 4.2 and prove preliminary
lemmas in Subsec. 4.3. The main results (Theorem 2 and Corollaries 2,
are proved in Subsec. 4.4.
In Sec. 5 we consider several applications of our results. Controllability
conditions for metabelian groups are obtained in Subsec. 5.1. Then controllability
conditions for some subgroup of the group of motions of the Euclidean
space are studied in detail (Subsec. 5.2) and are applied to bilinear
systems (Subsec. 5.3). Finally, the clear small-dimensional version of this
theory for the group of motions of the two-dimensional plane is presented
in Subsec. 5.4.
A preliminary version of the below results was stated in [19].
2. Problem statement and definitions
Let G be a connected Lie group, L its Lie algebra (i.e., the Lie algebra
of right-invariant vector fields on G), and A, B any elements of L. The
single-input affine right-invariant control system on G is a subset of L of
the form
$\Gamma = \{A + uB \mid u \in \mathbb{R}\}$.
The attainable set A of the system \Gamma is the subsemigroup of G generated
by the set of the one-parameter semigroups
$\{\exp(tX) \mid t \ge 0,\ X \in \Gamma\}$.
The system \Gamma is called controllable if A = G.
To see the relation of these notions with the standard system-theoretical
ones, let us write the right-invariant vector fields A and B as A(x) and
B(x), $x \in G$. Then the system \Gamma can be written in the customary form
$\dot{x} = A(x) + u\,B(x)$, $u \in \mathbb{R}$, $x \in G$.
The attainable set A is then the set of points of the state space G reachable
from the identity element of the group G for any nonnegative time. The
system \Gamma is controllable iff any point of G can be reached along trajectories
of this system from the identity element of the group G. By right-invariance
of the fields A(x), B(x), the identity element in the previous sentence can
be replaced by an arbitrary one.
Our aim is to characterize controllability of the system \Gamma in terms of the
Lie group G and the right-invariant vector fields A and B.
Now we introduce the notation we will use in the sequel.
For any subset l ae L we denote by Lie (l) the Lie subalgebra of L generated
by l. Closure of a set M is denoted by cl M . The signs \Phi and
direct sums of vector spaces; \Phi s
and\Omega s stand for semidirect
products of Lie algebras and Lie groups correspondingly.
We denote by Id the identity operator or the identity matrix of appropriate
dimension,
sin fft cos fft
ff r
for $t, \alpha, r \in \mathbb{R}$. The square matrix with all zero entries except one unit in
the ith row and the jth column is denoted by $E_{ij}$.
Now we introduce the notation connected with eigenvalues and eigenspaces
of the adjoint operator ad B in L:
ffl the derived subalgebra and the second derived subalgebra:
ffl the complexifications of L and L (i) , 2:
(i)\Omega C
(the tensor products over R),
ffl the adjoint representations and operators:
ffl spectra of the operators ad Bj L (i) , 2:
\Phi a 2 C j Ker(ad c Bj L (i)
c
ffl real and complex eigenvalues of the operators ad Bj L (i) , 2:
ffl complex eigenspaces of ad c Bj L (1)
c
c
ffl real eigenspaces of ad Bj
ffl complex root subspaces of ad c Bj L (i)
c
2:
c
a
ffl real root subspaces of ad Bj L (i)
2:
c (a)
ffl real components of L (i) , 2:
Note that the subalgebras L (1) and L (2) are ideals of L, so they are (ad B)-
invariant, and the restrictions ad Bj L (1) and ad Bj L (2) are well defined.
In the following lemma we collect several simple statements about decomposition
of the subalgebras L (1) and L (2) into sums of root spaces and
eigenspaces of the adjoint operator ad B.
Lemma 2.1.
(2)
r ,
(3) L (2) (a) ae L (1) (a) for any a 2 Sp (2) ,
r ae L (1)
r ,
Proof. The statements are obtained by standard linear-algebraic arguments. In item (5),
Jacobi's identity is additionally used.
Consider the quotient operator
defined as follows:
Analogously for a 2 Sp (1) we define the quotient operator in the quotient
root space:
and its complexification:
ad
c (a)=L (2)
c (a)=L (2)
c (a);
(ad
c (a) 8X 2 L (1)
c (a):
Definition 1. Let a 2 Sp (1) . We denote by j (a) the geometric multiplicity
of the eigenvalue a of the operator -
ad c B(a) in the vector space
c (a)=L (2)
c (a).
Remarks.
(a) For a 2 Sp (1) the number j (a) is equal to the number of Jordan
blocks of the operator
ad B(a) in the space L (1) (a)=L (2) (a).
(b) If an eigenvalue $a \in \mathrm{Sp}^{(1)}$ is simple, then $j(a) = 1$.
Suppose that $L = \mathbb{R}B \oplus L^{(1)}$ (this assumption will be justified by Theorem 1
below). Then, by Lemma 2.1, any element $X \in L$ can uniquely be decomposed into its
component along B and its components X(a) in the root spaces $L^{(1)}(a)$, $a \in \mathrm{Sp}^{(1)}$.
We will consider such a decomposition for the uncontrolled vector field A of
the system \Gamma, whose component in $L^{(1)}(a)$ is denoted by A(a).
We denote by $\widetilde{A}(a)$ the canonical projection of the vector $A(a) \in L^{(1)}(a)$
onto the quotient space $L^{(1)}(a)/L^{(2)}(a)$.
Definition 2. Let $a \in \mathrm{Sp}^{(1)}$. We say that
a vector A has the zero a-top if
$\widetilde{A}(a) \in \bigl(\mathrm{ad}\,B(a) - a\,\mathrm{Id}\bigr)\bigl(L^{(1)}(a)/L^{(2)}(a)\bigr)$.
In the opposite case we say that A has a nonzero a-top. We use the corresponding
notations: $\mathrm{top}\,(A, a) = 0$ and $\mathrm{top}\,(A, a) \ne 0$.
Remark . Geometrically, if a vector A has a nonzero a-top, then the vector
A(a) has a nonzero component corresponding to the highest adjoined vector
in the (single) Jordan chain of the operator
ad B(a). Due to nonuniqueness
of the Jordan base, this component is nonuniquely determined, but its property
to be zero is basis-independent.
Definition 3. A pair of complex numbers (ff; fi), Re ff - Re fi, is called
an N-pair of eigenvalues of the operator ad B if the following conditions
hold:
(2) L (2) (ff) 6ae
Re a; Re
(3) L (2) (fi) 6ae
Re a; Re
\Psi .
Remarks.
(a) In other words, to generate the both root spaces L (2) (ff) and L (2) (fi)
for an N-pair (ff; fi), we need at least one root space L (1) (fl) with
Re fi]. The name is explained by the fact that N-pairs can
NOT be overcome by the extension process described in Lemma 4.2:
they are the strongest obstacle to controllability under the necessary
conditions of Theorem 1.
(b) The property of absence of the real N-pairs will be used to formulate
sufficient controllability conditions in Theorem 2. In some generic
cases this property can be verified by Lemma 4.3.
3. Necessary controllability conditions
x 3.1. Main theorem and known results. It turns out that controllability
on simply connected Lie groups G with G 6= G (1) is a very strong
property: it imposes many restrictions both on the group G and on the
system \Gamma.
Theorem 1. Let a Lie group G be simply connected and its Lie algebra
L satisfy the condition L 6= L (1) . If a system \Gamma is controllable, then:
(1) dimL
(3) L (2)
r ,
r ,
r ae
The notations j (a) and top (A; a) used in Theorem 1 are explained in
Definitions 1 and 2 in Sec. 2.
Remarks.
(a) The first condition is a characterization of the state space G but
not of the system \Gamma. It means that no single-input system
can be controllable on a simply connected Lie group G
with $\dim G^{(1)} < \dim G - 1$. That is, to control on such a group,
one has to increase the number of inputs. There is a general lower
bound on the number of controlled
vector fields necessary for controllability of the multi-input
system (3) on a simply connected group G [18].
(b) Conditions (3)-(7) are nontrivial only for Lie algebras L with L (2) 6=
L (1) (in particular, for solvable noncommutative L). If L
then these conditions are obviously satisfied.
(c) The third condition means that j
r , that is
why condition (6) is nontrivial only for a 2
c .
(d) By the same reason, in condition 7 the inclusion a 2 Sp (1) can be
changed by a 2
c . Note that if j (a) = 0, then by the formal
Definition 2 the vector A has the zero a-top.
(e) The fourth and fifth conditions are implied by the third one but are
easier to verify. The simple (and strong) "arithmetic" necessary
controllability condition (5) can be verified by a single glance at
spectrum of the operator ad Bj L (1) .
(f) For solvable L under conditions (1), (2) the spectrum
is the same for all
homotheties. Then conditions
(3)-(5) depend on L but not on B.
(g) For the case of simple spectrum of the operator ad Bj L (1)
the necessary
controllability conditions take respectively the more simple
Corollary 1. Let a Lie group G be simply connected and its Lie algebra
L satisfy the condition L 6= L (1) . Suppose that the spectrum Sp (1) is simple.
If a system \Gamma is controllable, then:
(1) dimL
r ,
r ae
Theorem 1 and Corollary 1 will be proved in Subsec. 3.3.
Remark . Now we discuss the condition L (1) 6= L essential for this work
and motivated by its initial focus - solvable Lie algebras L. Consider a
Levi decomposition
It is well known (see, e.g., [20], Theorem 3.14.1) that the Levi decomposition
of the derived subalgebra is then
This means that
If a Lie algebra L is semisimple (i.e., rad obviously L
The converse is generally not true (although this is asserted by [21], Sec. 87,
Corollary 3). For example, for the Lie algebra R 3 \Phi s so(3) (which is the Lie
algebra of the Lie group of motions of the three-space) its derived subalgebra
coincides with the algebra itself. (This example was kindly indicated to the
author by A. A. Agrachev).
The main tools to obtain the necessary controllability conditions given
in Theorem 1 is the rank controllability condition and the hypersurface
principle.
The system \Gamma is said to satisfy the rank controllability condition if the
Lie algebra generated by \Gamma coincides with L:
Proposition 1. (Theorem 7.1, [2]). The rank controllability condition
is necessary for controllability of a system \Gamma on a group G.
Generally, the attainable set A lies (and has a nonempty interior, which
is dense in A) in the connected subgroup of G corresponding to the Lie
algebra Lie (A; B).
The hypersurface principle is formulated for the system \Gamma as follows:
Proposition 2. (Corollary 3.2, [18]). Let a Lie group G be simply con-
nected, L, and let the Lie algebra L have a codimension one subalgebra
containing B. Then the system RB is not controllable on
G.
The sense of this proposition is that under the hypotheses stated there
exists a codimension one subgroup of the group G which separates G into
two disjoint parts, is tangent to the field B, and is intersected by the field
A in one direction only. Then the attainable set A lies "to one side" of this
subgroup.
Notice that the property of absence of a codimension one subalgebra of L
containing B is sufficient for controllability of \Gamma on a Lie group G with
cocompact radical; if G is additionally simply connected, this condition is
also necessary (Corollary 12.6, [17]).
x 3.2. Preliminary lemmas. First we obtain several conditions sufficient
for existence of codimension one subalgebras of a Lie algebra L containing
a vector B 2 L.
Lemma 3.1. Suppose that L (1) +RB 6= L. Then there exists a codimension
one subalgebra of L containing B.
Proof. Denote by l the vector space L (1) +RB. We have [l; l] ae L (1) ae l, that
is why l is a subalgebra; any vector space containing l is a subalgebra of L
too. Since l 6= L, there exists a codimension one subspace l 1 of L containing
l. Then l 1 is the required codimension one subalgebra of L containing B.
Lemma 3.2. Let L (1) \Phi
r 6= L (1)
r , then there exists a
codimension one subalgebra of L containing B.
Proof. If L (1)
r 6= L (2)
r , then there exists a real eigenvalue a 0 2
r such
that L (1) (a 0
be a Jordan base of the operator
ad B(a 0
(We suppose, for simplicity, that the eigenvalue a 0 of the operator -
ad
is geometrically simple, i.e., matrix of this operator is a single Jordan block;
for the general case of several Jordan blocks the changes of the proof are
obvious.)
Consider the vector space
It follows from (4), (5) that the space l 1 is (ad B)-invariant. Additionally,
we have dim l
Then we define the vector spaces
f L (1) (a) j a 2
First, dim l that is why
Third, the space l 2 is (ad B)-invariant. That is why, by virtue of (6) and
(8), we obtain the chain
Hence, l 3 is the required subalgebra of L: it has codimension one (see (7))
and contains the vector B (see (6)).
In the following three lemmas we obtain conditions sufficient for violation
of the rank controllability condition, i.e., necessary for controllability.
Lemma 3.3. Suppose that
2 L (1) and let there exist a vector subspace
l 1 ae L such that the following relations hold
(1) L (2) ae l 1 ae L (1) ,
Then Lie
Proof. By condition (1),
i.e., l 1 is a Lie subalgebra.
Consider the vector space l = RB \Phi l 1 . We have (in view of condition
so l is a Lie subalgebra too.
By condition (3) we have Lie (A; B) ae l, and condition (2) implies l 6= L.
Hence, Lie (A; B) 6= L.
Lemma 3.4. Let
Proof. Consider the case of the complex a 0 2
c first. By the condition
the quotient operator
ad c B(a 0 ) has at least two cyclic
spaces V; W ae L (1)
c (a 0 )=L (2)
c (a 0 ). That is, there are two Jordan chains
and in these bases matrices of the operators -
ad c B(a 0 )j V and -
ad c B(a 0 )j W
are the Jordan blocksB
a
. a 0 0
a
. a 0 0
(Obviously, we can assume that the complex conjugate bases fv
and Jordan chains of the operator -
ad c B(a 0 ) in the complex
conjugate spaces V ; W ae L (1)
c (a 0 )=L (2)
c (a 0 ).)
Notice that
ad c B(a 0
ad c B(a 0
For the direct sum
c (a 0 )=L (2)
(other cyclic spaces of -
ad c B(a 0
consider the decomposition
(components in other cyclic spaces of -
ad c B(a 0 )):
We can assume that
This will be proved at the end of this proof. Now suppose that condition (14)
holds and, for definiteness, That is why
(other cyclic spaces of -
ad c B(a 0 ));
c (a 0 )=L (2)
and, in view of (10), (11),
ad c B(a 0
Now let l ae L (1)
c (a 0 ) be the canonical preimage of the space ~ l. Obviously,
(ad c B) l ae l;
c (a 0
Then we pass to realification:
Finally, for the space l 1 := l r \Phi
we obtain
(ad
all conditions of Lemma 3.3 are satisfied, and Lie (A; B) 6= L modulo the
unproved condition (14).
To prove this condition, suppose that A v1 6= 0 and Aw1 6= 0 in decomposition
(13). In view of symmetry between V and W , we can assume that
. Define the new basis in
A v1
It is easy to see that f~v is a basis of V , and
for the new basis f~v and the old basis fw g.
Now we show that the new basis is a Jordan one.
ad c B(a 0 )
ad c B(a 0 )
A v1
ad c B(a 0 )
A v1
(a
A v1
A v1
ad c B(a 0 )
ad c B(a 0 )
A v1
ad c B(a 0 )
A v1
a
A v1
ad c B(a 0 )
ad c B(a 0 )
A v1
ad c B(a 0 )
A v1
a
A v1
And if q
ad c B(a 0 )
ad c B(a 0 )
Finally, if
ad c B(a 0 )
ad c B(a 0 )
is a Jordan basis for -
ad c B(a 0 )j V , and
(13), as was claimed.
So the lemma is proved for the case of the complex eigenvalue a 0 .
And if a 0 is real, then the proof is analogous and easier: there is no need
in complexification and further realification.
Lemma 3.5. Let
Proof. The Jordan base of the operator -
ad c B(a 0 ) consists of one Jordan
chain
c (a 0 )=L (2)
and moreover, in the decomposition
we have A
ad c B(a) \Gamma a 0 Id)(L (1)
c (a 0 )=L (2)
Then in the same way as in Lemma 3.4 we denote by l the preimage of the
space under the projection L (1)
c (a 0 )=L (2)
and by l the complex conjugate to l in L c . Then the space
satisfies all hypotheses of Lemma 3.3, that is why Lie (A; B) 6= L.
x 3.3. Proofs of the necessary controllability conditions.
Proof of Theorem 1. Suppose that the system \Gamma is controllable on the group
G.
Items (1) and (2). If either of them fails, then $L^{(1)} + \mathbb{R}B \ne L$.
It follows from Lemma 3.1 and the hypersurface principle (Proposition 2)
that \Gamma is not controllable. This contradiction proves items (1) and (2), and
allows us to assume below in the proof that $L^{(1)} \oplus \mathbb{R}B = L$.
(3). If L (2)
r 6= L (1)
r , then it follows from Lemma 3.2 and the hyper-surface
principle that \Gamma is not controllable.
immediately from item (3).
(5). From the previous item we have
r . But
consequently,
r ae
Items (6), (7) follow from Lemmas 3.4, 3.5 and from the rank controllability
condition (Proposition 1).
Proof of Corollary 1. If the spectrum Sp (1) is simple, then L (1)
for all a 2
r or Sp (1)
c respectively.
Further, L (2)
r is equivalent to Sp (2)
r , and top (A; a) 6= 0 iff
A(a) 6= 0, a 2 Sp (1) . Now Corollary 1 follows immediately from Theorem
1.
4. Sufficient controllability conditions
x 4.1. Main results. Under the necessary assumptions of Theorem 1, we
can give wide sufficient controllability conditions. Notice that the assumption
of simple connectedness can now be removed. So the below sufficient
conditions are completely algebraic; this is in contrast with the geometric
assumption (the finiteness of center of G) essential for the sufficient controllability
conditions for simple and semi-simple Lie groups G [4].
Theorem 2. Suppose that the following conditions are satisfied for a Lie
algebra L and a system \Gamma:
(1) dimL
(3) L (2)
r ,
c ,
c ,
(6) the operator adBj L (1) has no N-pairs of real eigenvalues.
Then the system \Gamma is controllable on any Lie group G with the Lie algebra
L.
The notation top (A; a) and the notion of N-pair used in Theorem 2 are
explained in Definitions 2 and 3 in Sec. 2.
Remarks. (a) Conditions (1)-(3) are necessary for controllability in the
case of a simply connected G 6= G (1) (by Theorem 1).
(b) Conditions (4) and (5) are close to the necessary conditions (6) and
(7) of Theorem 1 respectively. Notice that the fourth condition means that
all complex eigenvalues of ad Bj L (1)
are geometrically simple.
(c) Conditions (2) and (5) are open, i.e., they are preserved under small
perturbations of A and B.
(d) The most restrictive of conditions (1)-(6) is the last one. It can be
shown that the smallest dimension of L (1) in which this condition is satisfied
and preserved under small perturbations of spectrum of ad Bj L (1)
for
solvable L is (6). This can be used to obtain a classification of controllable
systems \Gamma on solvable Lie groups G with small-dimensional derived
subgroups G (1) .
(e) The technically complicated condition (6) can be changed by more
simple and more restrictive one, and sufficient conditions can be given as in
Corollary 2 below.
(f) Under the additional assumption of simplicity of the spectrum
the sufficient controllability conditions take the even more simple form presented
in Corollary 3 below.
Corollary 2. Suppose that the following conditions are satisfied for a
Lie algebra L and a system \Gamma:
(1) dimL
(3) L (2)
r ,
c ,
c ,
0g.
Then the system \Gamma is controllable on any Lie group G with the Lie algebra
L.
Corollary 3. Suppose that the following conditions are satisfied for a
Lie algebra L and a system \Gamma:
(1) dimL
(3) the spectrum Sp (1) is simple,
r ,
c ,
0g.
Then the system \Gamma is controllable on any Lie group G with the Lie algebra
L.
Theorem 2 and Corollaries 2, 3 will be proved in Subsec. 4.4.
x 4.2. Lie saturation. To prove the above sufficient conditions we use
the notion of the Lie saturation of a right-invariant system introduced by
V. Jurdjevic and I. Kupka. Now we recall the basic definition and properties
necessary for us (see details in [4], pp. 163-165).
Given a right-invariant system $\Gamma \subseteq L$ on a Lie group G, its Lie saturation
$\mathrm{LS}(\Gamma) \subseteq L$ is defined as follows:
$\mathrm{LS}(\Gamma) = \{X \in L \mid \exp(tX) \in \mathrm{cl}\,\mathcal{A} \ \ \forall t \in \mathbb{R}_+\}$.
LS(\Gamma) is the largest (with respect to inclusion) system having the same
closure of the attainable set as \Gamma.
Properties of the Lie saturation:
(2) LS(\Gamma) is a convex closed cone in L,
controllability condition: if $\mathrm{LS}(\Gamma) = L$, then the system \Gamma is controllable on G. (15)
x 4.3. Preliminary lemmas. In this section we assume that L 6= L (1)
(this condition holds, e.g., for solvable L). In view of Theorem 1, we suppose
additionally that $\dim L^{(1)} = \dim L - 1$ and $B \notin L^{(1)}$, so that $L = \mathbb{R}B \oplus L^{(1)}$.
First we present a necessary technical lemma.
Lemma 4.1. Let
lim
Z
(j=2)
lim
Z
Proof. Obtained by direct computation.
Now we prove the proposition that plays the central role in obtaining our
sufficient controllability conditions (Theorem 2). It is analogous to item (a)
of Proposition 11, [4].
Lemma 4.2. Let C 2 LS(\Gamma) " L (1) . Suppose that for any a 2
c the
following conditions hold :
(2) top (C; a) 6= 0 or L (1) (a) ae LS(\Gamma).
Suppose additionally that for the number
(or
we have
LS(\Gamma) oe
Proof. For simplicity suppose that
where
there are more
than two pairs of complex conjugate eigenvalues at the line f Re
the proof is analogous; if there are less than two pairs, then the proof is
obviously simplified.
So we have
(notice that if
respectively,
For any element D 2 L, nonnegative function g(t), and natural number
consider the limit
Z
It follows from the properties of the cone LS(\Gamma) (see Subsec. 4.2) that if D 2
LS(\Gamma) and the limit I(D;
Z
if the limit exists.
Introduce the notation
C a
Notice that
Z
For any bounded nonnegative function g(t) and any p 2 N we have
is equal to the
size of the maximal Jordan block of the operator ad c Bj L (1)
c
corresponding
to the eigenvalues c 2 Sp (1) with Re c ! r. That is why
lim
Z
Consequently,
Z
if the limit exists.
Now we choose the bases f x in
the spaces L (1) (a) and L (1) (b) in which matrices of the operators adBj L (1) (a)
and adBj L (1) (b) are the Jordan blocksB
. M r;fi 0
where
fi. In the bases f x and
we have
(the prime denotes transposition
of vectors and matrices). Then in the base
of the space L (1) (a) \Phi L (1) (b) we have
C a (t)
e
oe ff (t)C X1
oe ff (t)(tC X1
oe ff (t)
oe fi (t)C Z1
oe fi (t)(tC Z1 +CZ2 )
oe fi (t)
l, then the below argument can easily be modified).
(A) Now we show that span(x k ; y k
According to the hypotheses of this lemma, we have L (1) (a) ae LS(\Gamma)
or top (C; a) 6= 0. If L (1) (a) ae LS(\Gamma), then span(x That
is why we suppose below that top (C; a) 6= 0, which means in the base
that CX1 6= 0.
Taking
into account (21), (22), and Lemma 4.1, we obtain
Z
where
By virtue of the fact that the convex conic hull of vectors (23) for the
matrices M of the form (24) and (25), jjj - 1, is the plane span(x k ; y k ), we
We take v(t) 2 LS(\Gamma) equal to
i.e., to the component of vector (22) in the plane span(x k ; y k ), and repeat
the limit passage described in (A) replacing I(C; g;
and obtain span(x
We repeat process (B) with I(C; g; p; v), where v(t) is the component
of vector (22) in the plane span(x is decreasing from
obtain the inclusion span(x l+1 ; y
(D) We apply process (C) with l and using the functions g(t) of
the
We decrease p and repeat procedure (D) until
In view of (16), the proof of the lemma is completed.
We can give several sufficient conditions for an element B not to have
real N-pairs of eigenvalues. These conditions can be verified simply by the
picture of spectrum of the operator ad Bj L (1)
in the complex plane. We need
them to obtain Corollary 2.
Lemma 4.3. Suppose that
r . Then any one of
the following conditions is sufficient for the operator ad Bj L (1) not to have
real N-pairs of eigenvalues:
(1)
(2)
(3) 0g.
Proof. The first case (Sp (1)
obvious as there are no real eigenvalues
at all.
Case 2. Let (ff; fi) be a real N -pair, 0 ! ff - fi. We have
Jacobi's identity implies that [L (1) (a); L (1) (b)] ae L (1) (a + b), and the spaces
direct sum, that is why
L (2) (ff) ae
From the conditions a
ff. That is why (26) gives
L (2) (ff) ae
This contradicts item (2) of Definition 3.
Case 3 is considered analogously.
x 4.4. Proofs of the sufficient controllability conditions.
Proof of Theorem 2. We show that LS(\Gamma) oe L (1) .
Introduce the following numbers and sets:
Suppose that LS(\Gamma) 6oe L (1) , then
Recall that we have the following decomposition of the vector A corresponding
to root subspaces of the operator adBj L (1) :
Define the element
Notice that A 1 2 LS(\Gamma) since all terms in the right-hand side belong to
LS(\Gamma). In addition, we have A 1 2 L (1) . Consider the decomposition
For any a 2 Sp (1) we have
Re a 2 [n; m]
Re
According to condition 6 of this theorem, the pair of real numbers (n; m)
is not an N -pair. That is why at least one of conditions (1)-(3) of Definition
3 is violated. Now we consider these cases separately and come to a
contradiction.
(1) Let condition (1) of Definition 3 be violated, i.e., n 62 Sp (1) or m 62
. Suppose, for definiteness, that m 62
Apply Lemma 4.2 with m. Then we have
LS(\Gamma) oe
f L (1) (a) j a 2
f L (1) (a) j a 2
which is a contradiction to (28).
If n 62 we come to a contradiction with (27) analogously. That is
why case (1) is impossible.
(2) Let now n; m 2 Sp (1) and let condition (2) of Definition 3 be violated,
i.e.,
But for Re -;
we have L (1) (-); L (1) (-) ae LS(\Gamma) (by definitions
(27) and (28)), consequently, L (2) (n) ae LS(\Gamma). According to hypotheses of
this theorem, L (1) that is why
Consider the vector A
Now we apply Lemma 4.2 with
LS(\Gamma) oe
f L (1) (a) j a 2
Then, by virtue of (29), we have
LS(\Gamma) oe
f L (1) (a) j a 2
which is a contradiction with (27). That is why case (2) is impossible, and
condition (2) of Definition 3 cannot be violated.
(3) We prove analogously that condition (3) of Definition 3 cannot be
violated as well.
Hence, all three conditions of Definition 3 hold, and (n; m) is a real N-pair
of eigenvalues. This is a contradiction with condition (6) of this theorem.
That is why LS(\Gamma) oe L (1) . But
controllable by the controllability condition (15).
Proof of Corollary 2 follows immediately from Theorem 2 and Lemma 4.3.
Proof of Corollary 3 is obvious in view of Corollary 2.
5. Examples and applications
x 5.1. Metabelian groups. Solvable Lie algebras L having the derived
series of length 2:
are called metabelian. A Lie group with a metabelian Lie algebra is also
called metabelian.
Our previous results make it possible to obtain controllability conditions
for metabelian Lie groups.
Theorem 3. Let G be a metabelian Lie group. Then the following conditions
are sufficient for controllability of a system \Gamma on G:
(1) dimL
(3)
c ,
c .
If the group G is simply connected, then conditions (1)-(5) are also necessary
for controllability of the system \Gamma on G.
The notation top (A; a) used in Theorem 3 is explained in Definition 2 of
Sec. 2.
Proof. The sufficiency follows from Corollary 2.
In order to prove the necessity for the simply connected G suppose that
\Gamma is controllable.
Conditions (1) and (2) follow then from items (1) and (2) of Theorem 1.
Condition (3) follows from item (3) of Theorem 1 and from the metabelian
property of G:
Condition (4). For any a 2
c we have L (2) f0g, that is why
j (a) is equal to geometric multiplicity of the eigenvalue a of the operator
ad Bj L (1) (a) , i.e., to dimL c (a). By item (6) of Theorem 1, we have
that is why dimL c (a) = 1.
Condition (5). For any a 2
c we have j (a) = 1, then, by item (7) of
Theorem 1, we obtain top (A; a) 6= 0.
Example. Let l be a finite-dimensional real Lie algebra acting linearly in
a finite-dimensional real vector space V. Consider their semidirect product
$L = V \oplus_s l$. It is a subalgebra of the Lie algebra of affine transformations
of the space V since $L \subseteq V \oplus_s \mathrm{gl}(V)$. If l is Abelian, then L is metabelian: indeed, $L^{(1)} \subseteq V$ is Abelian, and hence $L^{(2)} = \{0\}$.
In the following subsection we study in detail a particular case when l is
one-dimensional.
x 5.2. Matrix group. Now we apply the controllability conditions from
the previous subsection to some particular metabelian matrix group. To
begin with we describe this group.
Let V be a real finite-dimensional vector space, $\dim V = n$, and let M be a
linear operator in V. The required metabelian Lie algebra is the semidirect
product
$L(M) = V \oplus_s \mathbb{R}M$
(compare with the example at the end of the previous subsection).
Now we choose and fix a base in V, and denote the matrix of the operator
M in this base by the same letter M. Then L(M) can be represented as
a subalgebra of $\mathrm{gl}(n+1, \mathbb{R})$ generated by matrices x (corresponding to M) and $y_1, \dots, y_n$ (spanning V).
(Recall that $E_{ij}$ is the square matrix with the only unit entry in the ith row
and the jth column.) Obviously, we have $L(M) = \mathrm{span}(x, y_1, \dots, y_n)$.
Notice also that $[y_i, y_j] = 0$ and that M is the matrix of
the adjoint operator $\mathrm{ad}\,x|_{L^{(1)}}$ in the base $\{y_1, \dots, y_n\}$. In the sequel we
consider the Lie algebra L(M) in this matrix representation.
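A small numerical sketch of this representation in Python, under the assumption that x is the block matrix [[M, 0], [0, 0]] and that $y_i = E_{i,n+1}$ (an illustration consistent with the surrounding text, not necessarily the exact matrices intended above):

    import numpy as np

    def lie_algebra_LM(M):
        # assumed generators of L(M) inside gl(n+1, R):
        # x = [[M, 0], [0, 0]],  y_i = E_{i, n+1}, i = 1, ..., n
        n = M.shape[0]
        x = np.zeros((n + 1, n + 1)); x[:n, :n] = M
        ys = [np.zeros((n + 1, n + 1)) for _ in range(n)]
        for i in range(n):
            ys[i][i, n] = 1.0
        return x, ys

    def bracket(a, b):
        return a @ b - b @ a

    M = np.array([[0.0, -2.0], [2.0, 0.0]])   # any real n x n matrix
    x, ys = lie_algebra_LM(M)
    n = M.shape[0]

    # [y_i, y_j] = 0: span{y_1, ..., y_n} is Abelian and contains [L, L], so L(M) is metabelian
    assert all(np.allclose(bracket(ys[i], ys[j]), 0) for i in range(n) for j in range(n))

    # [x, y_i] = sum_k M[k, i] * y_k, i.e. the matrix of ad x on {y_1, ..., y_n} is M
    ad_x = np.column_stack([bracket(x, y)[:n, n] for y in ys])
    assert np.allclose(ad_x, M)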
Let G(M) be the connected Lie subgroup of $\mathrm{GL}(n+1, \mathbb{R})$ corresponding
to L(M). The group G(M) can be parametrized by the matrices
$\begin{pmatrix} e^{tM} & v \\ 0 & 1 \end{pmatrix}$, $t \in \mathbb{R}$, $v \in \mathbb{R}^n$.
It is a semidirect product:
$G(M) = \mathbb{R}^n \otimes_s G_1$, where $G_1$ is the one-parameter subgroup generated by x.
The group G(M) is not simply connected iff the one-parameter subgroup G 1
is periodic, which occurs iff the matrix M has purely imaginary commensurable
spectrum. More precisely, we say that a set of numbers (b
R n is commensurable if
(b
And the group G(M ) is not simply connected iff
the set Im(Sp(M )) is commensurable:
oe
Before studying controllability conditions for the group G(M ) we present
an auxiliary proposition, which translates the Kalman condition (equivalent
both to controllability and to rank controllability condition for linear systems
into the language of eigenvalues of
the matrix A and of components of the vector b in the corresponding root
spaces). We will apply this proposition below to reformulate our controllability
conditions for right-invariant and bilinear systems.
Lemma 5.1. Let A be a real $n \times n$ matrix, $b \in \mathbb{R}^n$. Then the Kalman
condition
$\mathrm{rank}\,\bigl(b,\ Ab,\ \dots,\ A^{n-1}b\bigr) = n$ (31)
is equivalent to the following conditions:
(1) the matrix A has a geometrically simple spectrum,
(2) $\mathrm{top}\,(b, \lambda) \ne 0$ for any eigenvalue $\lambda \in \mathrm{Sp}(A)$.
By analogy with Definition 2 in Sec. 2, we say that $\mathrm{top}\,(b, \lambda) \ne 0$ if the
component $b(\lambda)$ of the vector b in the root space $\mathbb{R}^n(\lambda)$ corresponding to
the eigenvalue $\lambda$ satisfies the condition
$b(\lambda) \notin (A - \lambda\,\mathrm{Id})\,\mathbb{R}^n(\lambda)$,
i.e., the vector $b(\lambda)$ has a nonzero component corresponding to the highest
adjoined vector in the (single) Jordan chain of the operator A corresponding
to $\lambda$.
To prove Lemma 5.1, we cite the following
Proposition 3. (Hautus Lemma, [22], Lemma 3.3.7.) Let A be a complex
$n \times n$ matrix, $b \in \mathbb{C}^n$. Then the Kalman condition (31) is equivalent
to the condition
$\mathrm{rank}\,\bigl(A - \lambda\,\mathrm{Id},\ b\bigr) = n$ for all $\lambda \in \mathrm{Sp}(A)$. (32)
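Both the Kalman condition (31) and the rank test (32) are straightforward to verify numerically; the following Python sketch (an added illustration, not part of the original text) checks them with numpy.

    import numpy as np

    def kalman_rank_ok(A, b):
        # Kalman condition (31): rank [b, Ab, ..., A^{n-1} b] = n
        n = A.shape[0]
        cols = [b]
        for _ in range(n - 1):
            cols.append(A @ cols[-1])
        return np.linalg.matrix_rank(np.column_stack(cols)) == n

    def hautus_ok(A, b, tol=1e-9):
        # condition (32): rank [A - lam*Id, b] = n for every eigenvalue lam of A
        n = A.shape[0]
        return all(
            np.linalg.matrix_rank(np.column_stack([A - lam * np.eye(n), b]), tol) == n
            for lam in np.linalg.eigvals(A)
        )

    A = np.array([[0.0, -1.0], [1.0, 0.0]])   # spectrum {i, -i}
    b = np.array([1.0, 0.0])
    print(kalman_rank_ok(A, b), hautus_ok(A, b))   # True True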
Proof of Lemma 5.1. In view of Proposition 3, we prove that condition (32)
is equivalent to conditions (1), (2) of Lemma 5.1.
First, we suppose that all eigenvalues of A are real; otherwise we pass
to complexification. Second, the Kalman condition (31) preserves under
changes of base in R n . That is why we assume that the matrix A is in the
Jordan normal form:
A =B @
. - l 0
Then the n \Theta (n + 1) matrix in condition (32) is represented as
denotes projection of the vector b onto the root
space of the matrix A corresponding to the eigenvalue - l .
Necessity. We assume that rank
conditions (1), (2) of Lemma 5.1.
1. If spectrum of A is not geometrically simple, then -
j. Then the matrix OE(- i ) has two zero columns, and rank OE(-
2. Suppose that the vector b has the zero $\lambda$-top for some $\lambda \in \mathrm{Sp}(A)$; for
definiteness, let $\mathrm{top}\,(b, \lambda_1) = 0$. Then the first component of b in the chosen
Jordan base equals zero, and the first row of the matrix $\phi(\lambda_1)$ is zero.
Hence $\mathrm{rank}\,\phi(\lambda_1) < n$.
Sufficiency. If conditions (1), (2) of Lemma 5.1 hold, then it is easy to
see from representation (33) that all matrices OE(- l ),
linearly independent columns and condition (32) is satisfied.
Now we obtain controllability conditions for the universal covering
and for the group G(M ) itself.
Theorem 4. Let M be an n \Theta n matrix,
is controllable on G if and only if the following
conditions hold :
(1) the matrix M has a purely complex geometrically simple spectrum,
For the group G(M ) conditions (1)-(3) are sufficient for controllability; if
conditions (30) are violated, then (1)-(3) are equivalent to controllability on
The notation top (A; -) used in Theorem 4 is explained in Definition 2 in
Sec. 2.
Remark . By Lemma 5.1, conditions (1)-(3) of the above theorem are
equivalent to the following ones:
(1) the matrix M has a purely complex spectrum,
Proof of Theorem 4. Theorem 3 (see Subsec. 5.1) is applicable to the group
G(M ), and condition (1) of Theorem 3 is satisfied.
Decompose the vector B 2 L using the base of L:
2 L (1) is equivalent to B x 6= 0. Moreover, in view of the metabelian
property of L,
By virtue of Theorem 3, the system \Gamma is controllable on G if and only if
the following conditions hold:
(3) the matrix M has a geometrically simple spectrum,
Now the proposition of the current theorem for -
For G(M ), controllability is implied by controllability on its universal
covering
conditions (30) are violated, then G(M
Let now conditions (30) be satisfied. Then the group G(M ) is a semi-direct
product of the vector group R n and the one-dimensional compact
group G 1 . But controllability conditions on such semi-direct products were
obtained by B. Bonnard, V. Jurdjevic, I. Kupka, and G. Sallet [6]: if the
compact group has no fixed nonzero points in the vector group (which is just
the case), then the controllability is equivalent to the rank controllability
condition (Theorem 1, [6]).
So we have complete controllability conditions of systems of the form
on the group G(M ) and its simply connected covering
In the simply connected case (i.e., when conditions (30) are violated) we
have Theorem 4, and otherwise the theorem of B. Bonnard, V. Jurdjevic,
I. Kupka, and G. Sallet [6] works.
x 5.3. Bilinear system. Now we apply the controllability conditions for
the group G(M) and study global controllability of the bilinear system
$\Sigma:\quad \dot{x} = u\,A\,x + b$, $u \in \mathbb{R}$, $x \in \mathbb{R}^n$,
where A is a constant real $n \times n$ matrix and $b \in \mathbb{R}^n$.
Theorem 5. The system \Sigma is globally controllable on R n if and only if
the following conditions hold:
(1) the matrix A has a purely complex spectrum,
(2) $\mathrm{rank}\,\bigl(b,\ Ab,\ \dots,\ A^{n-1}b\bigr) = n$.
Remark. By Lemma 5.1, conditions (1)-(2) of this theorem can equivalently
be formulated as follows:
(1) the matrix A has a purely complex geometrically simple spectrum,
(2) $\mathrm{top}\,(b, \lambda) \ne 0$ for any eigenvalue $\lambda \in \mathrm{Sp}(A)$.
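A hedged numerical illustration of these conditions in Python, reading "purely complex" as "no real eigenvalues" (as in part (1a) of the proof below) and checking the Kalman condition, which by Lemma 5.1 is equivalent to the eigenvalue form of the second condition:

    import numpy as np

    def rot(w):                      # 2 x 2 block with eigenvalues +/- i*w
        return np.array([[0.0, -w], [w, 0.0]])

    A = np.zeros((4, 4))
    A[:2, :2] = rot(1.0)
    A[2:, 2:] = rot(2.0)             # spectrum {i, -i, 2i, -2i}
    b = np.array([1.0, 0.0, 1.0, 0.0])   # nonzero component in each root space

    # no real eigenvalues ("purely complex" spectrum)
    cond1 = np.all(np.abs(np.linalg.eigvals(A).imag) > 1e-12)
    # Kalman condition: rank [b, Ab, A^2 b, A^3 b] = 4
    K = np.column_stack([np.linalg.matrix_power(A, k) @ b for k in range(4)])
    cond2 = np.linalg.matrix_rank(K) == 4
    print(cond1, cond2)              # True True: Theorem 5 asserts global controllability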
Proof of Theorem 5. We use the hypotheses of this theorem in the equivalent
form given in the above remark.
Sufficiency. Consider the bilinear system
where
are (n matrices. It is easy to see that the system \Sigma is
globally controllable on R n iff the system \Sigma is globally controllable in the
n-dimensional affine plane
(R
Consider the matrix Lie algebra L(A) and the corresponding Lie group
described in the previous subsection. We have
ae L(A) is a right-invariant system on the group G(A). Theorem 4
ensures that under hypotheses (1), (2) of the current theorem the system
is controllable on the group G(A). But the group G(A) acts transitively
in the plane (R , and the bilinear system \Sigma is the projection of the
right-invariant system \Gamma from the group G(A) onto the plane (R That
is why controllability of \Gamma on G(A) implies controllability of \Sigma on (R
Thus \Sigma is globally controllable on R n .
Necessity. Assume that \Sigma is globally controllable on R n .
(1a) First we show that the matrix A has no real eigenvalues. Suppose
there is at least one eigenvalue a 2 We choose a Jordan base
of the matrix A and denote by f x the corresponding
coordinates in R n . Let e k denote the maximum order root vector
coresponding to the eigenvalue a:
and k is the maximal possible integer. Then the system \Sigma implies
$\dot{x}_k = u\,a\,x_k + b_k$,
where $b_k$ is the kth coordinate of the vector b in the base $\{e_i\}$.
Now it is obvious that at least one of the half-spaces $\{x_k \ge 0\}$, $\{x_k \le 0\}$
is positively invariant for the system \Sigma, i.e., this system is not controllable.
(1b) Now we show that the spectrum Sp(A) is geometrically simple. Suppose
that for some (complex) eigenvalue - 2 Sp(A) there are at least two
linearly independent eigenvectors. Then we apply the same transformation
of Jordan chains as in Lemma 3.4 to obtain the zero component of the vector
b in the two-dimensional subspace of R n spanned by the pair of the highest
order root vectors of the matrix A (see conditions (13), (14)). Now if x k , y k
are the coordinates in R n in the transformed Jordan base corresponding to
the above-mentioned two-dimensional subspace, then the system \Sigma yields
Hence it follows that the codimension two
subspace f x is (both positive and negative) invariant for the
system \Sigma, and so it is not controllable.
(2) Finally, we show that the vector b has a nonzero -top for any eigenvalue
this is not the case, we choose any Jordan chain in the
root space corresponding to -, apply the argument from item 1.b) above,
and show that \Sigma is not controllable.
The necessity and sufficiency are now completely proved.
x 5.4. The Euclidean group in two dimensions. It is interesting to
consider how the above general theory works in the visually accessible three-dimensional
case.
Let $G = E(2)$ be the Euclidean group of motions of the plane
$\mathbb{R}^2$. E(2) is connected but not simply connected. It can be represented as
the group of $3 \times 3$ matrices of the form
$\begin{pmatrix} \cos t & -\sin t & s_1 \\ \sin t & \cos t & s_2 \\ 0 & 0 & 1 \end{pmatrix}$, where $t, s_1, s_2 \in \mathbb{R}$.
The corresponding matrix Lie algebra L is spanned by the matrices
$x = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$, $y = E_{13}$, $z = E_{23}$.
Consider the system $\Gamma = \{A + uB \mid u \in \mathbb{R}\} \subseteq L$ on $\widetilde{E}(2)$, the universal covering of
E(2). A complete characterization of controllability of \Gamma on $\widetilde{E}(2)$ is derived
from Theorem 4.
Theorem 6. The system \Gamma is controllable on g
E(2) if and only if the
vectors A, B are linearly independent and
Let us compare the controllability conditions for $\widetilde{E}(2)$ with the following
conditions for E(2) derived from Theorem 1, [6]:
Theorem 7. The system \Gamma is controllable on E(2) if and only if the
vectors A, B are linearly independent and span(A; B) 6ae span(y; z).
Finally, Theorem 5 gives the following geometrically clear proposition.
Theorem 8. The system
$\dot{x} = u\,A\,x + b$, $u \in \mathbb{R}$, $x \in \mathbb{R}^2$,
is controllable on the plane $\mathbb{R}^2$ if and only if:
(1) the matrix A has a purely complex spectrum,
(2) $b \ne 0$.
Acknowledgment. The author thanks Professor Gérard Jacob and the Laboratoire
d'Informatique Fondamentale de Lille, Université Lille I, where this
paper was started, for hospitality and excellent conditions for work. The
author is also grateful to Professor A. A. Agrachev for valuable discussions
of the results presented in this work.
--R
System theory on group manifolds and coset spaces.
Control systems on
Control systems subordinated to a group action: Acces- sibility
Transitivity of families of invariant vector fields on the semidirect products of
Controllability of right invariant systems on real simple
Controllability on classical
Controllability of right invariant systems on real simple
Controllability of nilpotent systems.
Controllability of linear vector fields on
Foundations of
Notes Math.
Controllability of systems on a nilpotent Lie group.
Controllability on real reductive
Maximal subsemigroups of
Controllability of hypersurface and solvable invariant systems.
Mathematical control theory: Deterministic finite dimensional systems.
--TR
Mathematical control theory: deterministic systems
--CTR
Dirk Mittenhuber, Controllability of Solvable Lie Algebras, Journal of Dynamical and Control Systems, v.6 n.3, p.453-459, July 2000
Yu. L. Sachkov, Classification of Controllable Systems on Low-Dimensional Solvable Lie Groups, Journal of Dynamical and Control Systems, v.6 n.2, p.159-217, April 2000
Dirk Mittenhuber, Controllability of Systems on Solvable Lie Groups: The Generic Case, Journal of Dynamical and Control Systems, v.7 n.1, p.61-75, January 2001 | right-invariant systems;controllability;lie groups;bilinear systems |
608323 | Heuristic Methods for Large Centroid Clustering Problems. | This article presents new heuristic methods for solving a class of hard centroid clustering problems including the p-median, the sum-of-squares clustering and the multi-source Weber problems. Centroid clustering is to partition a set of entities into a given number of subsets and to find the location of a centre for each subset in such a way that a dissimilarity measure between the entities and the centres is minimized. The first method proposed is a candidate list search that produces good solutions in a short amount of time if the number of centres in the problem is not too large. The second method is a general local optimization approach that finds very good solutions. The third method is designed for problems with a large number of centres; it decomposes the problem into subproblems that are solved independently. Numerical results show that these methods are efficientdozens of best solutions known to problem instances of the literature have been improvedand fast, handling problem instances with more than 85,000 entities and 15,000 centresmuch larger than those solved in the literature. The expected complexity of these new procedures is discussed and shown to be comparable to that of an existing method which is known to be very fast. | INTRODUCTION
Cluster analysis is to partition a set of entities into subsets, or clusters, such that the subsets are
homogeneous and separated from one another, considering measurements describing the entities. This
problem already preoccupied Aristotle and appears in many practical applications. For instance, it
was studied by naturalists in the XVIIIth century for classifying living species. In this paper, we
propose new efficient methods for centroid clustering problems. More precisely, we are going to
apply our methods to problems of the following type: given n entities e i with weights w
it is searched p centres c j (j = 1, ., p) minimizing , where
measures the dissimilarity between e i and c j . However, the methods are very general and
may be applied to other problems or objective functions.
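As a concrete illustration (not taken from the paper), the following Python function evaluates this objective for an arbitrary dissimilarity measure d; with the squared Euclidean distance it gives the SSC objective discussed below, and with the Euclidean distance in the plane it gives the MWP objective.

    def clustering_cost(entities, weights, centres, d):
        # sum over entities of w_i * min_j d(e_i, c_j)
        return sum(w * min(d(e, c) for c in centres)
                   for e, w in zip(entities, weights))

    # example: squared Euclidean distance in the plane (SSC objective)
    d_sq = lambda e, c: (e[0] - c[0]) ** 2 + (e[1] - c[1]) ** 2
    entities = [(0.0, 0.0), (1.0, 0.0), (10.0, 10.0)]
    weights = [1.0, 1.0, 1.0]
    centres = [(0.5, 0.0), (10.0, 10.0)]
    print(clustering_cost(entities, weights, centres, d_sq))   # 0.25 + 0.25 + 0.0 = 0.5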
If the entities are described by their co-ordinates in $\mathbb{R}^m$, $d(e_i, c_j)$ is typically the distance or the
square of the distance between $e_i$ and $c_j$. In the latter case, the problem is the well-known
sum-of-squares clustering (SSC) (see e.g. Ward (1963), Edwards and Cavalli-Sforza (1965), Jancey
(1966), MacQueen (1967)). Many commercial software packages implement approximation
procedures for this hard problem. For instance, the popular S-Plus statistical analysis software incorporates
the k-means iterative relocation algorithm of Hartigan (1975) to try to improve the quality of
given clusters. For exact algorithms for SSC, see e.g. Koontz, Narendra and Fukunaga (1975) and
Diehr (1985).
1. A former version of the article was entitled "Heuristic methods for large multi-source Weber problems".
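For reference, here is a minimal Lloyd-type relocation iteration for SSC in Python (a generic sketch; Hartigan's k-means variant used in S-Plus differs in its relocation rule): it alternately assigns each entity to its nearest centre and moves each centre to the weighted centroid of its cluster, and neither step increases the SSC objective.

    import numpy as np

    def lloyd_sse(points, weights, centres, iters=100):
        # points: (n, m) array, weights: (n,), centres: (p, m) initial centres
        centres = centres.copy()
        for _ in range(iters):
            # allocation step: index of the nearest centre for each point
            d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
            alloc = d2.argmin(axis=1)
            # location step: weighted centroid of each non-empty cluster
            new = centres.copy()
            for j in range(len(centres)):
                mask = alloc == j
                if mask.any():
                    w = weights[mask][:, None]
                    new[j] = (w * points[mask]).sum(axis=0) / w.sum()
            if np.allclose(new, centres):
                break
            centres = new
        return centres, alloc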
In case 1) the space is $\mathbb{R}^2$, i.e. the Euclidean plane, 2) the centres can be placed anywhere in
the plane, and 3) the dissimilarity measure is the Euclidean distance, the problem is called the multi-source
Weber problem (MWP). This problem occurs in many practical applications, such as the placement
of warehouses, emitter antennas, public facilities, airports, emergency services, etc. See e. g. Saaty
(1972), Dokmeci (1977), Fleischmann and Paraschis (1988), Bhaskaran (1992) and Lentnek,
MacPerson and Phillips (1993) that describe practical applications that need to solve MWPs with up
to more than 1700 entities and 160 centres. For exact methods solving the MWP, see e. g. Rosing
(1992) and Krau (1997). For a unified comparison of numerous approximation algorithms, see Brim-
berg et al. (1997).
In case dissimilarities between entities are given by an arbitrary n - n matrix and the centres can
be placed on the entities only, the problem is called the p-median problem (PMP). The last is a
well-known NP-hard problem, see e. g. Hakimi (1965), ReVelle and Swain (1970), Mirchandani and
Francis (1990) and Daskin (1995). For exact methods solving the PMP, see e. g. Erlenkotter (1978),
Rosing, ReVelle and Rosing-Vogelaar (1979), Beasley (1985) and Hanjoul and Peeters (1985). For
an introduction to location theory and clustering see also Gordon (1981), Späth (1985), Wesolowsky
(1993).
The new methods presented in this paper, candidate list search (CLS), local optimization (LOPT)
and decomposition/recombination (DEC), have been successfully applied to SSC, MWP and PMP,
but they can be extended to solve other problems. For example, the CLS and LOPT methods can be
applied to any location-allocation problems as soon as two appropriate procedures are available: the
first one for allocating entities to centres and the second one for optimally locating a centre, given the
entities allocated to it. For SSC, MWP or PMP, the allocation procedure simply consists in finding
the nearest centre to each entity. For other problems, this procedure must be more elaborated (e. g. if
there is a constraint limiting the sum of the weights of the entities allocated to a centre).
The LOPT method proceeds by local optimization of subproblems. This is a general optimization
method that can be applied to problems not directly related to clustering. For example,
multi-depot vehicle routing problems can be approached along its lines: the depots being identified
with the centres and the optimization of sub-problems being a procedure customized for solving relatively
small multi-depot vehicle routing problems. Also, the approach of Taillard (1993) for solving
large vehicle routing problems can be viewed as a special application of LOPT where the centres are
identified with the centres of gravity of the vehicle tours and the optimization procedure being an
efficient taboo search solving vehicle routing problems with few vehicles.
In order to remain relatively concise, we are going to present applications of our methods for
PMP, SSC and MWP only, but with a special attention to the under studied MWP. Indeed, while the
MWP by itself does not embrace all of the problem features found in some practical applications,
this model can be very useful, especially for real applications dealing with many thousands of entities.
In Figure 1, we show the decomposition into 23 clusters of a very irregular problem built on real
data, involving 2863 cities of Switzerland. The large black disks are the centres while the small disks
are the cities (or entities). Cities allocated to the same centres have the same colour. In this figure, we
have also added the federal frontiers and the lakes. Politically, Switzerland is composed of 23 states;
physically, it is composed of extremely densely populated regions (the Plateau) and regions
without cities (Alps, lakes). We see in Figure 1 that the positions of the centres are sensible (no centres
are located outside Switzerland or on a mountain or in a lake) and that the decomposition generally
respects the natural barriers (spaces without cities); there are very few entities that are separated
from their centre by a chain of mountains 2 .
Figure
1 has to be compared with Figure 2 showing the decomposition of Switzerland into the
same number of clusters obtained by solving a PMP with dissimilarity measure being the true shortest
paths (the road network having more than 30000 connections). We see in this figure that the PMP
solution is very similar to the MWP one (21 centres are placed almost at the same position; the main
difference is that there are less entities allocated to a centre located on the other border of a lake).
However, solving this PMP is time consuming: the computation of the shortest paths matrix took
2. The expert can even identify a number of Swiss Cantons in this figure. There are however differences that could be
appropriate for solving political problems, such as the union of the South part of Jura to the Canton of Jura, the separation
of the German-speaking part of Valais or the union of the small primitive Cantons.
Figure 1: Decomposition of Switzerland into 23 clusters by solving a multi-source Weber Problem.
100 times longer than finding a very good MWP solution. Therefore, solving an MWP in a first
phase before attacking the true problem (as exemplified by a PMP or a multi-depot vehicle routing
problem) can be pertinent, even with an irregular, real problem.
Since the clustering problems treated in this paper are difficult, they can be solved exactly for
instances of moderate size only. For solving larger instances, as they often arise in practice (see the 6800 entities, 380000 network nodes instance of Hikada and Okano (1997)), it is appropriate to use heuristic methods. However, most methods in the literature present the same disadvantage: a large increase of the computing time as the number of centres increases and, simultaneously, a decrease in the quality of the solutions produced. The aim of this paper is to show that it is possible
to partition a problem with a large number of centres into subproblems that are much smaller, in
order to benefit from the advantages of the existing methods for small problems while rapidly producing
solutions of good quality to the original problem.
The article is structured as follows: in Section 2, we present in detail the alternate location-allocation
(ALT) procedure used as a subprocedure of our candidate list search (CLS), showing how it
can be implemented efficiently. ALT was first proposed by Cooper (1963) for the MWP. However, it
can be generalised for any location-allocation problem as soon as a location procedure and an allocation
procedure are available. In this section, we also present CLS, our basic procedure for solving the
subproblems generated by partition methods. In Section 3, we present two partition methods for
large problems. The first one, LOPT, can be viewed either as a generalization of the ALT procedure
Figure 2: Decomposition of Switzerland into 23 clusters obtained by solving a PMP with true shortest paths.
or as a restricted CLS for the post-optimization of a given solution. The second decomposition
method, DEC, splits a large problem into independent subproblems and the solutions of these sub-problems
are optimally mixed together to create a solution to the original problem. Section 4 analyses
the computational performances of the methods proposed.
2. BASIC PROCEDURES ALT AND CLS.
The procedures ALT and CLS are used as subprocedures in the decomposition methods we pro-
pose. Referring to the paper of Cooper (1963) is not sufficient to understand the procedure ALT well,
since certain details of this algorithm are not discussed in the original paper and the choices made for
implementing the procedure can have a profound impact on its effectiveness. Moreover, we have
adapted this procedure to accelerate its execution.
2.1. Generalized ALT procedure.
The iterative location-allocation procedure of Cooper (1963) may be sketched as in Algorithm 1.
Cooper designed this algorithm for the MWP. In this case, the location procedure can be implemented using a procedure like that of Weiszfeld (1937). For the SSC, the centre of gravity of the entities is the optimum location of the centre. For the PMP, the optimum location of a centre can be obtained by enumerating all possible locations for the centre. The allocation procedure is very simple for SSC, PMP and MWP: each entity is allocated to its nearest centre. For other problems, this procedure can be more difficult to implement.
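As an illustration of the location step for the MWP, the following minimal Python sketch implements a Weiszfeld-type fixed-point iteration for a single centre; the starting point, the iteration cap and the guard against zero distances are our own assumptions rather than details taken from the paper.

import numpy as np

def weiszfeld(points, weights, n_iter=30, eps=1e-9):
    # Approximate the weighted Weber point (1-median) of a set of entities.
    c = np.average(points, axis=0, weights=weights)      # start at the weighted centroid
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(points - c, axis=1), eps)  # avoid division by zero
        w = weights / d
        c_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(c_new - c) < eps:
            break
        c = c_new
    return c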
Two steps of this algorithm have to be discussed: the choice of the initial solution at step 1, and the
repositioning of centres that are not used at step 2a. For the choice of an initial solution, many variants
have been tested:
- Position the centres on p randomly selected entities, the probability of choosing an entity being proportional to its weight.
- Choose the positions of the centres one by one, by trying to position each of them on every entity and selecting the position that minimizes the objective function.
The first variant takes into account the structure of the problem, i.e. the geographical and weighting spread of the entities. It produces relatively good initial solutions, especially for problems with non-uniform weights.
Input: Set of entities with weights and a dissimilarity measure,
problem-specific allocation and location procedures.
1) Choose an initial position for each centre.
2) Repeat the following steps while the location of the centres varies:
2a) Allocate the entities given the centre locations.
2b) Given the allocation made at step 2a, locate each centre optimally.
Algorithm 1: Locate-allocate procedure of Cooper, 1963 (ALT).
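To make the alternation of Algorithm 1 concrete, here is a minimal Python sketch of ALT for the SSC case, where the location step is the weighted centroid of the allocated entities and the allocation step assigns each entity to its nearest centre; the Euclidean setting, the convergence test and the handling of unused centres (left in place) are our own simplifying assumptions.

import numpy as np

def alt_ssc(points, weights, centres, max_iter=100):
    # Cooper-style locate-allocate loop for sum-of-squares clustering (SSC).
    centres = centres.copy()
    for _ in range(max_iter):
        # 2a) allocate each entity to its nearest centre
        d2 = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(1)
        # 2b) relocate each used centre at the weighted centroid of its entities
        new_centres = centres.copy()
        for k in range(len(centres)):
            mask = labels == k
            if mask.any():
                new_centres[k] = np.average(points[mask], axis=0, weights=weights[mask])
        if np.allclose(new_centres, centres):
            break
        centres = new_centres
    return centres, labels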
The second variant induces the ALT procedure to produce the best solutions on average, but its computing time is high: for each of the p centres, O(n) positions have to be tried, and for each of these positions, one has to verify whether each entity is serviced by the new position. This implies a procedure that operates in O(n^2 p) time, while the other variant can be done much faster. To reduce the complexity of this variant and to make it non-deterministic 3 , we adopt the following O(n p) greedy procedure, in the spirit of that of Dyer and Frieze (1985):
After having repositioned the centres at step 2b of the ALT procedure, it may happen that the allocation
of the next iteration, at step 2a, does not use all the centres. The unused centres can be relocated
to improve the current solution. We have adopted the following policy:
Determine the centre that contributes most to the objective function and place an unused centre
on its most distant entity; re-allocate the entities and repeat this as long as unused centres exist.
Starting with a very bad initial solution (O(p) centres that are not used), this re-location policy could lead to an O(p^2 n) procedure. However, our initial solution generator (as well as our CLS procedure presented below) furnishes solutions to the ALT procedure that only exceptionally contain an unused centre (for the MWP, we have observed somewhat less than one occurrence in 1000, even for a large number of centres). So, the re-location policy has almost no influence on the solution quality if one starts with a "good" initial solution, as we do. Mladenovic and Brimberg (1996) have shown that the re-location policy can have a substantial effect on MWP solution quality if one starts with "bad" initial solutions.
Complexity of ALT for PMP, SSC and MWP.
First, let us introduce a new complexity notation: in the remainder of the paper, let Ô(.) denote an empirically estimated complexity, while O(.) denotes the standard worst-case complexity. For example, both quick sort and bubble sort algorithms operate in O(n^2) time. In practice however, it is
3. In the context in which we use ALT, it is more interesting to have a non-deterministic procedure. First, it may happen that ALT is called many times for solving the same (sub-)problem. With a non-deterministic procedure, repeating exactly the same work is avoided. Then, let us mention that only non-deterministic procedures can solve NP-hard problems in polynomial time if P ≠ NP. Therefore, our personal view is to consider non-deterministic procedures as potentially more interesting than deterministic ones, even if there is no theory supporting this for the moment.
Input: Set of entities with weights and a dissimilarity measure.
1) Choose an entity at random and place the first centre on this entity.
2) Allocate all entities to this centre and compute their weighted dissimilarities.
3) For k = 2, ..., p:
3a) Find the entity that is the farthest from a centre (weighted dissimilarities) and place the k th centre at that entity's location.
3b) For each entity i that is allocated to a centre farther than centre k: allocate entity i to centre k and update its weighted dissimilarity.
Algorithm 2: Initial solution generator.
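The following Python sketch mirrors Algorithm 2 under the assumption of Euclidean dissimilarities; the incremental update of the weighted distances in step 3b is what keeps the procedure in O(n p) time.

import numpy as np

def greedy_init(points, weights, p, rng=np.random.default_rng()):
    # Greedy initial centre positions in the spirit of Algorithm 2.
    n = len(points)
    centres = [points[rng.integers(n)]]                               # step 1
    wdist = weights * np.linalg.norm(points - centres[0], axis=1)     # step 2
    for _ in range(1, p):                                             # step 3
        k = int(wdist.argmax())                                       # 3a) farthest (weighted) entity
        centres.append(points[k])
        new_wd = weights * np.linalg.norm(points - points[k], axis=1)
        wdist = np.minimum(wdist, new_wd)                             # 3b) switch entities closer to the new centre
    return np.array(centres)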
observed that quick sort has an Ô(n log(n)) behaviour while bubble sort has an Ô(n^2) behaviour (Rapin, 1983) 4 . There are also algorithms for which the theoretical worst-case complexity is not established. However, observing the average computing times obtained by executing an algorithm on many instances can provide a good idea of its complexity in practice. The advantage of this notation is to make a distinction between practice and theory. Indeed, it is common to read that the complexity of quick sort is O(n log(n)), which is not true, formally. Moreover, the "^" notation is often used by statisticians for estimated values.
The complexity of the ALT procedure can be estimated as follows. The complexity of Step 2a (allocation of the entities to a centre) is O(p n). Indeed, for the problems under consideration, one has to allocate each entity to its nearest centre. For large values of p, this step can be substantially accelerated by observing that only the centres that have moved from one iteration to the next can modify the allocation previously made. (Compare the computing times of the old and new ALT implementations in Table 3.)
Step 2b can be performed in O(n) for the SSC. Indeed, each entity contributes only once in the
computation of the position of each centre (independently from the number of centres). For the
MWP, the optimum location can be found with a Weiszfeld-like procedure (1937) that repeats an
unknown number of gradient steps. We have arbitrarily limited this number to 30. So, in our implementation, Step 2b has a complexity of O(n). For small values of p, the computing time of this step dominates. For the PMP, let us suppose that O(n/p) entities are allocated to each centre (this is reasonable if the problem is relatively regular). 5 For each centre, one has to scan O(n/p) possible locations and the evaluation of one position can be performed in O(n/p). So, the total complexity of Step 2b is O(n^2/p) for locating the p centres.
Since p is bounded by n, the global complexity of steps 2a and 2b is bounded by O(n^2) for SSC, MWP and PMP. Now, we have to estimate the number of repetitions of Loop 2, which is unknown. However, in practice, we have observed that the number of iterations seems to be polynomial in n and p. Therefore, we will use an Ô(p^a n^b) estimation of the overall complexity of our implementation of the ALT procedure. In this study, we are mostly interested in instances with large values of p, so we have considered instances with n/5 ≤ p ≤ n/3 for evaluating the a and b values for the various clustering problems. For the SSC and MWP, we have considered about 7000 instances uniformly generated with up to 9400 entities. For the PMP, we have considered about 38000 runs of the ALT procedure. The PMP instances were based on the 40 different distance matrices proposed by Beasley (1985). The number of entities for these instances ranges from 100 to 900.
4.More precisely, such a behaviour can be mathematically proven. In that case, we propose to follow the usual
notation in statistics and to write for an expected running time derived from a mathematical analysis. Therefore
it can be written that the complexity of quick sort is .
5.Without this assumption, the complexity is higher; with stronger assumptions (e. g. Euclidean distances), a
lower complexity can be derived.
For the SSC, we have estimated a ≈ 0.83 and b ≈ 1.19; for the PMP the estimation is a ≈ 0.70 and b ≈ 1.23, and for the MWP a ≈ 0.85 and b ≈ 1.34. So, if p grows linearly with n, the estimated complexity of the ALT procedure is not far from Ô(n^2) for all these problem types. The memory requirement is O(n) for the SSC and MWP and O(n^2) for the PMP, i.e. equivalent to the data size.
2.2. Candidate list search (CLS).
CLS is based on a greedy procedure that randomly perturbs a solution that is locally optimal
according to the ALT procedure. Then, ALT is applied to the perturbed solution and the resulting
solution is accepted only if it is better than the initial one, otherwise one returns to the initial solu-
tion. The perturbation of a solution consists in eliminating a centre and in adding another one, located on an entity. The process can be repeated until all entity/centre pairs have been scanned. This greedy procedure finds very good solutions: in Table 1, we report the quality of the solutions found when it is applied to the 40 PMP instances of Beasley (1985). These instances have been solved exactly, and the quality of a solution is given in per cent above the optimum value. The greedy procedure was executed 20 times for each instance. For 8 instances, each run found the global optimum, and all instances but one were optimally solved at least once.
For MWP instances with 50 (respectively 287) entities, we observed that the greedy procedure finds a global optimum in more than 60% (respectively 40%) of the cases. For the SSC, we succeeded in improving all the best known solutions for 16 instances with 1060 entities and 10 to 160 centres (see Table 4).
For the p-median problem, this type of perturbation has been used for a long time (cf. Goodchild and Noronha, 1983, Whitaker, 1983, Glover, 1990, Voß, 1996, Rolland, Schilling and Current, 1997); in this case, Glover proposes an efficient way to evaluate the cost of eliminating a centre: during the allocation phase, the second closest centre is memorized - this can be done without increasing the complexity. However, evaluating the decrease of the cost due to the opening of a centre on an entity takes a time proportional to n. Therefore, finding the best possible perturbation has a complexity of O(n^2 p), without considering the application of the ALT procedure.
This complexity is too high for large instances; thus we make use of a candidate list strategy scheme proposed by Glover (1990) for implementing a probabilistic perturbation mechanism. The idea is to identify the centre to close by a non-deterministic but systematic approach. The entity at which a new centre is opened is also randomly chosen, but its weighted distance from its previously allocated centre must be higher than the average. The process is repeated for a number q of iterations, specified by the user. Algorithm 3 presents CLS in detail.
Table 1: Quality of the greedy procedure for Beasley's PMP instances (% above optimum).
The most time-consuming part of this algorithm is step 3e, i.e. the application of the ALT procedure to the perturbed solution. As seen above, we can estimate the complexity of this step as Ô(p^a n^b). Therefore, the complexity of CLS is Ô(q p^a n^b). From now on, we write CLS(q) for the improvement of a given solution with q iterations of the CLS procedure.
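The perturb-and-reoptimize loop of CLS can be sketched in Python as follows; the way the centre to close and the entity to open are drawn, and the generic alt and cost helpers, are our own assumptions and do not reproduce the exact candidate-list bookkeeping of Algorithm 3.

import numpy as np

def cls(points, weights, centres, q, alt, cost, rng=np.random.default_rng()):
    # Candidate list search: q random close/open perturbations, each improved by ALT.
    best = alt(points, weights, centres)
    best_val = cost(points, weights, best)
    for _ in range(q):
        cand = best.copy()
        j = rng.integers(len(cand))                                   # centre to close
        d = np.linalg.norm(points[:, None] - best[None], axis=2).min(1)
        wd = weights * d
        far = np.flatnonzero(wd > wd.mean())                          # entities farther than average
        if far.size == 0:
            continue
        cand[j] = points[rng.choice(far)]                             # open a new centre on one of them
        cand = alt(points, weights, cand)                             # re-optimize the perturbed solution
        val = cost(points, weights, cand)
        if val < best_val:                                            # accept only if it improves
            best, best_val = cand, val
    return best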
3. DECOMPOSITION METHODS.
In this section, we propose two decomposition methods for solving problems with a large
number p of centres. The complexity of these methods is not higher than that of the ALT procedure, while they produce solutions of much higher quality. The first decomposition technique, LOPT, starts with any solution with p centres and improves it by considering a series of subproblems involving r < p centres and the entities allocated to them. The subproblems are solved by our CLS algorithm. This method can be viewed as a local search defined on a very large neighbourhood involving the re-location of up to r centres at a time. Another point of view is to consider this procedure as a generalization of
ALT. Indeed, a solution produced by ALT is locally optimal if we consider any subset of entities
allocated to a single centre: every entity is serviced by the nearest centre, and the centres are optimally
positioned for the subset of entities they are servicing. Our procedure produces a solution that
is sub-optimal (since the subproblems are solved in a heuristic way and since we do not consider all
subsets of r centres) for subsets of entities allocated to r centres. A third point of view is to consider LOPT as a CLS procedure with a much smaller list of candidate moves compared to the CLS presented above.
The second decomposition method, DEC, partitions the problem into t smaller subproblems.
These subproblems are then solved with our CLS for various numbers of centres. A solution to the
(location of the p centres), parameter
q.
Generate p, a random permutation of the elements {1, ., p} and
-, a random permutation of the elements
the distance of the most (weighted) distant
entity.
3c) While the weighted distance from entity - i to the nearest
centre is lower than (d
3d) Close centre p j and open a new one located at entity - i to
obtain a perturbed solution s
with ALT to obtain s k ''
new random permutation p.
Algorithm 3: Candidate list search (CLS).
initial problem is then found by combining solutions of the subproblems. To decompose the initial
problem, we solve an intermediate problem with t centres with our CLS procedure. Each set of entities
allocated to a centre of the intermediate problem is considered as an independent subproblem.
3.1. Local optimization (LOPT).
The basic idea of LOPT is to select a centre, a few of its closest centres and the set of entities
allocated to them to create a subproblem. We try to improve the solution of this subproblem with
CLS. If an improved solution is found, then all the selected centres are inserted in a candidate list C,
otherwise the first centre used for creating the subproblem is removed from C. Initially, all the centres
are in C and the process stops when C is empty. LOPT has two parameters: r, the number of centres
of the subproblems, and s, the number of iterations of each call to CLS. Algorithm 4 presents the LOPT method more formally.
Complexity of LOPT.
To estimate the complexity of LOPT we make two assumptions. First we assume that O(n/p)
entities are allocated to each centre (this hypothesis is reasonable if the problem instance is relatively
uniform) and second that loop 3 is repeated Ô(p^g n^l) times. Empirically, we have observed that g is less than 1 and l is close to 0 (see Table 8); we estimate that the value of g is about 0.9 and l is about 0 (for the LOPT parameters we have chosen and for the MWP). Then, the complexity of LOPT can be established as follows:
Steps 3a and 3d have a complexity of O(p); step 3b has a complexity of O(r p); step 3c solves a problem with r centres and O(r n/p) entities, which leads to a complexity of Ô(s r^a (r n/p)^b). This leads to a total complexity of Ô(p^g n^l (r p + s r^a (r n/p)^b)). If r and s are fixed and if p grows linearly with n, the complexity of the LOPT procedure is therefore roughly Ô(n^(1+g+l)). This complexity seems to be similar to that of the ALT procedure. In practice, step 3c of the LOPT procedure takes most of the computing time, even if step 3b has a higher expected complexity for extremely large p. Indeed,
for fixed n, we have always observed that the computing time diminishes as p increases, even for p
larger than 10000 (see Tables 4 to 8). From now on, we denote by LOPT(r, s) the version of the
LOPT procedure using parameters r and s. The memory requirement of the LOPT procedure is O(n).
Input: initial position of the p centres, parameters r and s.
Initialize the candidate list C with all the centres.
3) While C ≠ ∅, repeat the following steps:
3a) Randomly select a centre i ∈ C.
3b) Let R be the subset of the r closest centres to i (i ∈ R).
3c) Consider the subproblem constructed with the entities allocated to the centres of R and optimize this subproblem with r centres with CLS(s).
3d) If no improved solution has been found at step 3c, set C = C \ {i}; else set C = C ∪ R.
Algorithm 4: Local optimization procedure (LOPT).
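A minimal Python sketch of the LOPT loop follows; it treats CLS and ALT as black boxes passed in as functions, and uses Euclidean distances between centres to build the subproblems, which is an assumption on our part.

import numpy as np

def lopt(points, weights, centres, r, s, cls, alt, cost, rng=np.random.default_rng()):
    # Local optimization: repeatedly re-optimize subproblems made of r close centres with CLS.
    centres = centres.copy()
    C = set(range(len(centres)))                                      # candidate list: all centres initially
    while C:
        i = int(rng.choice(list(C)))                                  # 3a) pick a candidate centre
        R = np.argsort(np.linalg.norm(centres - centres[i], axis=1))[:r]   # 3b) r closest centres
        labels = np.linalg.norm(points[:, None] - centres[None], axis=2).argmin(1)
        mask = np.isin(labels, R)                                     # entities served by the centres of R
        old_val = cost(points[mask], weights[mask], centres[R])
        new_sub = cls(points[mask], weights[mask], centres[R], s, alt, cost, rng)   # 3c)
        if cost(points[mask], weights[mask], new_sub) < old_val:      # 3d) improvement found
            centres[R] = new_sub
            C |= {int(k) for k in R}
        else:
            C.discard(i)
    return centres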
3.2. Decomposition algorithm (DEC).
LOPT optimizes the position of a given number of centres dynamically, but it is also possible to
proceed to a static decomposition of the entities, and solve these subproblems with a variable
number of centres. A solution to the complete problem may be found by choosing the right number
of centres for each subproblem. Naturally, the total number of centres must be limited to p. This
re-composition may be performed efficiently and optimally with dynamic programming.
The crucial phase of the algorithm is the first decomposition: if the subproblems created do not
have the right structure, it is impossible to obtain a good solution at the end. The more irregular the
problem is (i. e. where the entities are not uniformly distributed, or their weights differ widely), the
more delicate its decomposition is. For partitioning the problem, we use our CLS procedure applied
to the same set of entities but with a number t < p of centres.
The subproblems created may have very different sizes: a subproblem may consist of just a few
entities with very high weights or it may comprise a large number of close entities. Thus it could be
difficult to evaluate the number of centres to be assigned to a subproblem. Let n_i be the number of entities of subproblem i. Suppose that subproblem i is solved with j = 1, ..., n_i centres, and let f_ij be the value of the objective function when solving subproblem i with j centres. To build a solution to the initial problem, we have to find the numbers of centres j_1, ..., j_t that minimize the sum over i of f_{i, j_i} subject to j_1 + ... + j_t = p. This problem is a kind of knapsack, and it can be decomposed and solved recursively by dynamic programming in O(t p) time. This procedure can also produce all the solutions with t, t+1, ..., n centres in O(t n) time. Such a feature can be very useful when we want to solve a problem for which the number of centres is unknown and must be determined, as for example when there is an opening cost for each centre (the opening cost just has to be added to the f_ij values).
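The recombination step can be written as the following Python dynamic program over the values f[i][j]; the dense table layout (best total cost using the first i subproblems and m centres) is one standard way of solving this knapsack-like problem, and is our own choice of implementation.

import numpy as np

def recombine(f, p):
    # Choose j_1..j_t summing to p and minimizing sum_i f[i][j_i] by dynamic programming.
    # f[i][j] is the objective value of subproblem i solved with j centres
    # (np.inf where no solution has been computed yet; index j = 0 is unused).
    t = len(f)
    best = np.full((t + 1, p + 1), np.inf)
    choice = np.zeros((t + 1, p + 1), dtype=int)
    best[0, 0] = 0.0
    for i in range(1, t + 1):
        for m in range(i, p + 1):                     # at least one centre per subproblem
            max_j = min(len(f[i - 1]) - 1, m - i + 1)
            for j in range(1, max_j + 1):
                v = best[i - 1, m - j] + f[i - 1][j]
                if v < best[i, m]:
                    best[i, m], choice[i, m] = v, j
    js, m = [], p                                     # backtrack the optimal numbers of centres
    for i in range(t, 0, -1):
        js.append(int(choice[i, m]))
        m -= choice[i, m]
    return best[t, p], js[::-1]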
However, solving each subproblem with 1, ..., n_i centres is time consuming. If the problem is relatively uniform, one can expect that the optimum number of centres found by dynamic programming is not far from p/t for all subproblems. So, we propose to first solve the subproblems for only three different numbers of centres: p/t - 1, p/t and p/t + 1 (rounded to integers). A subproblem is then solved with one less (respectively one more) centre when the optimum number of centres determined by dynamic programming is exactly the lower (respectively higher) number for which a solution was computed. Algorithm 5 presents our DEC procedure in detail.
Complexity of DEC .
For analysing the complexity of DEC, we make the following assumptions: First, each subproblem
has O(n/t) entities, second, each subproblem is assigned O(p/t) centres and third, the number of
repetitions of loop 5 is a constant (i. e. the total number of subproblems solved with CLS in steps 3
and 5a is in O(t) ). These assumptions are empirically verified if the problem instances are relatively
uniform (see Table 8). With these assumptions, the complexity of DEC can be established as
follows: Step 1 is in Ô(u t^a n^b); steps 3 and 5a can be performed in Ô(v t^(1-a-b) p^a n^b); finally, the complexity of the dynamic programming, in steps 4 and 5b, is O(t p). The overall complexity of DEC strongly depends on the parameter t. As shown in the next section, the quality of the solutions produced by CLS slightly diminishes as the number of centres increases. We therefore seek to reduce the number of centres in the auxiliary problem and in the subproblems as much as possible, and we have chosen t accordingly. The overall complexity of our implementation of DEC is given by the sum of these three terms; if u and v are constant and p grows linearly with n, this complexity is lower than that of the ALT procedure. The memory requirement is O(n^(3/2)). DEC requires more memory than CLS and LOPT, but the increase is not too high and we have succeeded in implementing all the algorithms on a personal workstation. From now on, we denote by DEC(u, v) the use of the DEC procedure with parameters u and v.
Input: Set of entities with dissimilarity measure.
1) Solve an auxiliary problem with t centres with CLS(u).
2) The subsets of entities allocated to the same centre form t independent subproblems.
3) For each subproblem i do: solve subproblem i with CLS(v) with p/t - 1, p/t and p/t + 1 centres and update the f_ij associated.
4) Find a collection j_1*, ..., j_t* of optimum numbers of centres to attribute to each subproblem with dynamic programming.
5) While some j_i* is equal to the smallest or largest number of centres for which subproblem i has been solved, repeat:
5a) For all such (i, j_i*), solve subproblem i with one less (respectively one more) centre using CLS(v) and update the f_ij associated.
5b) Find a new collection j_1*, ..., j_t* of optimum numbers of centres to attribute to each subproblem with dynamic programming.
Algorithm 5: The decomposition procedure (DEC).
4. NUMERICAL RESULTS.
4.1. Test problems.
For the numerical results presented in this section, we consider six sets composed of 654, 1060,
2863, 3038, 14051 and 85900 entities respectively. The 2863 entities set is built on real data: the
entities are the cities of Switzerland and the weight of each city is the number of inhabitants. This set
is denoted CH2863.
9 130936.12 160 178764596.3 170 260281.77 170 1890823.0 5000 254125361
14 84807.669 250 126652250.1 300 184832.94 300 1404028.4 10000 166535699
50 29338.011 700 41923352.49 800 94301.618 800 824127.49
70 21465.436 900 28112316.53 1000 78458.720 1000 725300.72
90 17514.423 2000 475580.22
100 16083.535 U1060 2500 409677.92
Table 2: Number of centres and best solution values of the MWP instances.
The other sets correspond to the travelling salesman problems that can be found under the names P654, U1060, Pcb3038, Brd14051 and Pla85900 in the TSPLIB compiled by Reinelt (1995). For these sets, all entities have a weight of one and the dissimilarity between two entities is the Euclidean distance (for PMP and MWP) or the square of the Euclidean distance (for the SSC). From these six
sets of entities, we have constructed a large collection of instances by varying p. In Table 2, we give
the values of p we have considered for each set, and the best MWP solution known associated with
each p.
All the best solutions known have been found during the elaboration of methods presented in
this paper; some have been reported earlier in Hansen, Mladenovic and Taillard (1996) or in Brim-
berg et al. (1997) for P654 and U1060. For P654, we were able to find the same best solution values as those reported by Brimberg et al. for p ≤ 60, and to find better values for p > 60; for U1060, we succeeded in improving all the best solution values, with the exception of one instance for which we got the same value.
The best solutions published in Brimberg et al. were obtained by considering more than 20 different
methods, and running each of them 10 times. This last reference also reports the optimum solution
values of smaller problem instances with 50 and 287 entities. We were able to find all these optimum
solution values with our CLS method. So, we conjecture that many of the solution values given in Table 2 for the smallest set of entities are optimal. For the larger sets, we think that small improvements can still be obtained.
The aim of Table 2 is to provide new MWP instances and to assess the absolute quality of our methods: indeed, we think that providing the relative quality (measured in per cent over the best solution
value of Table 2) allows comparisons to be made more easily than providing absolute solution
values. Sometimes, the best solutions known have been found by using sets of parameters for which
results are not reported in this paper and it would be difficult to estimate the effort needed to obtain
each best solution known. Consequently, we do not provide computing times in this table.
Our algorithms are implemented in C++ and run on a Silicon Graphics (SG) 195MHz workstation
with R10000 processor. In order to make fair comparisons with algorithms implemented by
other authors and executed on a different machine, we have sometimes used another computer,
clearly indicated in the tables that follow. It was not possible to report exhaustive numerical results
due to the large number of problem instances (160), problem types (PMP, MWP or SSC) and methods
(CLS, DEC and LOPT). We try to report representative results in a condensed form. However,
let us mention that the conclusions we draw for a given method for a problem type are generally
valid for another problem type.
4.2. ALT and CLS.
First, we want to show the efficiency of our CLS algorithm by comparing it to the results produced
by one of the best methods at the present time for the MWP: MWPM, an algorithm that first
solves a p-median problem exactly before optimally re-locating the centres in the continuous plane. This method is due to Cooper (1963) but was forgotten for a long time, before Hansen, Mladenovic and Taillard (1996) showed that it is in fact one of the most robust for small and medium size MWPs (see also Brimberg et al., 1997). We do not consider methods such as those of Bongartz, Calamai and
Conn (1994), which are too slow and produce solutions of too poor quality, or those of Chen (1983) or Murtagh and Niwattisyawong (1982), which are not competitive according to Bongartz et al. Also, we do not compare our results with the HACA algorithm of Moreno, Rodríguez and Jiménez (1990), for two reasons: first, the complexity of HACA is O(p^2 n) and it requires O(p^2) memory, i.e. O(n^3) in time and O(n^2) in memory if p grows linearly with n, which are clearly higher than those of our methods. Second, HACA produces solutions that are not as good as MWPM. Indeed, HACA first builds a heuristic
solution to the p-median instance associated to the MWP and then applies the ALT procedure to the
p-median solution. The reader is referred to Brimberg et al., 1997 for a unified comparison of a large
range of heuristic methods for the MWP.
To show the effects of the improvements of the ALT procedure proposed in this paper, we provide
the best solutions obtained over 100 repetitions of an old version of ALT that starts with different
initial solutions; this method is denoted MALT(100). The results for MALT(100) and MWPM
originate from Hansen, Mladenovic and Taillard (1996). In Table 3, we give the solution quality
(measured in per cent above the solution value given in Table 2) of MWPM, MALT(100), CLS(100) and CLS(1000), and their respective computing times (seconds on a Sun Sparc 10) for the MWP instances built on P654. The computing time of CLS(1000) is roughly 10 times that of CLS(100). We have averaged all these results over 10 independent runs of the algorithms. Where the 10 runs of CLS(100) find solution values identical to those given in Table 2, we provide in brackets the number of iterations
Table 3: Comparison of CLS(100) and CLS(1000) with MWPM and MALT(100) for MWP instances P654 (quality in % above best known; computing time in seconds on a Sun Sparc 10).
required by the worst run of CLS out of 10 to find the best solution known. From this table, we can
conclude that:
- The new ALT procedure runs 6 to 9 times faster than the old one (both MALT(100) and CLS(100) call an ALT procedure 100 times, the old one for MALT, the new one for CLS).
- CLS(100) provides much better solutions than MALT(100).
- CLS(1000) provides better solutions than MWPM, in a much shorter computing time.
As p grows, the solution quality of all the algorithms diminishes.
4.3. Decomposition methods DEC and LOPT.
As LOPT requires an initial solution in input, we indicate the performances of LOPT when
applied to the solution produced by the DEC procedure.
Table 4 compares CLS(1000), DEC(20, 50), LOPT(10, 50) and 3 VNS variants (due to Hansen and Mladenovic (1997)) for SSC instances built on the entity set U1060. This table gives the best solution value known (found with our methods), the solution quality of the methods (per cent over best known; the VNS results originate from Hansen and Mladenovic), and their respective computing times (seconds on a Sun Sparc 10 workstation). The computing time of VNS1 and VNS2 is 150 seconds
for all instances. VNS3 corresponds to the best over ten executions of VNS2; therefore, its
computing time is 1500 seconds. It is shown in Hansen and Mladenovic that all VNS variants are
more efficient than other methods of the literature, such as the k-means algorithm of Hartigan
(1975). For LOPT, we do not take into consideration the computing time of DEC to obtain the initial
solution. From this table, we can conclude:
- For small values of p, CLS provides better solutions than DEC and the VNSs.
- For the largest values of p, DEC produces fairly good solutions and their quality seems not to
decrease as p increases.
p | Best solution value known | Quality (% above best known): CLS DEC LOPT VNS1 VNS2 VNS3 | Computing time [s. Sparc 10]: CLS DEC LOPT
50 255509536.2 0.33 7.54 0.45 30.65 1.97 0.54 114.8 10.8 44.1
90 110456793.7 1.04 7.45 0.49 46.08 1.52 0.78 122.4 12.0 25.2
100 96330296.40 1.14 7.29 0.44 44.51 2.23 1.06 125.2 12.3 24.7
Table 4: Comparison of CLS(1000), DEC(20, 50), LOPT(10, 50) and various VNSs for SSC instances U1060.
- The solution quality of LOPT is always very good and seems to be somewhat correlated with the
initial solution quality (obtained here with DEC).
- Unexpectedly, the computing times of LOPT and DEC diminish as p increases; this is undoubtedly due to the small number of entities of U1060.
- LOPT produces better solutions than the VNSs in a much lower computation time.
Table 5 shows the effect of the parameters of DEC and LOPT by comparing DEC(20, 50), DEC(20, 200), LOPT(10, 50) (starting with the solution obtained by DEC(20, 50)) and LOPT(10, 200) (starting with the solution obtained by DEC(20, 200)). This table provides the solution quality and the computing times (seconds on SG) for the MWP instance CH2863; all the results
are averaged over 10 runs. From this table, we can conclude:
- For small values of p, the quality of DEC(20, 200) is slightly better than DEC(20, 50) but the
computing times are much higher.
- Starting with solutions of similar quality, LOPT(10, 50) and LOPT(10, 200) produce solutions of similar quality, but the computing time of LOPT(10, 200) is much higher.
- For larger values of p, the quality of DEC(20, 50) slightly decreases but the quality of LOPT remains almost constant.
- The computing times of DEC and LOPT diminish as p increases.
- LOPT greatly improves the solution quality obtained by DEC.
- The methods seem to be very robust, since they provide good results for instances with a very irregular distribution of dissimilarities.
Quality [%] Computation time [s. on SG]
20, 50 20, 200 10, 50 10, 200 20, 50 20, 200 10, 50 10, 200
100 3.4 3.2 0.28 0.20
28 160 56 183
170 4.4 4.1 0.24 0.14 22 126 43 163
190 4.8 4.3 0.34 0.20 24 132 45 157
43 149
300 5.3 4.3 0.47 0.20
400 5.1 4.0 0.63 0.38 17 103 36 108
500 4.8 3.4 1.02 0.39
700 5.8 3.7 1.48 0.52 19 106
900 6.6 4.0 1.59 0.71
1000 7.1 4.7 2.18 1.10 20 108 33 69
Table 5: Quality and computing time of DEC and LOPT for different parameter settings for MWP instance CH2863.
In tables 6 and 7, we compare DEC + LOPT to a fast variant of VNS, called RVNS, for SSC and
PMP instances built on entities set Pcb3038. RVNS results originate from Hansen and Mladenovic
(1997). We have adapted the LOPT parameters in order to get comparable computation times. For all
SSC, PMP and MWP instances, we succeeded in improving the best solutions published in this last
reference. In Table 6, we can see that DEC + LOPT is able to find better solutions than RVNS in
shorter computation times. For the PMP, RVNS seems to be faster than DEC and LOPT for the smallest numbers of centres. However, let us mention that our implementation derives directly from the MWP one and is not optimized for the PMP: for example, we do not compute the distances only once at the beginning of the execution and store them in a (very large) matrix. For large numbers of
centres, DEC + LOPT is again faster and better than RVNS.
In Table 8, we provide computational results for our methods DEC(20, 50), DEC(20, 200) and
LOPT(7, 50) (applied to the solution obtained with DEC(20, 200)) for MWP instances Brd14051
and Pla85900. We give the following data in this table: The number n of entities, the number p of
centres, the solution quality obtained by DEC and LOPT (per cent over best known), the respective
computing times (seconds on SG), and the proportion of subproblems solved by DEC and LOPT. For DEC, this proportion corresponds to the number of subproblems solved divided by t. For LOPT, this proportion corresponds to the number of subproblems solved divided by p. The results are averaged over 5 runs for Brd14051, and the methods were executed only once for Pla85900. From Table 8, we can conclude that:
- The solution quality provided by DEC slightly decreases as p increases; this is due mainly to the
decrease in the solution quality provided by the CLS procedure when solving the subproblems.
Quality [%] Time [s. Sparc 10]
200 21885997.1 2.44 5.55 0.90 159.9 48 44
300 13290304.8 2.50 7.22 1.44 229.3
400 9362179.2 3.35 7.56 1.70 165.0 43 27
500 7102678.4 2.85 7.47 1.73 204.4 43 25
Table 6: Comparison of DEC(20, 50), LOPT(6, 40) and RVNS for SSC instance Pcb3038.
Quality [%] Time [s. Sparc 10]
200 238432.02 1.23 4.12 0.74 107.6 106 187
500 135467.85 0.88 4.01 0.71 209.7 59 81
Table 7: Comparison of DEC(20, 50), LOPT(7, 50) and RVNS for PMP instance Pcb3038.
- The computing times of DEC and LOPT diminish as p increases. However, we can observe an increase in DEC computation times - as predicted by the complexity analysis - only for very large values of p. For LOPT we cannot observe such an increase, meaning that solving the subproblems takes more time than finding close centres for generating the subproblems.
- The solution quality of LOPT is very good, generally well below 1% over the value of the best
solution known.
- The proportion of subproblems solved by LOPT diminishes as p increases, showing that g is
smaller than 1.
- The proportion of subproblems solved by DEC seems to be constant, as assumed in the complexity
analysis of section 3.2.
5. CONCLUSIONS
In this article we have proposed three new methods for heuristically and rapidly solving centroid
clustering problems. First, we propose CLS, a candidate list search that rapidly produces good solutions
to problems with a moderate number p of centres. Second, we propose LOPT, a procedure that
locally optimizes the quality of a given solution. This method notably reduces the gap between the
initial solution and the best solution known. The third method proposed, DEC, is based on decomposing
the initial problem into subproblems. DEC and LOPT are well adapted to solve very large
p | Quality [%]: DEC(20,50) DEC(20,200) LOPT(7,50) | Time [s. SG]: DEC(20,50) DEC(20,200) LOPT(7,50) | Proportion: DEC(20,50) DEC(20,200) LOPT(7,50)
100 2.4 2.38 0.39 458 1931 3109 4.4 4.4 4.4
200 1.9 2.03 0.30 336 1379 1632 4.3 4.6 3.3
300 2.2 2.08 0.30 252 1073 1055 5.0 5.0 2.6
400 2.2 2.04 0.26 214 915 885 4.9 5.2 2.4
500 2.5 2.12 0.27 195 909 838 4.9 5.8 2.5
2.4 1.99 0.26 184 799 707 5.1 5.8 2.3
700 2.5 1.93 0.23 177 829 632 5.7 6.8 2.2
800 2.5 1.97 0.24 149 760 555 4.8 6.8 2.1
900 2.6 2.00 0.26 146 736 505 6.2 7.3 2.1
1000 3.0 2.17 0.31 129 667 437 4.6 6.6 2.0
2000 4.0 2.81 0.59 88 379 227 4.7 4.8 1.8
3000 4.7 3.65 1.15 82 347 173 4.6 4.9 1.8
5000 4.4 3.59 1.28 73 309 123 4.3 4.5 1.5
1000 1.78 1.53 0.09 3557 9415 7634 4.2 5.2 2.9
1500 1.97 1.70 0.17 3149 7885 5343 4.3 5.7 2.8
2000 1.81 1.46 0.12 2819 6923 4750 4.1 5.1 2.5
3000 1.74 1.30 0.10 2405 5959 4532 4.1 5.2 2.6
5000 1.87 1.41 0.00 2597 5214 3423 4.1 4.3 2.0
7000 2.03 1.50 0.13 2276 4770 2526 4.3 4.3 1.9
8000 2.07 1.53 0.03 2681 4685 2344 4.1 4.6 2.0
9000 2.55 1.67 0.16 2796 4658 1992 4.1 4.3 1.8
10000 2.78 1.80 0.16 2629 4863 1813 5.1 4.5 1.8
15000 3.71 2.61 0.58 3144 5242 1552 5.3 6.0 1.8
Table 8: Computational results for DEC and LOPT for MWP instances Brd14051 and Pla85900.
problems since their computing time increases more slowly with the number of entities than that of
other methods in the literature. These methods can solve problems whose size is many orders of magnitude larger than the problems treated up to now. Despite their speed, they produce solutions of good quality. The expected complexities of these procedures are given and experimentally verified on very large problem instances.
In fact, LOPT is a general optimization method that can be considered as a new meta-heuristic.
Indeed it can be adapted for solving any large optimization problem that can be decomposed into
independent sub-problems. LOPT has been shown to be very efficient for centroid clustering problems
and vehicle routing problems. Future work should consider applying LOPT to other combinatorial optimization problems.
The success of the methods presented in this paper could be explained as follows: solving problems
with a very limited number of centres (e. g. below 15) is generally an easy task. Thanks to the
use of an adequate neighbourhood, the CLS method allows problems up to 50-70 centres to be
treated in a satisfactory way. DEC treats the problem at a high level and is able to determine the general
structure of good solutions involving a large number of centres. Starting with a solution that has
a good structure, LOPT is able to find very good solutions using a very simple improving approach.
Therefore, it is interesting to remark that a very simple improvement scheme can lead to a very efficient method if an initial solution with a good structure can be identified and an efficient neighbourhood is used. Indeed, the quality of the solutions obtained by the DEC + LOPT method rivals what one would expect from more elaborate meta-heuristics such as genetic algorithms, tabu search or simulated annealing. The use of inadequate neighbourhood structures can explain the poor performance of previous implementations of such meta-heuristics. In summary, we can say that our methods open
new horizons in the solution of large and hard clustering problems.
Acknowledgements
The author would like to thank Fred Glover for suggesting numerous improvements, Ken Rosing
and Nicki Schraudolf for their constructive comments. This research was supported by the Swiss
National Science Foundation, project number 21-45653.95.
6.
--R
"A Note on Solving Large p-Median Problems"
"Identification of transshipment center locations"
"A projection method for l p norm location-allocation prob- lems"
"Improvements and Comparison of Heuristics for Solving the Multisource Weber Problem"
"Solution of minisum and minimax location-allocation problems with Euclidean distances"
"Location-allocation problems"
Network and Discrete Location
"Evaluation of a branch and bound algorithm for clustering"
"A quantitative model to plan regional health facility systems"
"A Simple Heuristic for the p-Centre Problem"
"A Method for Cluster Analisis"
"A dual-based procedure for uncapacitated facility location"
"Solving a large scale districting problem: a case report"
"Location-Allocation for Small Computers"
Classification: Methods for the Exploratory Analysis of Multivariate Data
"Tabu Search for the p-Median Problem"
"Optimum distribution of switching centers in a communication network and some related graph theoretic problems"
"An approach for solving a real-world facility location problem using digital map"
"A comparison of two dual-based procedures for solving the p-median prob- lem"
"Heuristic solution of the multisource Weber problem as a p-median problem"
"An introduction to Variable Neighborhood Search"
Clustering Algorithms
"Multidimensional Group Analysis"
"A Branch and bound clustering algorithm"
Extensions du problème de Weber
"Optimum producer services location"
"Some Methods for Classification and Analysis of Multivariate Observations"
Discrete Location Theory
"Heuristic cluster algorithm for multiple facility location-allocation problem"
"An efficient method for the multi-depot location-allocation problem"
Cours d'informatique générale
"TSPLIB95"
"An efficient tabu search procedure for the p-Median Prob- lem"
"An Optimal Method for Solving the (Generalized) Multi-Weber Porblem"
"The p-Median and its Linear Programming Relaxation: An Approach to Large Problems"
"Optimum positions for airports"
Dissection and Analysis (Theory
"Parallel iterative search methods for vehicle routing problems"
"A reverse elimination approach for the p-median problem"
"Hierarchical Grouping to Optimize an Objective Function"
"Sur le point pour lequel la somme des distances de n points donn-s est minimum"
"The Weber problem: history and perspectives"
"A fast algorithm for the greedy interchange for large-scale clustering and median location problems"
--TR
--CTR
Hongzhong Jia , Fernando Ordez , Maged M. Dessouky, Solution approaches for facility location of medical supplies for large-scale emergencies, Computers and Industrial Engineering, v.52 n.2, p.257-276, March, 2007
Mauricio G. C. Resende , Renato F. Werneck, A Hybrid Heuristic for the p-Median Problem, Journal of Heuristics, v.10 n.1, p.59-88, January 2004
Teodor Gabriel Crainic , Michel Gendreau , Pierre Hansen , Nenad Mladenovi, Cooperative Parallel Variable Neighborhood Search for the p-Median, Journal of Heuristics, v.10 n.3, p.293-314, May 2004 | multi-source Weber problem;sum-of-squares clustering;p-median;clustering;location-allocation |
608327 | Satellite Image Deblurring Using Complex Wavelet Packets. | The deconvolution of blurred and noisy satellite images is an ill-posed inverse problem. Direct inversion leads to unacceptable noise amplification. Usually the problem is regularized during the inversion process. Recently, new approaches have been proposed, in which a rough deconvolution is followed by noise filtering in the wavelet transform domain. Herein, we have developed this second solution, by thresholding the coefficients of a new complex wavelet packet transform; all the parameters are automatically estimated. The use of complex wavelet packets enables translational invariance and improves directional selectivity, while remaining of complexity O(N). A new hybrid thresholding technique leads to high quality results, which exhibit both correctly restored textures and a high SNR in homogeneous areas. Compared to previous algorithms, the proposed method is faster, rotationally invariant and better takes into account the directions of the details and textures of the image, improving restoration. The images deconvolved in this way can be used as they are (the restoration step proposed here can be inserted directly in the acquisition chain), and they can also provide a starting point for an adaptive regularization method, enabling one to obtain sharper edges. | Introduction
The problem presented here is the reconstruction of a satellite image from blurred and
noisy data.
The degradation model is represented by the equation:
Y = h ∗ X + N,                                   (1)
where Y is the observed data and X the original image. N is additive noise and is assumed to be Gaussian, white and stationary. The symbol ∗ represents a circular convolution. The Point
Spread Function (PSF) h is positive, and possesses the Shannon property. We deal with
a real satellite image deblurring problem, proposed by the French Space Agency (CNES).
This problem is part of a simulation of the future SPOT 5 satellite. The noise standard
deviation # and the PSF h are assumed known.
The deconvolution problem is ill-posed because of the noise, which contaminates the
data. The inversion process strongly amplifies the noise if no regularization is done.
Thus, we have to deconvolve the observed image while recovering the details, but without
amplifying the noise.
The process used to estimate X from the degraded data Y must preserve textures, to
enable the results to be visually correct. Moreover, the noise should remain small in
homogeneous regions. Many methods have been proposed for regularizing this problem
by introducing a priori constraints on the solution [4], [9], [23]. However, most of them do
not preserve textures, since these textures are not taken into account in the regularizing
model. To achieve a better deconvolution, other authors [1], [19], [21] prefer to use a
wavelet-based regularizing function. But all these approaches have the drawback that
they are iterative, which means that they are time consuming and not always appropriate
for deconvolving satellite data, where the images can be very large.
A few authors, such as Donoho et al. [6], and Mallat and Kalifa [16], proposed denoising
the image after a deconvolution without regularization. The images are represented using
a wavelet or wavelet packet basis, and the denoising process is done in this basis. This
method is not iterative and provides a very fast implementation.
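To make the "rough deconvolution followed by denoising" pipeline concrete, the following Python/NumPy sketch performs the unregularized inversion of Equ. (1) in the Fourier domain; the small epsilon guard against numerical zeros of the transfer function is our own safeguard and not part of the paper, which assumes H is invertible.

import numpy as np

def rough_deconvolution(y, h, eps=1e-12):
    # Unregularized inverse filtering: X_tilde = F^-1( F(Y) / F(h) ).
    H = np.fft.fft2(h, s=y.shape)                  # transfer function of the PSF (circular convolution)
    H = np.where(np.abs(H) < eps, eps, H)          # guard against numerical zeros (assumption)
    x_tilde = np.real(np.fft.ifft2(np.fft.fft2(y) / H))
    return x_tilde                                  # noisy deconvolved image, to be denoised in the CWP domain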
A simple inversion of the observation equation (1) in frequency space gives an unacceptably
noisy solution. To denoise it, a compact representation has to be chosen, in order
to separate the signal from the noise as well as possible. A representation is said to be
compact if it approximates the signal with a small number of parameters, which can be
the coe#cients of the decomposition in a given basis.
The noise amplified by the deconvolution process is colored. Furthermore, the coe#-
cients of this noise are not independent in the wavelet basis. Thus, the basis must adapt
to the covariance properties of the noise. The covariance should be nearly diagonal [15]
w.r.t. the basis, to decorrelate the noise coe#cients as much as possible. The Fourier basis
achieves such a diagonalization, but the energy of the signal is not concentrated over a
small number of coe#cients (the basis vectors are not spatially localized), so the Fourier
transform is not suitable for any thresholding method.
A good compromise is to use a wavelet packet basis [5], since it nearly realizes the
two following essential conditions, i.e. the signal representation is sparse, and the noise
covariance operator is nearly diagonalized [15].
Many types of wavelet transforms can be used to construct a packet basis; they exhibit different properties depending on their spatial or frequency localization, and on their separability w.r.t. rows and columns. Decimated real wavelet transforms are efficient for satellite image deconvolution but produce artefacts, since the transform is not shift invariant. To avoid these artefacts, the resulting image has to be averaged over all possible integer translations. It is also possible to use shift invariant transforms, but the redundancy is generally very high and depends on the depth of the transform. In any case, these two techniques have the major drawback of slowing down the algorithm.
The main motivation of our work is to solve this problem in a computationally e#cient
manner. There is a way to enable translation invariance without much loss of computational
time, by using complex wavelets [17], [18]. Such wavelets also provide a better
restoration by separating 6 directions, while real separable wavelets only take into account
two directions. In order to achieve the necessary near-diagonalization of the deconvolved
noise covariance, we have implemented a complex wavelet packet algorithm.
Our essential contributions are the following
1. We have designed a new transform, the complex wavelet packet transform, which has
better directional selectivity than the complex wavelet transform, while exhibiting the
same shift and rotational invariance properties.
2. The proposed algorithm is fully automatic, since it is based on a Bayesian approach,
where all the necessary parameters are estimated by Maximum Likelihood.
3. We have proposed a new hybrid technique, consisting of combining two di#erent meth-
ods, regularization and wavelet thresholding, to obtain optimal deconvolution results.
4. It performs the inversion much faster than shift invariant real transforms and reconstructs
features of various orientations better.
The paper is organized as follows. First, in section II, we detail how to compute the
proposed complex wavelet packet transform and the properties of this new transform. Then, in section III, we present the Bayesian thresholding framework used to estimate the unknown coefficients from the observed data, and explain how to compute the variance of the deconvolved noise. Sections III-A to III-C are devoted to the presentation of the different prior models put on the wavelet coefficients - homogeneous, noninformative and inhomogeneous priors. In section III-C.2, we detail the hybrid technique used to estimate
the adaptive parameters of the latter. This model is used in the algorithm proposed in
section IV. Finally, we present, in section V, a comparison with classical algorithms used
for satellite image deconvolution, to demonstrate the superiority of the proposed method.
II. Complex wavelet packets
To build a complex wavelet transform, Kingsbury [17] has developed a quad-tree al-
gorithm, by noting that an approximate shift invariance can be obtained with a real
biorthogonal transform by doubling the sampling rate at each scale. This is achieved by
computing 4 parallel wavelet trees, which are di#erently subsampled. Thus, the redundancy
is limited to 4, compared to real shift invariant transforms.
The shift invariance is perfect at level 1, and approximately achieved beyond this level: the transform algorithm is designed to optimize the translation invariance. Therefore, it involves two pairs of biorthogonal filters, odd, h_o and g_o, and even, h_e and g_e. At level 1, it is simply a non-decimated wavelet transform (using h_o and g_o), whose coefficients are re-ordered into 4 interleaved images by using their parity. This defines the 4 trees A, B, C and D. For j > 1, each tree is processed separately, as a real transform, with a combination of odd and even filters depending on each tree. The transform is achieved by a fast filter bank technique, of complexity O(N).
We have extended the original transform by applying the filters h and g on the detail
subbands, thus defining a complex wavelet packet (CWP) transform [13], [14]. This
new transform exhibits the same invariance properties as the original complex wavelet
transform. The tree corresponding to this transform is given in Fig. 1.
The filter bank used for the decomposition is illustrated by Fig. 2, on which the subbands
are indexed by (p, q) for each tree T . The impulse responses, shown in Fig. 3, and
the related partitioning of the frequency space given in Fig. 4, demonstrate the ability
to separate up to 26 directions for the chosen decomposition tree. Compared to real
separable transforms, which only define two directions (rows and columns), it provides
near rotational invariance and gives a selectivity which better represents strongly oriented
textures (thus separating them better from the noise). Furthermore, compared to the
original complex wavelet transform, which only separates 6 directions, the directional
selectivity is higher. Instead of dividing an approximation space which does not define any
new orientation, the wavelet packet decomposition processes the detail subbands, which
are strongly oriented. Each detail subband at level 1 isolates an area of the frequency
space defined by a mean direction and a dispersion, enabling one to select a range of directions around an orientation θ. If the subband is decomposed into 4 new subbands, it means that the corresponding frequency area splits into 4 new areas, which can define new orientations, as shown in Fig. 4.
This is not really a complex transform, since it is not based on a continuous complex
mother wavelet. Nevertheless, the quad-tree transform has in practice the same properties
as a complex transform w.r.t. shifting of the input image, and it is perfectly invertible and computationally efficient. The complex coefficients are obtained by combining the
different trees together. If we index the subbands by k, the detail subbands d^{j,k} of the parallel trees A, B, C and D are combined to form complex subbands z^{j,k}_+ and z^{j,k}_-, by a linear transform (represented by a matrix M), in the following way:
z^{j,k}_+ = (d^{j,k}_A - d^{j,k}_D) + i (d^{j,k}_B + d^{j,k}_C),   z^{j,k}_- = (d^{j,k}_A + d^{j,k}_D) + i (d^{j,k}_B - d^{j,k}_C).   (2)
Thresholding the magnitudes |z_±| without modifying the phase enables us to define a nearly shift invariant filtering method.
The reconstruction is done in each tree independently, by using the dual filters, and the results of the 4 trees are averaged to obtain the reconstructed image; this ensures the symmetry between the trees, thus enabling the desired shift invariance.
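The following Python sketch shows how the detail subbands of four parallel real trees could be combined into complex coefficients and how a magnitude can be soft-thresholded while keeping the phase; the sign pattern used in combine follows the usual dual-tree convention and is an assumption on our part, as is the soft-thresholding rule shown here.

import numpy as np

def combine(dA, dB, dC, dD):
    # Combine the 4 parallel-tree detail subbands into two complex subbands (assumed convention).
    z_plus = (dA - dD) + 1j * (dB + dC)
    z_minus = (dA + dD) + 1j * (dB - dC)
    return z_plus, z_minus

def soft_threshold_magnitude(z, t):
    # Shrink the magnitude of complex coefficients by t, keeping the phase unchanged.
    mag = np.abs(z)
    new_mag = np.maximum(mag - t, 0.0)
    scale = np.where(mag > 0, new_mag / np.maximum(mag, 1e-30), 0.0)
    return z * scale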
III. Bayesian thresholding
Let us denote by X̃ the deconvolved image without regularization. For each subband k, the variables x and θ denote one of the CWP coefficients corresponding respectively to the observation Y and the original image X. We suppose that H is invertible (i.e. it has no zeros in the Fourier space). Since the noise N is white and Gaussian, Equ. (1) multiplied by H^-1 gives, in the CWP domain:
x = θ + n.                                   (3)
To obtain the expression of the noise coefficients n, let us recall Equ. (2). Each complex coefficient is obtained by summing or subtracting the coefficients of the 4 trees A, B, C, D. If we compute the covariance between the real and imaginary parts, we find (for the coefficients z_+ for instance) a combination of the cross-tree covariances E[n_T n_T'], where n_A, n_B, n_C and n_D are the noise coefficients for each tree. By symmetry assumptions, this covariance is null, since all covariances E[n_T n_T'] between different trees T and T' are equal. Then, the distribution of the noise is defined as a joint distribution of (n_r, n_i).
We assume that the coefficients θ are independent in a given subband, between different
subbands of a given scale, and also between scales. This is an approximation which
enables a fast thresholding technique: we will not handle here possible correlations between
subbands or neighbour coefficients. The covariance matrix of the noise is supposed to be
nearly diagonal in the chosen basis (see Fig. 6 and Fig. 7), so we consider that the noise
variables are also independent in the wavelet packet basis.
We assume that the noise variance is constant in each subband k. We compute σ_k by
considering the undecimated transform of the noise N, which is performed by a linear
operator (convolution with the impulse response w_k, obtained by an inverse CWPT
of a Dirac), which leads to Equ. (4); F denotes the Fourier transform.
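The following is a hedged sketch of one plausible reading of Equ. (4): the deconvolved noise is H^{-1}N, so the variance of its projection onto subband k is obtained in the Fourier domain from |F[w_k]|² / |F[h]|². The normalization (a mean over frequencies) is an assumption, not taken from the paper.

import numpy as np

def subband_noise_std(w_k, h, sigma, shape):
    # Fourier-domain computation of the deconvolved noise level in subband k;
    # the exact constant in front is an assumption.
    Wk = np.fft.fft2(w_k, s=shape)      # impulse response of subband k
    H = np.fft.fft2(h, s=shape)         # blur transfer function (no zeros assumed)
    gain = np.mean(np.abs(Wk) ** 2 / np.abs(H) ** 2)
    return sigma * np.sqrt(gain)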
We estimate the unknown coefficients θ within a Bayesian framework [22]. We have
demonstrated in [14] that this approach provides slightly better results than the Minimax
risk calculus on real satellite data. To compute the MAP estimate of θ, we use Bayes'
law to calculate the expression of the posterior probability:

  P(θ | x) ∝ P(x | θ) P(θ),     (5)

where P(x | θ) is given by the distribution of the noise:

  P(x | θ) ∝ exp(−|x − θ|² / (2σ_k²)).     (6)
A. Homogeneous prior model
Generalized Gaussian distributions have been used to model real wavelet coefficients
[20], [22]. We also propose to use them to model the CWP subbands. We have the prior
probability of θ:

  P(θ) ∝ exp(−(|θ| / λ_k)^{p_k}),     (7)

where λ_k is a prior parameter and p_k is an exponent. However, we assume that the variables
θ are independent within a given subband, and also between the different subbands. As
shown by Fig. 5, the complex density is a bidimensional function which only depends on
the magnitude (it exhibits a radial symmetry). It is generally not separable for p ≠ 2. We
set a given density function on the magnitude, while the phase is uniformly distributed in [0, 2π].
The exponent p k can be set to a fixed value, the same for all subbands, to simplify the
computation. Indeed, if this parameter is not specified it must be estimated. This is quite
complex and is not justified by the improvement of the results.
On large size images, the parameter λ_k can be computed efficiently by various methods,
such as Maximum Likelihood for example. We choose to estimate it automatically from
the histogram of a given subband.
Then we obtain the following expression of the MAP, using the prior law (7) and the
noise distribution (6). It is possible to demonstrate that it is equivalent to apply a
thresholding operator φ_{p_k} to the magnitude of each coefficient [14]. For complex
wavelets, this shows that θ̂ is obtained from x by keeping the phase and thresholding the
magnitude. It reduces to simple soft thresholding if p_k = 1, i.e. the magnitude is shrunk
by a fixed amount when it exceeds the threshold and set to zero elsewhere. Thus, we have:

  θ̂ = (x / |x|) · max(|x| − T_k, 0)   for p_k = 1,

where T_k denotes the threshold of subband k.
This classical filtering method naturally arises from the Bayesian approach. The resulting
thresholding functions are smoother for p_k > 1 (p_k is the exponent of the prior law (7)),
and they become linear if p_k = 2. If p_k < 1, which is more realistic for satellite images, the
functions become discontinuous. Then, the thresholding functions φ_{p_k} are numerically
computed by solving equation (10) w.r.t. the magnitude. Fig. 8 shows the behaviour of these
functions for different values of p_k. Experimental studies have shown that this choice provides
an efficient model for satellite images [14].
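As an illustration, the sketch below implements such a thresholding operator. The scalar objective (m − r)²/(2σ²) + (r/λ)^p is an assumption consistent with the Gaussian noise model and a Generalized Gaussian prior on the magnitude; as in the paper, the general case is solved numerically, and p = 1 reduces to closed-form soft thresholding.

import numpy as np
from scipy.optimize import minimize_scalar

def gg_map_threshold(x, sigma, lam, p):
    # Keep the phase of each complex coefficient and shrink its magnitude by
    # minimizing a scalar MAP objective (assumed form, see lead-in above).
    m = np.abs(x)
    if p == 1.0:                          # closed form: soft thresholding
        r = np.maximum(m - sigma**2 / lam, 0.0)
    else:
        def shrink_one(mi):
            if mi == 0.0:
                return 0.0
            obj = lambda r: (mi - r)**2 / (2 * sigma**2) + (r / lam)**p
            res = minimize_scalar(obj, bounds=(0.0, mi), method='bounded')
            # keep the better of the interior minimum and r = 0 (two local
            # minima can exist when p < 1)
            return res.x if obj(res.x) <= obj(0.0) else 0.0
        r = np.vectorize(shrink_one)(m)
    return r * np.exp(1j * np.angle(x))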
B. Noninformative Jeffrey's prior
It is possible to use a different approach for subband modeling. This approach has
been proposed in [8] within a wavelet based image denoising framework. It is based on
the following assumption: the inference procedure should be invariant under changes of
amplitude and scale. It means that the prior probability law of θ must keep the same
behaviour even if it is rescaled. Since this is not the case
for classical Gaussian or Laplacian models, the author uses the following prior, which is
called a noninformative prior:

  P(θ) ∝ 1 / |θ|.     (12)

This corresponds to an extremely heavy-tailed distribution, which approximately describes
the original wavelet coefficients θ. Unfortunately it is improper: the resulting posterior
density function is not integrable.
Therefore, an alternative to the fully Bayesian framework is chosen. It consists of
treating the real or imaginary part of each coefficient as a zero-mean Gaussian variable, of
variance s². This defines an adaptive model, since each coefficient has a different variance.
Then, this variance is supposed to follow Jeffrey's hyperprior distribution, i.e. we have:

  P(s²) ∝ 1 / s².     (13)

This is equivalent to P(θ) ∝ 1/|θ| within a Bayesian context. Thus, the model remains
homogeneous, even though an adaptive Gaussian model is used intermediately to address
the problem of the improper distribution.
The estimation is then performed in two steps:
. estimate the variance ŝ² by using the MAP of P(s² | x);
. estimate the unknown coefficient by using the MAP of P(θ | x, ŝ²).
To express P(s² | x), we need P(x | s²). Since both signal and noise are
Gaussian, of respective variances s² and σ_k², we have
P(x | s²) ∝ (s² + σ_k²)^{-1} exp(−|x|² / (2(s² + σ_k²))).
We also have the hyperprior (13). Then we maximize P(s² | x) ∝ P(x | s²) P(s²) w.r.t. s²,
which is equivalent to

  ŝ² = max(0, |x|²/4 − σ_k²).     (14)

The MAP estimate θ̂ for the inhomogeneous Gaussian model gives the
following expression (see the next section for a complete proof):

  θ̂ = s² / (s² + σ_k²) · x.     (15)

By combining equations (14) and (15) we obtain the following estimate, which we call the
noninformative thresholding function φ_J:

  θ̂ = φ_J(x) = max(0, 1 − 4σ_k² / |x|²) · x.     (16)
The advantage of such a method is that there is no need for parameter estimation,
since there is no parameter. However, there is a drawback, which is especially visible in
homogeneous areas: the residual noise is quite apparent, its variance being higher than
with the previously presented homogeneous model. This probably comes from the lack of
robustness of the estimation method. We can remark that if we remove the prior law of s²,
the estimation of s² is done by the MLE, which gives a function like (16)
but with a threshold 2σ² instead of 4σ². This is insufficient since the magnitude of
the noise has a variance equal to 2σ². Thus, the hyperprior makes the estimation more
robust, by doubling the threshold value. It is still not sufficient to remove noise peaks
in large constant areas. Hereafter, we detail a more robust method to filter the complex
wavelet packet subbands.
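A sketch of the noninformative thresholding function φ_J follows, under the reconstruction of Equ. (16) used above (threshold 4σ_k² on |x|², as implied by the discussion of the MLE threshold 2σ² being doubled by the hyperprior).

import numpy as np

def jeffreys_threshold(x, sigma_k):
    # Attenuate each complex coefficient by (1 - 4*sigma_k^2/|x|^2)_+ ,
    # which keeps the phase of x.
    mag2 = np.abs(x) ** 2
    gain = np.maximum(0.0, 1.0 - 4.0 * sigma_k**2 / np.maximum(mag2, 1e-30))
    return gain * x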
C. Inhomogeneous prior model
C.1 The model
The Generalized Gaussian model presented previously has been chosen because it seems
to match correctly the original coefficient distribution, which is heavy-tailed. Another
possible way to capture this property is to define an inhomogeneous model, which adapts
to the local characteristics of the subbands. Some approaches to spatial adaptivity in the
real wavelet domain can be found in the literature, for example [3].
To simplify, let us choose a Gaussian model. The variance parameter is different for
each coefficient, which enables us to differentiate edges or textures, which have a high
intensity, from the homogeneous areas which generally correspond to very low values of
the coefficients.
Since the parameters can be very different from one variable to another, the histogram
of a subband can have a heavy-tailed behaviour, even if the distribution of each variable
is Gaussian.
We denote by s² the variance of the real or imaginary part of an original coefficient θ,
as before. We have the prior law:

  P(θ) ∝ (1/s²) exp(−|θ|² / (2s²)).     (17)

If the parameters of the prior distribution are known, the unknown variables are estimated
by computing the MAP. This is a fully Bayesian technique (we use the property (5)). Recall
the expression of the noise distribution (6), and combine it with Equ. (17) to obtain the
posterior; the MAP is given by its maximizer.
To compute the minimum of the corresponding energy, differentiate w.r.t. the real and the
imaginary parts of θ. This gives two equations which are recombined to form an equation
with complex numbers, which gives the inhomogeneous MAP estimate:

  θ̂ = s² / (s² + σ_k²) · x.     (19)
C.2 Adaptive parameter estimation
The most difficult problem in this approach is to estimate the adaptive parameters of
the model. As is shown in [12], the MLE is not robust when applied to incomplete data
(i.e. when the estimation is made on noisy data). Indeed, there are as many parameters as
observed data. But the robustness becomes sufficient when the estimation is made from
the original image X. A good approximation of this image is still sufficient to provide
useful parameter estimates. Here, we use this complete data approach to estimate the
variances s².
Consider Equ. (17). The complete data MLE is defined by ŝ² = arg max_{s²} P(θ | s²).
Assuming independence, we obtain:

  ŝ² = |θ|² / 2,     (20)

where the factor 2 comes from the dimensionality of the distribution.
We obviously do not have access to the original coefficients, which is why we take the
transform coefficients of an approximate original image instead. Experiments have shown
that a satisfactory approximation is provided by a nonlinear regularizing algorithm, such
as RHEA, detailed in [11]. It essentially consists of a variational method based on φ-
functions [4] (minimization of a criterion which penalizes noisy solutions, but preserves
edges) preceded by an automatic parameter estimation step to compute the hyperparameters
of the regularizing model.
This method is certainly not perfect and some residual noise remains. It is visible
in constant areas. However, we filter this noise as well as the deconvolved noise by a
thresholding technique. We choose to use the noninformative threshold of the previous
section, because it does not require any additional parameter estimation.
The proposed algorithm consists of obtaining the desired approximate original image
using RHEA [11], filtering the CWPT of the result using Equ. (16), estimating the adaptive
parameters using the complete data MLE with Equ. (20), and then estimating the
unknown coefficients by computing the MAP by Equ. (19).
In addition to the computation of the deconvolved noise variance, we also need to
compute the variance 2σ̃_k² of the residual noise of the approximate image.
Let us denote by θ̃ the thresholded transform coefficients of the approximate original
image. θ̃ is supposed to be sufficiently exempt from noise and to contain sufficient information
to enable texture and edge recovery. Homogeneous areas and edges are fine, since
we have used an edge-preserving method followed by an efficient noise thresholding. But
we still have to explain why the coefficients related to textured areas are sufficiently high
to avoid too strong an attenuation by using Equ. (19) in these areas. The variational
method used does not completely remove the textures, and even if they are visually not
very sharp, they are sufficiently present in the approximate image to enable a correct
reconstruction using the method detailed here.
Finally, if θ̃ is known, by using Equ. (19) and (20), the estimate for the coefficient is:

  θ̂ = |θ̃|² / (|θ̃|² + 2σ_k²) · x.     (21)
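The sketch below plugs the complete-data MLE of Equ. (20) into the Wiener-like MAP of Equ. (19), giving the attenuation of Equ. (21) as reconstructed above; it is an illustration of that formula, not code from the paper.

import numpy as np

def adaptive_map(x, theta_tilde, sigma_k):
    # Per-coefficient variance proxy from the approximate image, then
    # Wiener-like attenuation of the noisy deconvolved coefficient x.
    m2 = np.abs(theta_tilde) ** 2
    gain = m2 / (m2 + 2.0 * sigma_k**2)
    return gain * x                     # keeps the phase of x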
If we compute the expression of the thresholding function which minimizes the Bayesian
risk of the estimator θ̂, defined by E[|θ̂ − θ|²] (where E denotes an expectation w.r.t. the
distribution of x), we find that it has exactly the same form as Equ. (21), if we take the
approximate original coefficient θ̃ instead of θ. Both Bayesian and minimum risk methods
lead to the same expression,
which means that the chosen model is good, since the corresponding estimator provides
the minimum risk.
As in the case of Wiener filtering [10], computing the MAP under Gaussian assumptions
(both signal and noise have Gaussian distributions) is equivalent to minimizing the risk
of a linear estimator w.r.t. the attenuation factor. Therefore, the two approaches are
equivalent in the Gaussian case.
IV. The deconvolution algorithm
The adaptive model described by Equ. (17) provides much better results (visually and
w.r.t. SNR) than the homogeneous model described by Equ. (7). Details are better
preserved and constant areas are cleaner. That is why we keep this model for the final
version of the proposed algorithm. We have also compared this scheme with the classical
approach [6] and with minimum risk computation for various models [14]. It
consistently exhibits better results.
The initial deconvolution is made in the Cosine Transform space instead of the Fourier
space to avoid artefacts near the borders of the image. The proposed algorithm is called
COWPATH, for COmplex Wavelet Packet Automatic THresholding, and consists of the
following steps (see the corresponding figure; a schematic code sketch is given after the list):
. DCT (Discrete Cosine Transform) of the observation Y
. Deconvolution: divide by F[h] (in practice, divide by F[h] + ε for some small ε, since
some of the coefficients of F[h] can be null)
. Inverse DCT of the result, which gives X̃
. CWP transform of X̃
. Computation of the approximate original image X̄: apply the RHEA algorithm [11] on
Y (nonlinear regularization, with automatic parameter estimation)
. CWP transform of X̄
. Computation of σ_k using the known h and σ (see Equ. (4))
. Computation of σ̃_k (residual noise on the approximate original image) using the known
h and σ (see [14] for details)
. Thresholding of the approximate image coefficients using Jeffrey's noninformative prior
and σ̃_k, which gives θ̃
. Estimation of the parameters ŝ of the inhomogeneous Gaussian model (see Equ. (20))
. Coefficient thresholding by computing the MAP (see Equ. (21))
. Inverse CWP transform, which gives the estimate X̂.
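The following is a schematic sketch of how these steps fit together, not the authors' implementation. The complex wavelet packet transform, its inverse, the RHEA regularization and the per-subband noise computations are passed in as callables, since the paper does not provide code for them; the jeffreys_threshold and adaptive_map helpers are the hedged sketches given earlier.

import numpy as np

def cowpath(Y, h, sigma, cwpt, icwpt, rhea, noise_std, residual_noise_std, eps=1e-3):
    # Rough deconvolution (shown here in the Fourier domain for brevity; the
    # paper uses the DCT to limit border artefacts).
    H = np.fft.fft2(h, s=Y.shape)
    X_dec = np.real(np.fft.ifft2(np.fft.fft2(Y) / (H + eps)))
    X_approx = rhea(Y, h, sigma)                 # approximate original image
    x_sub, a_sub = cwpt(X_dec), cwpt(X_approx)   # dicts: subband index -> array
    theta_hat = {}
    for k in x_sub:
        sk = noise_std(k, h, sigma)                      # deconvolved noise, Equ. (4)
        sk_res = residual_noise_std(k, h, sigma)         # residual noise level
        theta_tilde = jeffreys_threshold(a_sub[k], sk_res)       # Equ. (16)
        theta_hat[k] = adaptive_map(x_sub[k], theta_tilde, sk)   # Equ. (20)-(21)
    return icwpt(theta_hat)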
The variance of the residual noise in homogeneous areas, which is needed to denoise the
approximate original image before using this image for parameter estimation, is computed
in the same manner as the variance of the deconvolved noise (i.e. using an equation
similar to Equ. (4)). For this computation, we assume that constant regions correspond
to a quadratic regularization. Then it is possible to use sums in the Fourier space. We
refer to [14] for more details.
The main characteristic of this algorithm is the use of two different methods, regularization
and wavelet thresholding, to obtain a hybrid technique whose results are better than
the results of a single regularization method or a wavelet thresholding. The more decorrelated
the deconvolved noise and the residual noise of the approximate original image, the
higher the quality of the deconvolved image.
It is possible to replace the nonquadratic regularizing model of the RHEA algorithm by
a simple quadratic model. The advantages are to enable a single step deconvolution in the
frequency space and to make the parameter estimation step deterministic and fast (in the
nonquadratic case we need a MCMC method [11]). The edges are not as sharp as with
the nonquadratic model, but this approximate image is su#ciently accurate to provide
a correct estimation of the inhomogeneous parameters of the subband model (the SNR
difference between the original and accelerated algorithms is about 0.1 dB for the SPOT
5 simulation image shown in Fig. 13).
The complexity of the algorithm is 115 log N operations per pixel (1980
op/pix for a 512 × 512 image). If we use the accelerated version, based on the previously
described quadratic model, it is 15 log N operations per pixel (for a 512 × 512
image, this represents about 4 s on a Pentium II 400 MHz machine).
V. Satellite image deblurring
A. Simulation
Fig. 10 shows a 128 × 128 area extracted from the original image of Nîmes (SPOT 5
simulation at 2.5 m resolution, provided by the French Space Agency (CNES)). Fig. 11
shows the PSF and Fig. 12 shows the observed image Y.
Fig. 13 shows the image deconvolved with the proposed algorithm.
B. Comparison with different methods
1. Quadratic regularization [23]. This is nearly equivalent to the parametric Wiener filter
which gives the same results. It is also equivalent to isotropic diffusion. The edges
are filtered as well as the noise, as seen in Fig. 14. It is therefore impossible to obtain
sharp details and noise-free homogeneous areas at the same time. Thus, the SNR remains
low (about 19.7 dB) because of insufficient noise removal in these areas.
2. Nonquadratic regularization [4], [9]. The resulting image of the RHEA algorithm [11]
exhibits sharp edges, compared to the result shown previously. However, some noise
remains in homogeneous regions and textures are attenuated. The SNR is 22.0 dB. This
result is used as the approximate original image and is illustrated by Fig. 15.
3. Real wavelet packets [15]. The proposed complex wavelet packet transform is more than
two times faster than the shift invariant real wavelet packet transform (based on Symmlet-4
wavelets) and is much more directionally selective. Real wavelet packet thresholding gives
an SNR equal to 21.8 dB. See Fig. 16 for an illustration.
4. The proposed method. This is faster than the other methods, and provides the highest
SNR (22.2 dB). The textures and the oriented features are sharp and regular, while the
homogeneous regions remain noise-free, as seen in Fig. 13.
VI. Conclusion
We have proposed a new complex wavelet packet transform to make an efficient satellite
image deconvolution algorithm. This transform exhibits better directional and shift
invariance properties than real wavelet packet transforms, for a lower computational cost.
The proposed deconvolution method is superior to other competing algorithms on satellite
images: it is faster, more accurate and fully automatic. The essential novelty of
the proposed algorithm consists of a hybrid approach, in which two radically different
methods are combined to produce a deconvolved image of higher quality than the result
of each method individually. Finally, if a quadratic model is used, the speed is greatly
increased, which opens the path to real time processing of image sequences or to onboard
image processing in satellites by using specialized chips.
Furthermore, this new type of approach can be extended and the results can be improved
to handle more difficult cases (different types of blur and higher noise variance). It is
possible to take into account various levels of noise and different convolution kernels.
Indeed, the wavelet packet decomposition tree is not unique and should be adapted to the
noise statistics.
The case of noninvertible blur, such as motion blur, forbids the use of nonregularized
inversion in the Fourier domain. Therefore, in this case, a regularized deconvolution should
replace the rough inversion used in the proposed method.
An improvement of the adaptive Gaussian model could also be provided by choosing a
more accurate distribution, to capture the heavy-tailed distribution of the wavelet packet
coe#cients. For example, adaptive Generalized Gaussian models could be investigated.
Furthermore, including dependence between different scales by means of multiscale hidden
Markov trees could perhaps enable better separation of the small features from the deconvolved
noise. These types of models would certainly better model edges which propagate
across scales and enable their reconstruction more e#ciently.
VII. Acknowledgements
The authors would like to thank Jérôme Kalifa (from CMAPX, at Ecole Polytechnique)
for interesting discussions and Nick Kingsbury (from the Signal Processing Group, Dept.
of Eng., University of Cambridge) for the complex wavelets source code and collaboration,
Peter de Rivaz (same institution) for his kind remarks, and the French Space Agency
(CNES) for providing the image of Nîmes (SPOT 5 simulation).
--R
Wavelet domain image restoration using edge preserving prior models.
Bayesian Theory.
Spatial adaptive wavelet thresholding for image denoising.
Wavelet analysis and signal processing
Denoising by soft thresholding.
spatial adaptation via wavelet shrinkage.
Bayesian wavelet-based image estimation using noninformative priors
Stochastic Relaxation
Image restoration by the method of least squares.
Restauration minimax et déconvolution dans une base d'ondelettes miroirs
Wavelet packet deconvolutions.
The dual-tree complex wavelet transform: a new efficient tool for image restoration and enhancement
"Wavelets: the key to intermittent information?"
A theory for multiresolution signal decomposition: the wavelet representation.
A wavelet regularization method for di
Bayesian inference in wavelet based methods
Regularization of incorrectly posed problems.
--TR | complex wavelet packets;satellite and aerial images;deblurring;bayesian estimation |
608350 | PAC-Bayesian Stochastic Model Selection. | PAC-Bayesian learning methods combine the informative priors of Bayesian methods with distribution-free PAC guarantees. Stochastic model selection predicts a class label by stochastically sampling a classifier according to a posterior distribution on classifiers. This paper gives a PAC-Bayesian performance guarantee for stochastic model selection that is superior to analogous guarantees for deterministic model selection. The guarantee is stated in terms of the training error of the stochastic classifier and the KL-divergence of the posterior from the prior. It is shown that the posterior optimizing the performance guarantee is a Gibbs distribution. Simpler posterior distributions are also derived that have nearly optimal performance guarantees. | INTRODUCTION
A PAC-Bayesian approach to machine learning attempts to combine the
advantages of both PAC and Bayesian approaches [20, 15]. The Bayesian
approach has the advantage of using arbitrary domain knowledge in the
form of a Bayesian prior. The PAC approach has the advantage that one
can prove guarantees for generalization error without assuming the truth of
the prior. A PAC-Bayesian approach bases the bias of the learning algorithm
on an arbitrary prior distribution, thus allowing the incorporation of domain
knowledge, and yet provides a guarantee on generalization error that is
independent of any truth of the prior.
PAC-Bayesian approaches are related to structural risk minimization
(SRM) [11]. Here we interpret this broadly as describing any learning algorithm
optimizing a tradeoff between the "complexity", "structure", or "prior
probability" of the concept or model and the "goodness of fit", "description
length", or "likelihood" of the training data. Under this interpretation of
SRM, Bayesian algorithms that select a concept of maximum posterior probability
(MAP algorithms) are viewed as a kind of SRM algorithm. Various
approaches to SRM are compared both theoretically and experimentally by
Kearns et al. in [11]. They give experimental evidence that Bayesian and
MDL algorithms tend to over fit in experimental settings where the Bayesian
assumptions fail. A PAC-Bayesian approach uses a prior distribution analogous
to that used in MAP or MDL but provides a theoretical guarantee
against over fitting independent of the truth of the prior.
Perhaps the simplest example of a PAC-Bayesian theorem is noted in
[15]. Consider a countable class of concepts f_1, f_2, f_3, ..., where each concept
f_i is a mapping from a set X to the two-valued set {0, 1}. Let P be an
arbitrary "prior" probability distribution on these functions. Let D be any
probability distribution on pairs ⟨x, y⟩ with x ∈ X and y ∈ {0, 1}. We do
not assume any relation between P and D. Define ε(f_i) to be the error rate
of f_i, i.e., the probability over selecting ⟨x, y⟩ according to D that f_i(x) ≠ y.
Let S be a sample of m pairs drawn independently according to D and define ε̂(f_i)
to be the fraction of pairs ⟨x, y⟩ in S for which f_i(x) ≠ y. Here ε̂(f_i) is
a measure of how well f_i fits the training data and log(1/P(f_i)) can be viewed as
the "description length" of the concept f_i. It is noted in [15] that a simple
combination of Chernoff and union bounds yields that with probability at
least 1 − δ over the choice of the sample S we have the following for all f_i.

  ε(f_i) ≤ ε̂(f_i) + √( (ln(1/P(f_i)) + ln(1/δ)) / (2m) )     (1)
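As a small illustration of formula (1), the sketch below computes the bound for each concept from its prior weight, its empirical error rate, the sample size m and the confidence parameter δ. All inputs are made-up example values.

import numpy as np

def occam_bound(prior, emp_err, m, delta):
    # Right-hand side of formula (1) for each concept f_i.
    prior, emp_err = np.asarray(prior), np.asarray(emp_err)
    return emp_err + np.sqrt((np.log(1.0 / prior) + np.log(1.0 / delta)) / (2 * m))

# Example: three concepts with prior 1/2, 1/4, 1/4 and a sample of m = 1000.
print(occam_bound([0.5, 0.25, 0.25], [0.10, 0.07, 0.20], m=1000, delta=0.05))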
This inequality justifies a concept selection algorithm which selects f to be
the f i minimizing the description-length vs. goodness-of-fit tradeoff in the
right hand side. If there happens to be a low-description-length concept that
fits well, the algorithm will perform well. If, however, all simple concepts fit
poorly, the performance guarantee is poor. So in practice the probabilities P(f_i)
should be arranged so that concepts which are a priori viewed as likely
to fit well are given high probability. Domain specific knowledge can be used
in selecting the distribution P . This is precisely the sense in which P is
analogous to a Bayesian prior - a concept f i that is likely to fit well should
be given high "prior probability" P (f i ). Note, however, that the inequality
(1) holds independent of any assumption about the relation between the
distributions P and D.
Formula (1) is for model selection - algorithms that select a single
model or concept. However, model selection is inferior to model averaging
in certain applications. For example, in statistical language modeling for
speech recognition one "smoothes" a trigram model with a bigram model
and smoothes the bigram model with a unigram model. This smoothing is
essential for minimizing the cross entropy between, say, the model and a test
corpus of newspaper sentences. It turns out that smoothing in statistical
language modeling is more naturally formulated as model averaging than as
model selection. A smoothed language model is very large - it contains
a full trigram model, a full bigram model and a full unigram model as
parts. If one uses MDL to select the structure of a language model, selecting
model parameters with maximum likelihood, the resulting structure is much
smaller than that of a smoothed trigram model. Furthermore, the MDL
model performs quite badly. A smoothed trigram model can be theoretically
derived as a compact representation of a Bayesian mixture of an exponential
number of (smaller) suffix tree models [18].
Model averaging can also be applied to decision trees that produce probabilities
at their leaves rather than hard classifications. A common method
of constructing decision trees is to first build an overly large tree which
over fits the training data and then prune the tree in some way so as to
get a smaller tree that does not over fit the data [19, 10]. For trees with
probabilities at leaves, an alternative is to construct a weighted mixture of
the subtrees of the original over fit tree. It is possible to construct a concise
representation of a weighting over exponentially many different subtrees
[3, 17, 9].
This paper is about stochastic model selection - algorithms that stochastically
select a model according to a "posterior distribution" on the models.
Stochastic model selection seems intermediate between model selection and
model averaging - like model averaging it is based on a posterior distribution
over models but it uses that distribution differently. Model averaging
deterministically picks the value favored by a majority of models as
weighted by the posterior. Stochastic model selection stochastically picks a
single model according to the posterior distribution. The first main result of
this paper is a bound on the performance of stochastic model selection that
improves on (1) - stochastic model selection can be given better guarantees
than deterministic model selection. Intuitively, model averaging should
perform even better than stochastic model selection. But proving a PAC
guarantee for model averaging superior to the PAC guarantees given here
for stochastic model selection remains an open problem.
This paper also investigates the nature of the posterior distribution providing
the best performance guarantee for stochastic model selection. It is
shown that the optimal posterior is a Gibbs distribution. However, it is
also shown that simpler posterior distributions are nearly optimal. Section 2
gives statements of the main results of this paper. Section 3 relates these
results to previous work. The remaining sections present proofs.
2 Summary of the Main Results
Formula (1) applies to a countable class of concepts. It turns out that the
guarantees on stochastic model selection hold for continuous classes as well,
e.g., concepts with real-valued parameters. Here we assume a prior probability
measure P on a possibly uncountable (continuous) concept class C and
a sampling distribution D on a possibly uncountable set of instances X . We
also assume a measurable loss function l such that for any concept c and instance
x we have l(c; x) 2 [0; 1]. For example, we might have that concepts
are predicates on instances and there is a target concept c_t such that l(c, x)
is 1 if c(x) ≠ c_t(x) and 0 otherwise. We define l(c) to be the expectation
over sampling an instance x of l(c, x), i.e., E_{x∼D} l(c, x). We let S range over
samples of m instances each drawn independently according to distribution
D. We define l̂(c, S) to be (1/m) Σ_{x∈S} l(c, x). If Q is a probability measure
on concepts then l(Q) denotes E_{c∼Q} l(c) and l̂(Q, S) denotes E_{c∼Q} l̂(c, S).
The notation ∀^δ S Φ(S) signifies that the probability over the generation of
the sample S of Φ(S) is at least 1 − δ. For countable concept classes formula
(1) generalizes as follows to any loss function l with values in [0, 1].

Lemma 1 (McAllester98) For any probability distribution P on a countable
rule class C we have the following.

  ∀^δ S  ∀c:  l(c) ≤ l̂(c, S) + √( (ln(1/P(c)) + ln(1/δ)) / (2m) )
selects the concept c minimizing the SRM tradeoff in the right hand side
of the inequality. The first main result of this paper is a generalization of
(1) to a uniform statement over distributions on an arbitrary concept class.
The new bound involves the Kullback-Leibler divergence, denoted D(QjjP ),
from distribution Q to distribution P . The quantity D(QjjP ) is defined to
be
dP (c)
. The following is the first main result of this paper and is
proved in section 4.
Theorem 1 For any probability distribution (measure) on a possibly uncountable
set C and any measurable loss function l we have the following
where Q ranges over all distributions (measures) on C.
8
s
Note that the definition of l(Q), namely E cQ l(c), is the average loss
of a stochastic model selection algorithm that makes a prediction by first
selecting c according to distribution Q. So we can interpret theorem 1 as a
bound on the loss of a stochastic model selection algorithm using posterior Q.
In the case of a countable concept class where Q is concentrated on the single
concept c the quantity D(QjjP ) equals
(c) and, for large m, theorem 1
is essentially the same as lemma 1. But theorem 1 is considerably stronger
than lemma 1 in that it handles the case of uncountable (continuous) concept
classes. Even for countable classes theorem 1 can lead to a better guarantee
than lemma 1 if the posterior Q is spread over exponentially many different
models having similar empirical error rates. This might occur, for example,
in mixtures of decision trees as constructed in [3, 17, 9].
The second main result of this paper is that the posterior distribution
minimizing the error rate bound given in theorem 1 is a Gibbs distribution.
For any value of fi 0 we define Q fi to be the posterior distribution defined
as follows where Z is a normalizing constant.
Z
For any posterior distribution Q define B(Q) as follows.
s
The second main result of the paper is the following.
Theorem 2 If C is finite then there exists fi 0 such that Q fi is optimal,
i.e., B(Q fi ) B(Q) for all Q, and where fi satisfies the following.
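As a concrete illustration of the Gibbs posterior Q_β and of the bound B(Q), the sketch below evaluates B(Q_β) on a small finite concept class over a grid of β values and reports the minimizer. The prior weights and empirical losses are made-up example values, B is the bound as reconstructed above, and the grid search merely illustrates the tuning of β; it is not the characterization of β given in the theorem.

import numpy as np

def gibbs_posterior(prior, emp_loss, beta):
    w = prior * np.exp(-beta * emp_loss)
    return w / w.sum()

def B(Q, prior, emp_loss, m, delta):
    kl = np.sum(Q * np.log(Q / prior))          # D(Q||P) on a finite class
    return Q @ emp_loss + np.sqrt((kl + np.log(1/delta) + np.log(m) + 2) / (2*m - 1))

prior = np.array([0.5, 0.3, 0.2])               # made-up prior P
emp_loss = np.array([0.20, 0.10, 0.05])         # made-up empirical losses
m, delta = 500, 0.05
betas = np.linspace(0.0, 200.0, 2001)
scores = [B(gibbs_posterior(prior, emp_loss, b), prior, emp_loss, m, delta) for b in betas]
print(betas[int(np.argmin(scores))], min(scores))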
Unfortunately, there can be multiple local minima in B(Q fi ) as a function
of fi and even multiple local minima satisfying (2). Fortunately, simpler
posterior distributions achieve nearly optimal performance. To simplify the
discussion we consider parameterized concept classes where each concept is
specified by a parameter vector \Theta 2 R n . Let l(\Theta; x) be the loss of the
concept named by parameter vector \Theta on the data point x (as discussed
above). To further simplify the analysis we assume that for any given x we
have that l(\Theta; x) is a continuous function of \Theta. For example, we might take
\Theta to be the coefficients of an nth order polynomial p \Theta and take l(\Theta; x) to
be max(1; ffjp \Theta (x) \Gamma f(x)j) where f(x) is a fixed target function and ff is a
fixed parameter of the loss function. Note that a two valued loss function
can not be a continuous function of \Theta unless the prediction is independent of
\Theta. Now consider a sample S consisting of m data points. These data points
define an empirical loss " l(\Theta) for each parameter vector \Theta. This empirical
loss is an average of a finite number of expressions of the form l(\Theta; x) and
hence " l(\Theta) must be a continuous function of \Theta. Assuming that the prior on
\Theta is given by a continuous density we then get that there exists a continuous
density p( " l) on empirical errors satisfying the following where P (U) denotes
the measure of a subset U of the concepts according to the prior measure
on concepts.
x
The second main result of the paper can be summarized as the following
approximate equation where B(Q ) denotes inf Q B(Q).
This approximate inequality is justified by the two theorems stated below.
Before stating the formal theorems, however, it is interesting to compare
(3) with lemma 1. For a countable concept class we can define c to be the
concept minimizing the bound in lemma 1. For large m, lemma 1 can be
interpreted as follows.
s
Clearly there is a structural similarity between (4) and (3). However, the
two formulas are fundamentally different in that (3) applies to continuous
concept densities while (4) only applies to countable concept classes.
Another contribution of this paper is theorems giving upper and lower
bounds on B(Q ) justifying (3). First we give a simple posterior distribution
which nearly achieves the performance of (3). Define " l as follows.
Define the posterior distribution Q( " l ) as follows where Z is a normalizing
constant.
We now have the following theorem.
Theorem 3 For any prior (probability measure) on a concept class where
each concept is named by a vector \Theta 2 R n and any sample of m instances, if
the loss function l(\Theta; x) is always in the interval [0; 1] and is continuous in
\Theta, the prior on \Theta is a continuous probability density on R n
and the density p( " l) is non-decreasing over the interval
we have the following.
All of the assumptions used in theorem 3 are quite mild. The final assumption
that the density p( " l) is nondecreasing over the interval defining Q( " l )
is justified by fact that the definition of " l implies that for any differentiable
density function p( " l) we must have that the density p( " l) is increasing at the
Finally we show that Q( " l ) is a nearly optimal posterior.
Theorem 4 For any prior (probability measure) on a concept class where
each concept is named by a vector \Theta 2 R n and any sample of m instances,
if the loss function l(\Theta; x) is always in the interval [0; 1] and is continuous
in \Theta, and the prior on \Theta is a continuous probability density on R n , then we
have the following for any posterior Q.
3 Related Work
A model selection guarantee very similar to (1) has been given by Barron
[1]. Assume concepts f 1 , f 2 , f 3 true and empirical error rates ffl(f i )
and "ffl(f i ) as in (1). Let f be defined as follows.
s
For the case of error rates (also known as 0-1 loss) Barron's theorem reduces
to the following.
s
There are several differences between (1) and (5). When discussing (1) I
will take f to be the concept f i minimizing the right hand side of (1) which
is nearly the same as the definition of f in (5). Formula (1) implies the
following.
8
s
Note that (5) bounds the expectation of ffl(f ) while (1) is a large deviation
result - it gives a bound on ffl(f ) as a function of the desired confidence
level ffi. Also note that (1) provides a bound on ffl(f ) in terms of information
available in the sample while (5) provides a bound on (the expectation of)
ffl(f ) in terms of the unknown quantities ffl(f i ). This means that a learning
algorithm based on (1) can output a performance guarantee along with the
selected concept. This is true even if the concept is selected by incomplete
search over the concept space and hence is different from f . No such
guarantee can be computed from (5). If a bound in terms of the unknown
quantities ffl(f i ) is desired, the proof method used to prove (1) yields the
following.
8
Also note that (5), like (1) but unlike theorem 1, is vacuous for continuous
concept classes.
Various other model selection results similar to (1) have appeared in the
literature. A guarantee involving the index of a concept in an arbitrary
given sequence of concepts is given in [12]. A bound based on the index
of a concept class in a sequence of classes of increasing VC dimension is
given in [14]. Neither of these bounds handle an arbitrary prior distribution
on concepts. They do, however, give PAC SRM performance guarantees
involving some form of prior knowledge (learning bias).
Guarantees for model selection algorithms for density estimation have
been given by Yamanishi [21] and Barron and Cover [2]. The guarantees
bound measures of distance between a selected model distribution and the
true data source distribution. In both cases the model is assumed to have
been selected so as to optimize an SRM tradeoff between model complexity
and the goodness of fit to the training data. The bounds hold without any
assumption relating the prior distribution to the data distribution, However,
the performance guarantee is better if there exist simple models that fit
well. The precise statement of these bounds are somewhat involved and
perhaps less interesting than the more elegant guarantee given in formula (6)
discussed below.
Guarantees for model averaging have also been proved. First I will consider
model averaging for density estimation. Let f 1 , f 2 , f 3 be an infinite
sequence of models each of which defines a probability distribution on a set
X . Let P be a "prior probability" on the densities f i . Assume an unknown
distribution g on X which need not be equal to any f i . Let S be a sample of
m elements of X sampled IID according to the distribution g. Let h be the
natural "posterior" density on X defined as follows where Z is a normalizing
constant.
Note that the posterior density h is a function of the sample and hence
is a random variable. Catoni [5] and Yang [23] prove somewhat different
general theorems both of which have as a special case the statement that,
independent of how g is selected, the expectation (over drawing a sample
according to g) of the Kullback-Leibler Divergent D(gjjh) is bounded as
follows.
Again we have that (6) holds without any assumed relation between g and
the prior P . If there happens to be a low complexity (simple) model f i such
that D(gjjf i ) is small, then the posterior density h will have small divergence
from g. If no simple model has small divergence from g then D(gjjh) can
be large. Also not that (6), unlike theorem 1, is vacuous for continuous
model classes. These observations also apply to the more general forms of
appearing in [23] and [5]. Catoni [4] also gives performance guarantees
for model averaging for density estimation over continuous model spaces
using a Gibbs posterior. However, the statements of these guarantees are
quite involved and the relationship to the bounds in this paper is unclear.
Yang [22] considers model averaging for prediction. Consider a fixed
distribution D on pairs hx; yi with x 2 X and y 2 f0; 1g. Consider a
countable class of conditional probability rules f 1 , f 2 , f 3 each
f i is a function from X to [0; 1] where f i (x) is interpreted as P (yjx; f i ).
Consider an arbitrary prior on the models f i and construct the posterior
given a sample S as Q(f i
This posterior on the models
induces a posterior h on y given x defined as follows.
Let g(x) be the true conditional probability P (yjx) as defined by the distribution
D. For any function g 0 from X to [0; 1] define the loss L(g 0 ) as
follows where x D denotes selecting x from the marginal of D on X .
Finally, define ffi i as follows.
For m 2, the following is a corollary of Yang's theorem.
iA
This formula bounds the loss of the Bayesian model average without making
any assumption about the relationship between the data distributions D and
the prior distribution P . However, it seems weaker than (5) or (6) in that it
does not imply even for a finite model class that for large samples the loss
of the posterior converges to the loss of the best model. As with (6), the
guarantee is vacuous for continuous model classes. These same observations
apply to the more general statement in [22].
Weighted model mixtures are also widely used in constructing algorithms
with on-line guarantees. In particular, the weighted majority algorithm
and its variants can be proved to compete well with the best expert on an
arbitrary sequence of labeled data [13, 6, 8, 7]. The posterior weighting
used in most on-line algorithms is a Gibbs posterior Q fi as defined in the
statement of theorem 2. One difference between these on-line guarantees and
theorem 1 is that for these algorithms one must know the appropriate value
of fi before seeing the training data. Since a-prior knowledge of fi is required,
the on-line algorithm is not guaranteed to perform well against the optimal
performing well against the optimal SRM tradeoff requires
tuning fi in response to the training data. Another difference between on-line
guarantees and either formula (1) or theorem 1 is that (1) (or theorem 1)
provides a guarantee even in cases where only incomplete searches over the
concept space are feasible. On-line guarantees require that the algorithm
find all concepts that perform well on the training data - finding a single
simple concept that fits well is insufficient.
The most closely related earlier result is a theorem in [15] bounding the
error rate of stochastic model selection in the case where the model is selected
stochastically from a set U of models under a probability measure that is
simply a renormalization of the prior on U . Theorem 1 is a generalization
of this result to the case of arbitrary posterior distributions.
4 Proof of Theorem 1
The departure point for the proof of theorem 1 is the following where S is
a sample of size m and \Delta(c) abbreviates
Lemma 2 For any prior distribution (probability measure) P on a (possibly
uncountable) concept space C we have the following.
8
4m
Proof: It suffices to prove the following.
4m (7)
Lemma 2 follows from (7) by an application of Markov's inequality. To prove
it suffices to prove the following for any individual given concept.
4m (8)
For a given concept c, the probability distribution on the sample induces a
probability distribution on \Delta(c). By the Chernoff bound this distribution
on \Delta satisfies the following.
It now suffices to show that any distribution satisfying must satisfy
(8). The distribution on \Delta satisfying (9) and maximizing Ee
is the
continuous density f (\Delta) satisfying
which implies
. So we have the following
Z 1e
theorem 1 we consider selecting a sample S. Lemma 2 implies
that with probability at least 1 \Gamma ffi over the selection of a sample S we have
the following.
4m
To prove theorem 1 it now suffices to show that the constraint (10) on
the function \Delta(c) implies the body of theorem 1. We are interested in
computing an upper bound on the quantity S). Note that
\Delta(c). We now prove the
following lemma.
Lemma 3 For fi ? 0, K ? 0, and Q;
we have that if
then
s
Before proving lemma 3 we note that lemmas 3 and 2 together imply
theorem 1. To see this consider a sample satisfying (10) and an arbitrary
posterior probability measure Q on concepts. It is possible to define three
infinite sequences of vectors
the conditions of lemma 3 with
satisfying the following.
By taking the limit of the conclusion of lemma 3 we then get E cQ \Delta(c)
To prove lemma 3 it suffices to consider only those values of i for which
dropping the indices where does not change the value of
enlarging the feasible set by weakening the constraint (10).
Furthermore, if at some point where
the theorem is immediate. So we can assume without loss of generality that
By Jensen's inequality we have (
. So it now
suffices to prove that
This is a consequence
of the following lemma. 1
1 The original version of this paper [16] proved a bound of approximately the form
maximizing
subject to constraint 10. A
Lemma 4 For fi ? 0, K ? 0, and Q;
and
then n
To prove lemma 4 we take P and Q as given and use the Kuhn-Tucker
conditions to find a vector y maximizing
subject to the constraint
(11).
are functions from R n to
R, y is a maximum of C(y) over the set satisfying f 1 (y)
and C and each f i are continuous and differentiable at y, then either
(at y), or there exists some f i with f i (at y), or there
exists a nonempty subset of the constraints f i 1
that positive coefficients 1 such that
Note that lemma 4 allows y i to be negative. The first step in proving
lemma 4 is to show that without loss of generality we can work with a
closed and compact feasible set. For K ? 0 it is not difficult to show that
there exists a feasible point, i.e., a vector y such that
Let C 0 denote an arbitrary feasible value, i.e.,
point y. Without loss of generality we need only consider points y satisfying
1. So we now have a constrained optimization problem
with objective function
set defined by the following
constraints.
version of theorem 1, which is of the form " l(Q; S)+
proved from this bound by an application of Jensen's inequality. The idea of maximizing
i and achieving theorem 1 directly is due to Robert Schapire.
Constraint (12) implies an upper bound on each y i and constraint (13) then
implies a lower bound on each y i . Hence the feasible set is closed and
compact.
We now note that any continuous objective function on a closed and
compact feasible set must be bounded and must achieve its maximum value
on some point in the set. A constraint of the form f(y) 0 will be called
active at y if For an objective function whose gradient is nonzero
everywhere, at least one constraint must be active at the maximum. Since C 0
is a feasible value of the objective function, constraint (13) can not be active
at the maximum. So by the Kuhn-Tucker lemma, the point y achieving the
maximum value must satisfy the following.
Which implies the following.
Since constraint (12) must be active at the maximum, we have the following.
So we get and the following.
Since this is the maximum value of
the lemma is proved.
5 Proof of Theorem 2
We wish to find a distribution Q minimizing B(Q) defined as follows where
the distribution P and the empirical error " l(c) are given and fixed.
s
Letting K be ln(1=ffi)+ln m+2 and letting fl be objective function
can be rewritten as follows where K and fl are fixed positive quantities
independent of Q.
s
To simplify the analysis we consider only finite concept classes. Let P i be
the prior probability of the ith concept and let " l i be the empirical error rate
of the ith concept. The problem now becomes finding values of Q i satisfying
minimizing the following.
s
If P i is zero then if Q i is nonzero we have that D(QjjP ) is infinite. So
for minimizing B(Q) we can assume that Q i is zero if P i is zero and we
can assume without loss of generality that all P i are nonzero. If all P i are
nonzero then the objective function is a continuous function of a compact
feasible set and hence realizes its minimum at some point in the feasible set.
Now consider the following partial derivative.
@
Note that if Q i is zero when P i is nonzero then @D(QjjP )=@Q
This means that any transfer of an infinitesimal quantity of probability mass
to Q i reduces the bound. So the minimum must not occur at a boundary
point satisfying we can assume without loss of generality that
is nonzero for each i where P i is nonzero - the two distributions have
the same support. The Kuhn-Tucker conditions then imply that
rB is in the direction of the gradient of one of the constraints
In all of these cases there must exist a single value such that
for all i we have @B=@Q This yields the following.
Hence the minimal distribution has the following form.
r
This is the distribution Q fi of theorem 2.
6 Proof of Theorems 3 and 4
be the posterior distribution of theorem 3. First we note the
following.
dP (c)
We have assumed that p( " l) is nondecreasing over the interval
1=m]. This implies the following.
We also have that " theorem 3 now follows from the
definition of B(Q).
We now prove theorem 4. First we define a concept distribution U such
that U induces a uniform distribution on those error rates " l with
Let W be the subset of the values " l 2 [0; 1] such that p( " l) ? 0. Let ff denote
the size of W as measured by the uniform measure on [0; 1]. Note that
ff 1. Define the concept distribution U as follows.
The total measure of U can be written as follows.
Z
dU
dP
dP
Z
Hence U is a probability measure on concepts.
Now let Q be an arbitrary posterior distribution on concepts. We have the
following.
dP
dP
dU
This implies the following where the third line follows from Jensen's inequality
s
s
s
min
s
7 Conclusion
PAC-Bayesian learning algorithms combine the flexibility of a prior distribution
on models with the performance guarantees of PAC algorithms. PAC-
Bayesian Stochastic model selection can be given performance guarantees
superior to analogous guarantees for deterministic PAC-Bayesian model se-
lection. The performance guarantees for stochastic model selection naturally
handle continuous concept classes and lead to a natural notion of an
optimal posterior distribution to use in stochastically selecting a model. Although
the optimal posterior is a Gibbs distribution, it is shown that under
mild assumptions simpler posterior distributions perform nearly as well. An
open question is whether better guarantees can be given for model averaging
rather than stochastic model selection.
Acknowledgments
I would like to give special thanks to Manfred Warmuth
for inspiring this paper and emphasizing the analogy between the
PAC and on-line settings. I would also like to give special thanks to Robert
Schapire for simplifying and strengthening theorem 1. Avrim Blum, Yoav
Freund, Michael Kearns, John Langford, Yishay Mansour, and Yoram Singer
also provided useful comments and suggestions.
--R
Complexity regularization with application to artificial neural networks.
Minimum complexity density estimation.
Learning classification trees.
Gibbs estimators.
Universal aggregation rules with sharp oracle inequali- ties
Warmuth How to use expert advice.
Adaptive game playing using multiplicative weights.
Predicting nearly as well as the best pruning of a decision tree.
An experimental and theoretical comparison of model selection methods.
Results on learnability and the Vapnik-Chervonenkis dimension
The weighted majority algo- rithm
Concept learning using complexity regulariza- tion
Some pac-bayesian theorems
On pruning and averaging decision trees.
An efficient extension to mixture techniques for prediction and decision trees.
A pac analysis of a bayesian estimator.
Learning non-parametric densities in tyerms of finite-dimensional parametric hypotheses
Adaptive estimation in pattern recognition by combining different procedures.
Mixing strategies for density estimation.
--TR
--CTR
Franois Laviolette , Mario Marchand, PAC-Bayes risk bounds for sample-compressed Gibbs classifiers, Proceedings of the 22nd international conference on Machine learning, p.481-488, August 07-11, 2005, Bonn, Germany
Matti Kriinen , John Langford, A comparison of tight generalization error bounds, Proceedings of the 22nd international conference on Machine learning, p.409-416, August 07-11, 2005, Bonn, Germany
Avrim Blum , John Lafferty , Mugizi Robert Rwebangira , Rajashekar Reddy, Semi-supervised learning using randomized mincuts, Proceedings of the twenty-first international conference on Machine learning, p.13, July 04-08, 2004, Banff, Alberta, Canada
Arindam Banerjee, On Bayesian bounds, Proceedings of the 23rd international conference on Machine learning, p.81-88, June 25-29, 2006, Pittsburgh, Pennsylvania
Ron Meir , Tong Zhang, Generalization error bounds for Bayesian mixture algorithms, The Journal of Machine Learning Research, 4, 12/1/2003
Matthias Seeger, Pac-bayesian generalisation error bounds for gaussian process classification, The Journal of Machine Learning Research, 3, p.233-269, 3/1/2003 | gibbs distribution;model averaging;posterior distribution;PAC-Baysian learning;PAC learning |
608351 | Relative Loss Bounds for Temporal-Difference Learning. | Foster and Vovk proved relative loss bounds for linear regression where the total loss of the on-line algorithm minus the total loss of the best linear predictor (chosen in hindsight) grows logarithmically with the number of trials. We give similar bounds for temporal-difference learning. Learning takes place in a sequence of trials where the learner tries to predict discounted sums of future reinforcement signals. The quality of the predictions is measured with the square loss and we bound the total loss of the on-line algorithm minus the total loss of the best linear predictor for the whole sequence of trials. Again the difference of the losses is logarithmic in the number of trials. The bounds hold for an arbitrary (worst-case) sequence of examples. We also give a bound on the expected difference for the case when the instances are chosen from an unknown distribution. For linear regression a corresponding lower bound shows that this expected bound cannot be improved substantially. | Introduction
Consider the following model of temporal-difference learning: Learning
proceeds in a sequence of trials t = 1, 2, 3, ..., where at trial t,
− the learner receives an instance vector x_t ∈ R^n,
− the learner makes a prediction ŷ_t ∈ R,
− the learner receives a reinforcement signal r_t ∈ R.
A pair (x_t, r_t) is called an example. The learner tries to predict the
outcomes y_t ∈ R. For a fixed discount rate parameter γ ∈ [0, 1), y_t is
the discounted sum

  y_t = Σ_{s=t}^∞ γ^{s−t} r_s     (1.1)

of the future reinforcement signals.¹ For example, if r_t is the profit of a
company in month t, then y t can be interpreted as an approximation of
the company's worth at time t. The discounted sum takes into account
that profits in the distant future are less important than short term
profits. Note that the outcome y t as defined in (1.1) is well-defined if
the reinforcement signals are bounded.
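As a simple illustration, the sketch below computes the discounted outcomes y_t from a finite reward sequence by the backward recursion y_t = r_t + γ y_{t+1}. With finitely many rewards this truncates the infinite sum of (1.1), which is exact if the remaining rewards are zero and a good approximation when the remaining discount factors are small.

import numpy as np

def discounted_outcomes(rewards, gamma):
    y = np.zeros(len(rewards))
    acc = 0.0
    for t in range(len(rewards) - 1, -1, -1):
        acc = rewards[t] + gamma * acc   # y_t = r_t + gamma * y_{t+1}
        y[t] = acc
    return y

print(discounted_outcomes([1.0, 0.0, 2.0, 1.0], gamma=0.5))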
In an episodic setting one can also define the outcomes y t as finite
discounted sums. This is discussed briefly in Section 7.
A strategy that chooses predictions for the learner is called an on-line
learning algorithm. The quality of a prediction is measured with
the square loss: The loss of the learner at trial t is (y_t − ŷ_t)² and the
loss of the learner at trials 1 through T is Σ_{t=1}^T (y_t − ŷ_t)².
We want to compare the loss of the learner against the losses of
linear functions. A linear function is represented by a weight vector
w ∈ R^n and the loss of w at trial t is (y_t − w'x_t)². Ideally we want to
bound the additional loss of the learner over the loss of the best linear
predictor for arbitrary sequences of examples, i.e. we want to bound

  Σ_{t=1}^T (y_t − ŷ_t)² − inf_{w∈R^n} Σ_{t=1}^T (y_t − w'x_t)²     (1.2)

for arbitrary T and arbitrary sequences of examples (x_1, r_1), ..., (x_T, r_T).
The first sum in (1.2) is the total loss of the learner at trials 1 through
T . The argument of the infimum is the total loss of the linear function
¹ Alternatively y_t could be defined as (1 − γ) Σ_{s=t}^∞ γ^{s−t} r_s, which makes y_t a
convex combination of the reinforcement signals r_t, r_{t+1}, .... The alternate definition
amounts to a simple rescaling of the outcomes and predictions.
w at trials 1 through T . Thus (1.2) is the additional total loss of the
learner over the total loss of the best linear function.
Following Vovk (1997) we also examine the more general problem of
bounding

  Σ_{t=1}^T (y_t − ŷ_t)² − inf_{w∈R^n} ( a‖w‖² + Σ_{t=1}^T (y_t − w'x_t)² )     (1.3)

for a fixed constant a ≥ 0. Here a‖w‖² is a measure of the complexity
of w, i.e. the infimum in (1.3) includes a charge for the complexity of
the linear function. For larger values of a it is obviously easier to show
bounds on (1.3).
Bounds on (1.2) or (1.3) that hold for arbitrary sequences of examples
are called relative loss bounds. Relative loss bounds for the
temporal-difference learning setting were first shown by Schapire and
Warmuth (1996). An overview of their results is given in Section 4.
They also show how algorithms that minimize (1.2) can be used for
value function approximation of Markov processes. This is an important
problem in reinforcement learning which is also called policy evaluation.
The Markov processes can have continuously many states which are
represented by real vectors. If one wants to predict state values then
our instances x t correspond to states of the environment, if action
values are predicted then an instance corresponds to both a state of
the environment and an action of the agent. For an introduction to
reinforcement learning see Sutton and Barto (1998).
The paper is organized as follows. We discuss previously known
relative loss bounds for linear regression and for temporal-difference
learning in Sections 3 and 4. In Section 5, we propose a new second
order learning algorithm for temporal-difference learning (the TLS
algorithm), and we prove relative loss bounds for this algorithm in
Section 6. In Section 7, we adapt the TLS algorithm to the episodic
case, where the trials are divided into episodes and where an outcome
is a discounted sum of the future reinforcement signals from the same episode.
We discuss previous second order algorithms for temporal-difference
learning in Section 8, and give lower bounds on the relative
loss in Section 9.
2. Notation and preliminaries
For n \in N, R^n is the set of n-dimensional real vectors. For m, n \in N,
R^{m \times n} is the set of real matrices with m rows and n columns. In this
paper vectors x \in R^n are column vectors and x' denotes the transpose
of x. The scalar product of two vectors w, x \in R^n is w'x = \sum_{i=1}^{n} w_i x_i and
the Euclidean norm of a vector x \in R^n is \|x\| = \sqrt{x'x}.
We recall some basic facts about positive (semi-)definite matrices:
- A matrix A \in R^{n \times n} is called positive definite if x'Ax > 0
  holds for all vectors x \in R^n \ {0}.
- A matrix A \in R^{n \times n} is called positive semi-definite if x'Ax \geq 0 holds for all x \in R^n.
- The sum of two positive semi-definite matrices is again positive
  semi-definite. The sum of a positive semi-definite matrix and a positive
  definite matrix is positive definite.
- Every positive definite matrix is invertible.
- For matrices A, B \in R^{n \times n} we write A \preceq B if B - A is positive
  semi-definite. In this case x'Ax \leq x'Bx for all vectors x \in R^n.
- The Sherman-Morrison formula (see Press, Flannery, Teukolsky, et al.)
      (A + xx')^{-1} = A^{-1} - \frac{A^{-1}xx'A^{-1}}{1 + x'A^{-1}x}            (2.1)
  holds for every positive definite matrix A \in R^{n \times n} and every vector x \in R^n.
For example, the unit matrix I \in R^{n \times n} is positive definite and for every
vector x \in R^n the matrix xx' \in R^{n \times n} is positive semi-definite.
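A quick numerical check of the rank-one Sherman-Morrison identity (2.1) as used throughout the paper; the random test matrix and vector are of course not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
B = rng.standard_normal((n, n))
A = B @ B.T + np.eye(n)          # positive definite by construction
x = rng.standard_normal(n)

A_inv = np.linalg.inv(A)
# Rank-one update of the inverse via Sherman-Morrison.
updated = A_inv - np.outer(A_inv @ x, x @ A_inv) / (1.0 + x @ A_inv @ x)
assert np.allclose(updated, np.linalg.inv(A + np.outer(x, x)))
```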
To find a vector w \in R^n that minimizes the term
    a\|w\|^2 + \sum_{s=1}^{t} (y_s - w \cdot x_s)^2                                   (2.2)
that appears in the relative loss (1.3) we define
    A_t = aI + \sum_{s=1}^{t} x_s x_s',        b_t = \sum_{s=1}^{t} y_s x_s.
Because (2.2) is convex in w, it is minimal if and only if its gradient
is zero, i.e. if and only if A_t w = b_t. If a > 0, then A_t is invertible and
w_t = A_t^{-1} b_t is the unique vector that minimizes (2.2). If a = 0, then
A_t might not be invertible and the equation A_t w = b_t might not have
a unique solution. The solution with the smallest Euclidean norm is
w_t = A_t^+ b_t, where A_t^+ is the pseudoinverse of A_t. For the definition of
the pseudoinverse of a matrix see, e.g., Rektorys (1994). There it is also
shown how the pseudoinverse of a matrix can be computed with the
singular value decomposition. We give a number of properties of the
pseudoinverse A
t in the appendix.
If a > 0, then applying the Sherman-Morrison formula (2.1) to A_t = A_{t-1} + x_t x_t' shows that
    A_t^{-1} = A_{t-1}^{-1} - \frac{A_{t-1}^{-1} x_t x_t' A_{t-1}^{-1}}{1 + x_t' A_{t-1}^{-1} x_t}.        (2.5)
3. Known relative loss bounds for linear regression
First note that linear regression is a special case of our setup since
y_t = r_t when \gamma = 0. The standard algorithm
for linear regression is the ridge regression algorithm which predicts
with w_{t-1} \cdot x_t at trial t. Relative loss bounds for this algorithm
(similar to the bounds given in the below two theorems) have been
proven in Foster (1991), Vovk (1997) and Azoury and Warmuth (1999).
The bounds obtained for ridge regression are weaker than the ones
proven for a new algorithm developed by Vovk. We will give a simple
motivation of this algorithm and then discuss the relative loss bounds
that were proven for it.
We have seen in Section 2 that the best linear function for trials 1
through t (i.e. the linear function that minimizes (2.2)) would make the
prediction b_t' A_t^{-1} x_t at trial t. Note that via b_t and A_t this prediction
depends on the examples (x_1, y_1), ..., (x_t, y_t). However, only the examples
(x_1, y_1), ..., (x_{t-1}, y_{t-1}) and the instance x_t are known to the
learner when it makes the prediction for trial t. If we set the unknown
outcome y_t to zero we get the prediction b_{t-1}' A_t^{-1} x_t. This prediction
was introduced in Vovk (1997) using a different motivation. The above
motivation follows Azoury and Warmuth (1999). Forster (1999) gives an
alternate game theoretic motivation. Vovk proved the following bound
on (1.3) for his prediction algorithm.
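The following sketch makes the prediction just described concrete — predict with b_{t-1}' A_t^{-1} x_t, where A_t = aI + \sum_{s \leq t} x_s x_s' and b_{t-1} = \sum_{s < t} y_s x_s — under my reading of the (partly garbled) formulas above; it is an illustration, not a definitive implementation of Vovk's algorithm.

```python
import numpy as np

def forward_predictions(X, y, a=1.0):
    """Predict y_t with b_{t-1}' A_t^{-1} x_t: ridge-style regression on past
    examples with the current instance already included in A_t (the unknown
    outcome y_t is treated as zero)."""
    T, n = X.shape
    A = a * np.eye(n)
    b = np.zeros(n)
    preds = np.zeros(T)
    for t in range(T):
        x = X[t]
        A += np.outer(x, x)                 # A_t = a I + sum_{s<=t} x_s x_s'
        preds[t] = b @ np.linalg.solve(A, x)
        b += y[t] * x                       # b_t = sum_{s<=t} y_s x_s
    return preds
```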
THEOREM 3.1. Consider linear regression
any sequence of examples in R n
with the predictions " y
a
A T
Y
a
an
where x t;i is the i-th component of the vector x t and where
In Vovk's version of Theorem 3.1 the term X 2
n is replaced by the
larger is a bound on the supremum norms of the
instances . The last inequality in Theorem 3.1 follows from
Y
a
an
-z
an
where the first inequality holds because the geometric mean is always
smaller than the arithmetic mean.
Azoury and Warmuth (1999) and Forster (1999) give the following
refined bound for Vovk's linear regression algorithm. There only the
case a > 0 is considered. We will show that the case a = 0 of Theorem
3.2 (i) follows from the case a > 0 by letting a go to zero.
THEOREM 3.2. Consider linear regression
any sequence of examples in R n \Theta R.
(i) If a 0, then with the predictions "
(ii) If a ? 0, then
a
A T
for all vectors x
Proof. We only have to show that the equality in (i) also holds for
the case a = 0. We do this by showing that both sides of the equality
are continuous in a 2 [0; 1). Because of Lemma A.2 we only have to
check that for t 2 Tg the term
is continuous in a 2 [0; 1). If x t 2 X t\Gamma1 , this again follows from Lemma
A.2. Otherwise x
. Then by Lemma A.3, the expression (3.1) is
zero for a = 0. We have to show that (3.1) converges to zero for a & 0.
For a ? 0, we can rewrite (3.1) by applying (2.5):
(b 0
The factor (b 0
converges because of Lemma A.2. Because of
Lemma A.4, the term x 0
goes to zero as a & 0. Together
this shows that (3.1) indeed goes to zero as a &
The learning algorithm of Theorem 3.1 and Theorem 3.2 is a second
order algorithm in that it uses second derivatives. There is a simpler
first order algorithm called the Widrow-Hoff or Least Mean Square algorithm
(Widrow & Stearns, 1985). This algorithm maintains a weight
vector w_t \in R^n and predicts with \hat{y}_t = w_t \cdot x_t. The weight vector is
updated by gradient descent. That is, w_{t+1} = w_t - \eta_t (w_t \cdot x_t - y_t) x_t
for some learning rates \eta_t > 0.
A method for setting the learning rates for the purpose of obtaining
good relative loss bounds is given in Cesa-Bianchi, Long and Warmuth
(1996) and in Kivinen and Warmuth (1997). In this method the learner
needs to know an upper bound X on the Euclidean norms of the
instances and needs to know parameters W , K such that there is a
vector w 2 R n with norm kwk W and loss
For any such vector w the bound
holds. As noted by Vovk (1997) this bound is incomparable to the
bounds of Theorem 3.1 (See also the next section). The bound (3.3)
also holds for the ridge regression algorithm (Hassibi, Kivinen, & War-
muth, 1995). There the parameter a is set depending on W and K. We
believe that with a proper tuning of a such bounds also hold for Vovk's algorithm.
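For contrast with the second order algorithms above, here is a minimal version of the first order Widrow-Hoff update with a constant learning rate; the tuned learning rates needed for a bound of the form (3.3) depend on X, W and K and are not reproduced here.

```python
import numpy as np

def widrow_hoff(X, y, eta=0.01):
    """First order on-line linear regression (Widrow-Hoff / LMS):
    predict w_t . x_t, then update w by gradient descent on the square loss."""
    T, n = X.shape
    w = np.zeros(n)
    preds = np.zeros(T)
    for t in range(T):
        preds[t] = w @ X[t]
        w -= eta * 2.0 * (preds[t] - y[t]) * X[t]
    return preds
```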
4. Known relative loss bounds for temporal-difference
learning
For the case that the discount rate parameter \gamma is not assumed to be
zero, Schapire and Warmuth (1996) have given a number of different
relative loss bounds for the learning algorithm TD*(\lambda).
The first order algorithm TD*(\lambda) is essentially a generalization of the
Widrow-Hoff algorithm. It is a slight modification of the learning algorithm
TD(\lambda) proposed by Sutton (1988). Schapire and Warmuth show
that the loss of TD (1) with a specific setting of its learning rate is
(where c
and that the loss of the algorithm TD (0) with a
specific setting of its learning rate is
for every vector w 2 R n with kwk W and
The setting of the learning rate depends on an upper bound X on the
Euclidean norms of the instances and on W and K. The learner needs
to know these parameters in advance.
The loss of the best linear function will often grow linearly in T , e.g.
if the examples are corrupted by Gaussian noise. In this case the relative
loss bounds in (3.3), (4.1) and (4.2) will grow like
T . The second
order learning algorithm we propose for temporal-difference learning
has the advantage that the relative loss bounds we can prove for it
grow only logarithmically in T . Also, our algorithm does not need to
know parameters like K and W . However, it needs to know an upper
bound Y on the absolute values of the outcomes y t .
TD*(\lambda) can be sensitive to the choice of \lambda. Another advantage of
our algorithm is that we do not have to choose a parameter like the
\lambda of the TD*(\lambda) algorithm.
5. A new second order algorithm for temporal-difference
learning
In this section we propose a new algorithm for the temporal-difference
learning setting. We call this algorithm the temporal least squares
algorithm, or shorter the TLS algorithm.
We assume that the absolute values of the outcomes y_t are bounded
by some constant Y, i.e. |y_t| \leq Y for all t,
and assume that the bound Y, the discount rate parameter \gamma, and
the parameter a are known to the learner. Knowing Y the learner can
"clip" a real number y \in R using the function C_Y(y) = \min(Y, \max(-Y, y)).        (5.2)
5.1. Motivation of the TLS algorithm
The new second-order algorithm for temporal-difference learning is
given in Table I. We call this algorithm the Temporal Least Squares
(TLS) algorithm. The motivation for the TLS algorithm is the same
as the motivation from Azoury and Warmuth (1999) that we gave for
Vovk's prediction for linear regression in Section 3. We will use the
equality
k=s
that holds for all s t. The best linear function for trials 1 through t
that minimizes (2.2) would make the prediction
k=s
at trial t. We set the unknown outcome y t to zero and get the prediction
e
k=s
The TLS algorithm predicts with "
clipping
function assures that the prediction lies in the bounded range
In the following we will show that the relative loss (1.3) of
TLS is at most
Table
I. The temporal least squares (TLS) algorithm.
At trial t, the learner knows
  - the parameters Y, \gamma, a,
  - the instances x_1, ..., x_t,
  - the reinforcement signals r_1, ..., r_{t-1}.
TLS predicts with \hat{y}_t = C_Y(x_t' A_t^{-1} \tilde{b}_t), where
    A_t = aI + \sum_{s=1}^{t} x_s x_s',    \tilde{b}_t = \sum_{s=1}^{t-1} \Big( \sum_{k=s}^{t-1} \gamma^{k-s} r_k \Big) x_s,
and C_Y given by (5.2) clips the prediction to the interval [-Y, Y].
If A_t is not invertible, then the inverse A_t^{-1} of A_t must be replaced
by the pseudoinverse A_t^+.
TLS is at most the bound given in Theorem 6.1 below,
and we will use this result to get worst and average case relative loss
bounds that are easier to interpret.
5.2. Implementation of the TLS algorithm
For the case a > 0 a straightforward implementation of the TLS algorithm
would need O(n 3 ) arithmetic operations at each trial to compute
the inverse of the matrix A t . In Table II we give an implementation that
only needs O(n 2 ) arithmetic operations per trial. This is achieved by
computing the inverse of A t iteratively using the Sherman-Morrison
formula (2.5).
This implementation makes the correct predictions because at the
end of each FOR-loop
This follows from the Sherman-Morrison formula (2.5) and from the
equality
Table
II. Implementation of TLS for a ? 0.
A inv := 1
a
I 2 R n\Thetan
z
Receive instance vector x t 2 R n
A inv := A inv \Gamma
t A inv x t
Predict with " y
Receive reinforcement signal r t 2 R
z
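Since the listing in Table II is partly garbled, below is my reconstruction of an O(n^2)-per-trial implementation as runnable code. The bookkeeping (a discounting trace vector e with e <- gamma*e + x_t and an accumulator b_tilde with b_tilde <- b_tilde + r_t*e) follows from the definition of the TLS prediction in Table I; the variable names and the exact order of updates are my own and may differ from the paper's listing.

```python
import numpy as np

def tls(X, r, gamma, Y, a=1.0):
    """Temporal least squares (TLS): predict clip(x_t' A_t^{-1} b_tilde_t, [-Y, Y]),
    where A_t = a I + sum_{s<=t} x_s x_s' and
    b_tilde_t = sum_{s<t} (sum_{k=s}^{t-1} gamma^{k-s} r_k) x_s.
    A_t^{-1} is maintained with the Sherman-Morrison formula (2.5)."""
    T, n = X.shape
    A_inv = np.eye(n) / a
    b_tilde = np.zeros(n)     # b_tilde_t
    e = np.zeros(n)           # trace e_t = sum_{s<=t} gamma^{t-s} x_s
    preds = np.zeros(T)
    for t in range(T):
        x = X[t]
        A_inv -= np.outer(A_inv @ x, x @ A_inv) / (1.0 + x @ A_inv @ x)
        preds[t] = np.clip(b_tilde @ A_inv @ x, -Y, Y)
        e = gamma * e + x
        b_tilde += r[t] * e   # b_tilde_{t+1} = b_tilde_t + r_t e_t
    return preds
```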
6. Relative loss bounds for the TLS algorithm
In the temporal-difference learning setting we do not know the outcomes
when we need to predict at trial t. So we cannot run
Vovk's linear regression algorithm which uses these outcomes in its pre-
diction. The TLS algorithm approximates Vovk's prediction by setting
the future reinforcement signals to zero (It also clips the prediction into
the range [\GammaY; Y ]). We will show that the loss of the TLS algorithm
is not much worse than the loss of Vovk's algorithm for which good
relative loss bounds are known.
We start by showing two lemmas. The first is a technical lemma. We
use it for proving the second lemma in which we bound the absolute
values of the differences between Vovk's prediction b 0
and the
unclipped prediction e
of the TLS algorithm.
LEMMA 6.1. If a 0, then s t and any vectors x
Proof. From
it follows that
If a ? 0, then the pseudoinverses are inverses and the Sherman-Morrison
formula (2.5) shows that
A
Thus A \Gamma1
s and x 0
This proves the lemma for
a ? 0. For the case a = 0 we use Lemma A.2 and let a go to 0. 2
LEMMA 6.2.
Proof. Note that
Thus
Lemma 6:1
YT
-z
-z
now show our main result.
THEOREM 6.1. Consider temporal-difference learning with \gamma \in [0, 1)
and a \geq 0. Let (x_1, y_1), ..., (x_T, y_T) be any sequence of examples in
R^n \times R such that the outcomes y_1, ..., y_T given by (1.1) lie in the
real interval [-Y, Y]. Then with the predictions \hat{y}_t of
the TLS algorithm,
Proof. Let
. Because of y
we know that (y i.e. the relative loss bound of
Theorem 3.2 (i) also holds for the clipped predictions C Y (p t ). Thus it
suffices to show that
This holds because
4Y
Lemma 6:2
:In the next two subsections we apply Theorem 6.1 to show relative
loss bounds for the worst case and for the average case.
6.1. Worst case relative loss bound
COROLLARY 6.1. Consider temporal-difference learning with \gamma \in [0, 1)
and a > 0. Let (x_1, y_1), ..., (x_T, y_T) be any sequence of examples
in R^n \times R such that the outcomes y_1, ..., y_T lie in the
real interval [-Y, Y]. Then with the predictions \hat{y}_t of
the TLS algorithm,
a
A T
an
Proof. The first inequality follows from Theorem 6.1 and Theorem
3.2 (ii). The second follows from Theorem 3.1. 2
6.2. Average case relative loss bound
If we assume that the outcomes y lie in [\GammaY; Y ] and that the
instances are i.i.d. with some unknown distribution on R n , we
can show an upper bound on the expectation of the relative loss (1.2)
for trials 1 through T that only depends on n; Y; fl; T . In particular
we do not need the term akwk 2 that measures the complexity of the
vector w in the relative loss (1.3), and we do not need to assume that
the instances are bounded. To show this result we will use Theorem 6.1
and will then bound sums of terms x 0
with the
following theorem (Tr(A) is the trace of a square matrix A and dim(X)
is the dimensionality of a vector space X).
THEOREM 6.2. For any t vectors x linear span
Proof. We first look at the case a = choose
any orthonormal basis e em of X t . Then
The above can also be written as
This means that if we interpret the matrix
n\Thetan as a
linear function from R n to R n , it is the identity function on X t . The
assertion for the case a = 0 now follows from
1i;jm
1i;jm
1i;jm
1jm
1jm
1jm
1jm
To prove the theorem for the case a ? 0 we choose an arbitrary
orthonormal basis e apply the result for a = 0 to
the vectors x 1\Gamman := First note
that
s=1\Gamman x s x 0
. From the case
we have
s=1\Gamman x 0
since the vectors fx 1\Gamman have
rank n. The equality for a ? 0 now follows from
s=1\Gamman x 0
a
t ). The first inequality for the case a ? 0
follows from aTr(A \Gamma1
COROLLARY 6.2. Consider the temporal-difference learning setting
with Assume that the instances x are
i.i.d. with unknown distribution on R n and that the outcomes y
given by (1.1) lie in the real interval [\GammaY; Y ]. Then with the predictions
of the TLS algorithm, the expectation of (1.2) is
Proof. Because of Theorem 6.1 the expected relative loss is at most
Because are i.i.d. and because of Theorem 6.2:
This proves the first inequality of Corollary 6.2. The second follows
from
Table
III. The temporal least squares (TLS) algorithm for episodic learning.
At trial t, TLS predicts with
where
rk
CY given by (5.2) clips the prediction to the interval [\GammaY; Y ] and start(k)
is the first trial in the same episode to which trial k belongs.
If A t is not invertible, then the inverse A \Gamma1
t of A t must be replaced by
the pseudoinverse A
t .
7. Episodic learning
Until now we studied a setting where the outcomes y_t = \sum_{s=t}^{\infty} \gamma^{s-t} r_s
are discounted sums of all future reinforcement signals. If we use our
algorithm for policy evaluation in reinforcement learning, this corresponds
to looking at continuing tasks (see Sutton and Barto (1998)).
For episodic tasks the trials are partitioned into episodes of finite length.
Now an outcome depends only on reinforcement signals that belong to
the same episode.
Let t be a trial. The first trial that is in the same episode as trial t
is denoted by start(t) and the last by end(t). With this notation and
with a discount rate parameter \gamma \in [0, 1], the outcome y_t in the episodic
setting is defined as
    y_t = \sum_{s=t}^{end(t)} \gamma^{s-t} r_s.                                   (7.1)
This replaces the definition of y_t given in (1.1) for the continuous
setting. The definitions of the relative loss (1.2) and (1.3) remain un-
changed. Note that the continuous setting is essentially the episodic
setting with one episode of infinite length.
With the same motivation as in Section 5 we get the TLS algorithm
for episodic learning which is presented in Table III. An implementation
of this algorithm is given in Table IV. We can check the correctness of
this implementation by verifying that
Table
IV. Implementation of TLS for episodic learning.
A inv := 1
a
I 2 R n\Thetan
If a new episode starts at trial t, then set z := 0 2 R n
Receive instance vector x t 2 R n
A inv := A inv \Gamma
t A inv x t
Predict with " y
Receive reinforcement signal r t 2 R
z
hold after every iteration of the FOR-loop. This follows from (2.5) and
the equality
A note for the practitioner: If a = 0 and if t is small, then the matrix
A t might not be invertible and our algorithm uses the pseudoinverse of
A t . In practice we suggest to use a ? 0 and tune this parameter. Then
A t is always invertible and the calculation of pseudoinverses can be
avoided. We also conjecture that the clipping is not needed for practical
data.
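In code, the only change relative to the continuing-task sketch given in Section 5.2 is that the discounting trace is reset at episode boundaries (and, if a = 0, a pseudoinverse may be needed); a hedged sketch, with the episode-start flags assumed to be given:

```python
import numpy as np

def tls_episodic(X, r, episode_starts, gamma, Y, a=1.0):
    """Episodic TLS: same recursions as the continuing-task version, but the
    discounting trace e is reset to zero at the start of each episode."""
    T, n = X.shape
    A_inv = np.eye(n) / a
    b_tilde, e = np.zeros(n), np.zeros(n)
    preds = np.zeros(T)
    for t in range(T):
        if episode_starts[t]:          # boolean flags marking episode boundaries
            e = np.zeros(n)
        x = X[t]
        A_inv -= np.outer(A_inv @ x, x @ A_inv) / (1.0 + x @ A_inv @ x)
        preds[t] = np.clip(b_tilde @ A_inv @ x, -Y, Y)
        e = gamma * e + x
        b_tilde += r[t] * e
    return preds
```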
We now show relative loss bounds for episodic learning. Again we
assume that a bound Y on the outcomes is known to the learner in
advance. The proof of the following theorem is very similar to the proof
of Theorem 6.1.
THEOREM 7.1. Consider temporal-difference learning with episodes
of length at most ' and let a 0. Let
sequence of examples in R n \Theta R such that the outcomes y
given by (7.1) lie in the real interval [\GammaY; Y ]. Then with the predictions
of the TLS algorithm of Table III,
If a ? 0, then (7.2) is bounded by
an
If the instances x are i.i.d. with some unknown distribution
on R n , then the expectation of (7.2) is at most
A lower bound corresponding to (7.3) is shown in Theorem 9.2.
Note that Theorem 7.1 does not exploit the fact that different episodes
have varying length. Related theorems that depend on the actual lengths
of the episodes can easily be developed.
8. Other second order algorithms
Second order algorithms for temporal-difference learning have been
proposed by Bradtke and Barto (1996) and Boyan (1999). We compare
the algorithms in the episodic setting. Their algorithms maintain weight
vectors w t and predict with "
at trial t.
Bradtke and Barto's Least-squares TD, or shorter LSTD, algorithm
uses the weight vectors
r s x s
Boyan's LSTD() algorithm (he only considers the case
a parameter 2 [0; 1] like the TD() algorithm. It uses the weight
vectors
z s
where
In contrast our TLS algorithm uses the weight vectors
s
and the prediction of the TLS algorithm at trial t is "
the above formulas the inverse must be replaced by the pseudoinverse
if the matrix is not invertible.)
Note that the TLS algorithm does not have a parameter like TD()
or LSTD() and that for the case the algorithms LSTD and
are identical. An important difference of the TLS algorithm to
TD() and LSTD() is that it does not use differences in the definition
of the "covariance" matrix.
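To make the comparison concrete, here is a sketch of the standard batch LSTD weight computation in its textbook form (A = sum_s x_s (x_s - gamma x_{s+1})', b = sum_s r_s x_s, w = A^+ b); the exact indexing conventions of Bradtke and Barto's formulation are not reproduced from the (garbled) display above, so treat this only as an illustration of where the temporal difference enters the "covariance" matrix.

```python
import numpy as np

def lstd_weights(X, r, gamma):
    """Batch LSTD (textbook form): w = A^+ b with
    A = sum_s x_s (x_s - gamma x_{s+1})' and b = sum_s r_s x_s.
    Note the difference x_s - gamma x_{s+1} inside A; the TLS algorithm
    instead uses the plain covariance sum_s x_s x_s'."""
    T, n = X.shape
    A = np.zeros((n, n))
    b = np.zeros(n)
    for s in range(T - 1):
        A += np.outer(X[s], X[s] - gamma * X[s + 1])
        b += r[s] * X[s]
    return np.linalg.pinv(A) @ b
```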
Bradtke, Barto and Boyan experimentally compare their algorithms
to TD(). Under comparatively strong assumptions Bradtke and Barto
also show that the w t of their algorithm converge asymptotically.
The TLS algorithm we proposed in this paper was designed to minimize
the relative loss (1.3), and our relative loss bounds show that TLS
does this well. We do not know whether similar relative loss bounds hold
for the LSTD and LSTD() algorithms. An experimental comparison
would be useful.
9. Lower bounds
In this section we give lower bounds for linear regression and for episodic
temporal-difference learning. First consider the case of linear regression,
i.e. \gamma = 0. In this case the outcomes y_t are equal to the reinforcement
signals. If the outcomes y t lie in [\GammaY; Y ], then Corollary 6.2 gives the
upper bound of
on the expected relative loss (1.2). Here the examples are i.i.d. with
respect to an arbitrary distribution.
In the next theorem we show that the bound (9.1) cannot be improved
substantially. Our proof is very similar to the proof of Theorem
2 of Vovk (1997). However, if the dimension n of the instances is greater
than one, the examples in his proof are generated by a stochastic
strategy and are not i.i.d.
THEOREM 9.1. Consider linear regression
there is a constant C and a distribution D on the set
of examples R n \Theta [\GammaY; Y ] such that for all T and for every learning
algorithm the expectation of the relative loss (1.2) is
where the examples are i.i.d. with distribution D.
Proof. For a fixed parameter ff 1 we generate a distribution D on
the examples with the following stochastic strategy: A vector ' 2 [0;
is chosen from the prior distribution Beta(ff; ff) n , i.e. the components
of ' are i.i.d. with distribution Beta(ff; ff). Then is the distribution
for which the example
n and the example
n . Here e are the unit vectors of R n .
In each trial the examples are generated i.i.d. with D ' . We can calculate
the Bayes optimal learning algorithm for which the expectation of the
loss in trials 1 through T is minimal. The expectation of the relative
loss of this algorithm gives the lower bound of Theorem 9.1.
Details of the proof are given in the appendix. 2
Now consider the setting discussed in Section 7. Here the
trials are partitioned into episodes and the outcomes are given by y
s=t . The following lower bound is proven by a reduction to
the previous lower bound for linear regression.
THEOREM 9.2. Consider episodic temporal-difference learning where
all episodes have fixed length '. Let [\GammaY; Y ] be a range for the outcomes.
Then for every " ? 0 there is a constant C and a stochastic strategy
that generates instances x 1 , x reinforcement signals r 1 ,
such that the outcomes lie in [\GammaY; Y ] and for all T divisible by
' and for every learning algorithm the expectation (over the stochastic
choice of the examples) of the relative loss (1.2) is
Proof. We modify the stochastic strategy used in the proof of Theorem
9.1. When this strategy generates an instance x t and an outcome
y t in trial t, we now generate a whole episode of ' trials with instances
reinforcement signals . The outcomes
for this episode are fl
Consider just the q-th trials from each episode fore some 1 q '.
In these trials the learner essentially processes the scaled examples
the lower bound of Theorem 9.1 applies with a factor
of fl 2('\Gammaq) . All ' choices of q lead to a factor of 1
the lower bound. 2
10. Conclusion and open problems
We proposed a new algorithm for temporal-difference learning, the
TLS algorithm. Contrary to previous second order algorithms, the new
algorithm does not use differences in the definition of its "covariance
matrix", see discussion in Section 8. The main question is whether these
differences are really helpful. We proved worst and average case relative
loss bounds for the TLS algorithm. It would be interesting to know how
tight our bounds are for some practical data.
In our bounds the class of linear functions serves as a comparison
class. We use a second order algorithm and its additional loss over
the loss of the best comparator is logarithmic in the number of trials.
We conjecture that even for linear regression there is no first order
algorithm with adaptive learning rates for which the additional loss is
logarithmic in the number of trials.
The algorithms analyzed here can be applied to the case when the
instances are expanded to feature vectors and the dot product between
two feature vectors is given by a kernel function (see Saunders, Gam-
merman, & Vovk, 1998). Also Fourier or wavelet transforms can be used
to extract information from the instances, see Walker (1996) and Graps
(1995). With these linear transforms one can reduce the dimensionality
of the comparison class which leads to smaller relative loss bounds.
So far we compared the total loss of the on-line algorithm to the
total loss of the best linear predictor on the whole sequence of examples.
Now suppose that the comparator is produced by partitioning the data
sequence into k segments and picking the best linear predictor for each
segment. Again we aim to bound the total loss of the on-line algorithm
minus the total loss of the best comparator of this form. Such bound
have been obtained by Herbster and Warmuth (1998) for the case of
linear regression using first-order algorithms. We would like to know
whether there is a simple second-order algorithm for linear regression
that requires O(n 2 ) update time per trial and for which the additional
loss grows with the sums of the logs of the segment lengths.
Most of our paper focused on continuous learning, where each outcome
is an infinite discounted sum of future reinforcement signals. In
Section 7 we discussed how the TLS algorithm can be adapted to
the episodic setting. Here the outcomes only depend on reinforcement
signals from the same episode: y_t = \sum_{s=t}^{end(t)} \gamma^{s-t} r_s.
For some applications it might make more sense to let the outcomes
y t be convex combinations of the future reinforcement signals of the
episode and define
    y_t = \frac{\sum_{s=t}^{end(t)} \gamma^{s-t} r_s}{\sum_{s=t}^{end(t)} \gamma^{s-t}}.            (10.1)
In the case \gamma = 1 each outcome would be the average of the future
reinforcement signals. We do not know of any relative loss bounds for
the case when y t is defined as (10.1).
On a more technical level we would like to know if it is really necessary
to clip the predictions of the temporal-difference algorithm we
proposed. Our proofs are reductions to the previous proofs for linear
regression. Direct proofs might avoid clipping.
Another open technical question is discussed at the end of Section
3. We conjecture that the parameter a in Vovk's linear regression algorithm
can be tuned to obtain bounds of the form (3.3) proven for
the (first order) Widrow-Hoff algorithm. Similarly we believe that the
parameter a in the new (second order) learning algorithm of the paper
can be tuned to obtain the bound (4.1) proven for the (first order)
TD () algorithm of Schapire and Warmuth.
Finally, note that we do not have lower bounds for the continuous
setting with \gamma > 0. It should be possible to show a lower bound of
on the expected relative loss (1.2). (See Theorem
9.2 for a corresponding lower bound in the episodic case.)
Acknowledgements
Jurgen Forster was supported by a "DAAD Doktorandenstipendium im
Rahmen des gemeinsamen Hochschulsonderprogramms III von Bund
und Landern". Manfred Warmuth was supported by the NSF grant
CCR-9821087. Thanks to Nigel Duffy for valuable comments.
--R
Relative loss bounds for on-line density estimation with the exponential family of distributions
Linear analysis: An introductory course.
On relative loss bounds in generalized linear regression.
Prediction in the worst case.
An introduction to wavelets.
Unpublished manuscript.
Department of Computer Science
Tracking the best regressor.
Additive versus exponentiated gradient updates for linear prediction
Numerical recipes in pascal.
Survey of applicable mathematics
Ridge regression learning algorithm in dual variables.
On the Worst-case Analysis of Temporal-Difference Learning Algorithms
Learning to predict by the methods of temporal differences.
Reinforcement learning: An introduction.
Competitive on-line linear regression
Fast Fourier transforms
Adaptive signal processing.
--TR | temporal-difference learning;relative loss bounds;on-line learning;machine learning |
608352 | Polynomial-Time Decomposition Algorithms for Support Vector Machines. | This paper studies the convergence properties of a general class of decomposition algorithms for support vector machines (SVMs). We provide a model algorithm for decomposition, and prove necessary and sufficient conditions for stepwise improvement of this algorithm. We introduce a simple rate certifying condition and prove a polynomial-time bound on the rate of convergence of the model algorithm when it satisfies this condition. Although it is not clear that existing SVM algorithms satisfy this condition, we provide a version of the model algorithm that does. For this algorithm we show that when the slack multiplier C satisfies \sqrt{1/2} C mL, where m is the number of samples and L is a matrix norm, then it takes no more than 4LC2m4/ε iterations to drive the criterion to within ε of its optimum. | Introduction
The soft margin formulation in (Cortes & Vapnik, 1995) has the advantage that it provides a
design criterion for support vector machines (SVMs) for both separable and nonseparable data
while maintaining a convex programming problem. To maintain a computationally feasible
approach across all kernels, algorithms are developed for the Wolfe Dual Quadratic Program
(QP) problem whose size is independent of the dimension of the ambient space. The Gram
matrix for the Wolfe Dual is is the number of data samples. For large m
the storage requirements for this matrix can be excessive, thereby preventing the application
of many existing QP solvers. This barrier can be overcome by decomposing the original QP
problem into smaller QP problems and employing algorithmic strategies that solve a sequence
of these smaller QP problems. For the class of algorithms considered here these smaller QP
problems are restrictions of the original QP problem where optimization is allowed over a
subset of the data called the working set. The key is to select working sets that guarantee
progress toward the original problem solution at each step. Such algorithms are commonly
referred to as decomposition algorithms, and many existing SVM algorithms fall into this
class (Cristianini & Shawe-Taylor, 2000; Joachims, 1998; Keerthi, Shevade, Bhattacharyya,
1998). In this paper
we provide a model algorithm for decomposition and prove necessary and sufficient conditions
for stepwise improvement of this algorithm. These conditions require that each working set
contain a certifying pair (defined in section 3). Computation of a certifying pair takes O(m)
time. We define a simple "rate certifying" condition on certifying pairs that enables the proof
of a polynomial-time bound on the rate of convergence. It is not clear that the working sets
chosen by existing SVM algorithms contain certifying pairs that satisfy this condition. On the
other hand, we provide an O(m log m) algorithm for determining a certifying pair that does.
The next section sets the stage for our development by providing a formal definition of the
problem and establishing some of its basic properties.
Preliminaries
Let S = ((x_1, y_1), ..., (x_m, y_m)) be a finite set of observations from a two-class pattern recognition
problem where x_i \in X and y_i \in {-1, 1}. The Support Vector Machine (SVM) maps the
space of covariates X to a Hilbert space H of higher dimension (possibly infinite), and fits an
optimal linear classifier in H. It does so by choosing a map \Phi: X \to H in such a way that
\Phi(x) \cdot \Phi(x') = K(x, x') for a known and easy to evaluate function K. Sufficient conditions for
the existence of such a map are provided by Mercer's theorem (Vapnik, 1998). Let z_i = y_i \Phi(x_i),
so that z_i \cdot z_j = y_i y_j K(x_i, x_j).
A linear classifier in H is given by the sign of w \cdot \Phi(x) + b for some w \in H and b \in R.
In the soft margin formulation of (Cortes & Vapnik, 1995) the optimal classifier is obtained from
the \alpha that optimizes the Wolfe Dual quadratic programming problem,
    max_{\alpha} \sum_{i=1}^{m} \alpha_i - \frac{1}{2} \sum_{i,j=1}^{m} \alpha_i \alpha_j z_i \cdot z_j                    (3)
    s.t. 0 \leq \alpha_i \leq C, i = 1, ..., m, and \sum_{i=1}^{m} y_i \alpha_i = 0,
where z_i \cdot z_j = y_i y_j K(x_i, x_j).
The choice of the unspecified parameter C > 0 has been investigated but we do not address
that here. Once \alpha has been determined the optimal value of b is given by
    \tilde{v}(\alpha)_{low} \leq b \leq \tilde{v}(\alpha)_{high},
where \tilde{v}(\alpha)_{low} and \tilde{v}(\alpha)_{high} are defined in section 3. This paper is concerned with the analysis
of a class of algorithms for WD(S) that are motivated by situations where m is so large that
direct storage of Q is prohibitive.
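For concreteness, a small sketch of the objects just introduced: the matrix Q with entries Q_ij = z_i . z_j = y_i y_j K(x_i, x_j), the dual criterion R(alpha) = sum_i alpha_i - 0.5 alpha' Q alpha, and the feasibility conditions 0 <= alpha_i <= C, sum_i y_i alpha_i = 0. This is the standard soft-margin Wolfe dual; the garbled display equations above are assumed to coincide with it.

```python
import numpy as np

def gram_matrix(X, y, kernel):
    """Q_{ij} = y_i y_j K(x_i, x_j)."""
    K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
    return np.outer(y, y) * K

def dual_objective(alpha, Q):
    """Wolfe dual criterion R(alpha) = sum_i alpha_i - 0.5 * alpha' Q alpha."""
    return alpha.sum() - 0.5 * alpha @ Q @ alpha

def is_feasible(alpha, y, C, tol=1e-9):
    """Check 0 <= alpha_i <= C and sum_i y_i alpha_i = 0 (up to tolerance)."""
    return (alpha >= -tol).all() and (alpha <= C + tol).all() and abs(alpha @ y) <= tol
```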
Let WD(S) denote an instance of the Wolfe Dual dened by the sample set S. Let (S)
represent the set of feasible solutions for WD(S),
Note that (S) is both convex and compact. Denote the Wolfe Dual criterion by
and let (S) represent the set of optimal solutions for WD(S),
R()g:
verifying that Q is symmetric and
positive semi-denite. Thus, R() is a concave function over (S) and R
unique. The Lagrangian for WD(S) takes the form
Then the Karush-Kuhn-Tucker (KKT) conditions (e.g. see (Avriel, 1976), p.96) for WD(S)
take the form
where we have made use of the relation
There are three regimes for \alpha_i: two where it equals a bound, and one where it falls between
the bounds. Combining the conditions above with these three regimes we obtain a simpler set
of conditions that are equivalent to the KKT conditions
It is possible to use the satisfaction of these equations as a stopping condition for optimization
algorithms, but they involve . An alternative set of optimality conditions were introduced
in (Keerthi et al., 2001; Keerthi & Gilbert, 2000) that do not use . In the next section we
present these conditions and use them to develop a simple optimality test.
3 Tests for Optimality using Certifying Pairs
We dene a partition of the index set of S based upon the data
I low
I
I
and
low g
and let
i2I low
i2I high
where the sup and inf of the empty set are defined as -\infty and +\infty respectively.
Denition 1. is properly ordered for S if jV int
low v
high
or jV int
low V int v
We now prove a result rst stated by Keerthi and Gilbert (Keerthi & Gilbert, 2000).
Theorem 1. (Keerthi and Gilbert)
A feasible for the Wolfe dual problem WD(S) is optimal if and only if is properly
ordered for S.
Proof. The optimality conditions (7) can be rewritten as
low
Now suppose that is optimal. Then equations (12) imply that
low
low
The rst equation implies that jV int and the second equation implies that v
low v
high .
When jV int the second and third equations imply that
low V int v
high
and so is properly ordered. On the other hand, suppose is properly ordered. Then jV int
By the denitions of v
low and v
high it is clear that
low
low
and we can choose to be any point in [v
low
high
so that the conditions (12) are satised. Consequently, is optimal if and only if it
is properly ordered for S.
Tests for proper ordering can be simplified if we define
    \tilde{I}_{low} = I_{low} \cup I_{int},    \tilde{I}_{high} = I_{high} \cup I_{int},
and
    \tilde{v}_{low} = \sup_{i \in \tilde{I}_{low}} v_i,    \tilde{v}_{high} = \inf_{i \in \tilde{I}_{high}} v_i.
Then \alpha is properly ordered for WD(S) if and only if \tilde{v}_{low} \leq \tilde{v}_{high}.
The proof of this statement follows directly from the proof of Theorem 1.
Lack of optimality can be determined by the existence of a certifying pair.
Definition 2. A certifying pair for \alpha \in \Lambda(S) is a pair of indices i and j in the index set of S
whose values are sufficient to prove that \alpha is not properly ordered for
S.
We note that Keerthi et al. (Keerthi & Gilbert, 2000) refer to this as a violating pair.
However, because we later define a rate certifying pair we decided not to adopt this terminology.
Theorem 2. \alpha is not properly ordered for S if and only if there exists a certifying pair. A
certifying pair can be obtained by making at most one pass through the data while making two
comparisons.
Proof. Suppose that \alpha is not properly ordered for S. Then there exist indices i \in \tilde{I}_{high} and
j \in \tilde{I}_{low} such that v_i < v_j. Choose any such pair. To determine a certifying pair make one pass
through the data while keeping track of indices that represent \tilde{v}_{high} and \tilde{v}_{low}. Stop at the first
point where \tilde{v}_{high} < \tilde{v}_{low}.
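A sketch of this single-pass optimality test as code. Because the display defining v_i and the index sets is garbled above, the concrete membership rules below (based on y_i and whether alpha_i sits at a bound) are taken from the standard Keerthi-Gilbert formulation that the paper follows, and should be checked against the original; the routine returns a certifying pair or None.

```python
import numpy as np

def certifying_pair(v, y, alpha, C):
    """Return a certifying pair (i, j) with v_i < v_j, i in I~_high and j in I~_low,
    or None if alpha is properly ordered (hence optimal).
    Index-set membership follows the usual Keerthi-Gilbert conventions."""
    in_high = ((y > 0) & (alpha < C)) | ((y < 0) & (alpha > 0))
    in_low  = ((y > 0) & (alpha > 0)) | ((y < 0) & (alpha < C))
    if not in_high.any() or not in_low.any():
        return None
    i = np.where(in_high)[0][np.argmin(v[in_high])]   # attains v~_high
    j = np.where(in_low)[0][np.argmax(v[in_low])]     # attains v~_low
    return (i, j) if v[i] < v[j] else None
```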
4 A General Decomposition Algorithm
Algorithmic solutions for the Wolfe dual must consider the fact that when m is large the storage
requirements for Q can be excessive. This barrier can be overcome by decomposing the original
QP problem into smaller QP problems.
Suppose we partition the index set of \alpha into a working set W and a non-working set W^c.
Note that W indexes a subset of the data. Then \alpha and the data are
partitioned accordingly and Q is partitioned as follows:
    Q = [ Q_{WW}    Q_{WW^c}  ]
        [ Q_{W^cW}  Q_{W^cW^c} ].
Then (3) can be written in terms of \alpha_W and \alpha_{W^c}. With \alpha_{W^c}
fixed this becomes a QP problem of size dim(W) with the same generic properties
as the original. This motivates algorithmic strategies that solve a sequence of QP problems
over different working sets. The key is to select a working set at each step that will guarantee
progress toward the original problem solution.
Theorem 3. Consider the subset constrained Wolfe dual problem defined as follows. Consider
a feasible \alpha. Define a subset W of the index space of S with complement W^c. Optimize the
Wolfe dual criterion over the components in W, subject to the constraint that the solution agrees with \alpha on W^c. Let
\alpha^* denote a solution to this constrained problem. Then R(\alpha^*) > R(\alpha) if and only if W contains
a certifying pair for \alpha.
Proof. Since R is concave, is non-optimal for WD(S) if and only if there is a feasible innitesimal
_
at such that
> 0: (16)
Further, the solution to the constrained Wolfe dual produces an increase in R if and only if there
is a feasible constrained _
(with nontrivial components on W only) such that dR() _
Consequently, to prove the theorem it is su-cient to show that a feasible _
W exists that satises
if and only if W contains a certifying pair.
The derivative of R is given by
_
. The feasible directions _
satisfy _
when In terms of d these conditions become d high , and
low . Decompose
components under the subsets dened by I high , I low , and I int . Then (16) can be written
d high v high low v low
and the feasibility constraints are
d high 1 low low 0; d int free: (18)
Assume that W contains a certifying pair. Then it must satisfy one of the following inequalities,
low
low
In all four cases we can verify (17)-(18) by choosing d for the certifying pair and
so that
The proof of \if" is nished.
Now assume that there is a feasible _
W for which dR() _
W > 0. Then (17)-(18) are
be the restrictions of V int (I int ) to the indices of W . If
jV int (W )j > 1 then any two components certifying
pair. If jV int (W
d high v high low v low
Combining with (18) gives
d high (v high v low (v low v 1) < 0; d high 0; d low 0
For this inequality to hold at least one of the two terms must be negative. To make the rst
term negative at least one component of (v high v 1) must be negative. Similarly, to make
the second term negative at least one component of (v low v 1) must be positive. Either case
gives a certifying pair. Finally, if jV int (W
d high v high low v low < 0
d high low 1; d high 0; d low 0
Without loss of generality let the components of d high and d low be normalized so that
i2I high
i2I low
Then (d high v high low v low ) is the dierence between convex combinations of V high (W ) and
convex combinations of V low (W ). For this dierence to be negative the two convex hulls must
overlap. This implies a certifying pair. This nishes the \only if" part, so the proof is nished.
Theorem 3 motivates a class of algorithms of the form Algorithm A 1 below. Members from
this class solve a sequence of decomposed QP problems of the form in (15) over working sets
that can vary in size from 2 to |S| and contain at least one certifying pair. The initialization
ensures that W(0) contains at least one certifying pair. The QPSolve routine on line 11 solves
the QP problem restricted to the current working set W(k - 1). Line 14 chooses a certifying
pair for inclusion in the next working set. The algorithm terminates when a certifying pair no
longer exists. The AnySubset routine chooses a subset of samples to be included with
the certifying pair in the next working set. This subset is irrelevant to the issue of guaranteed
improvement, but is likely to have an effect on the rate of convergence.
Algorithm Decomposition Algorithm.
1:
2:
3:
4: OUTPUT:
5:
7:
8: ~
I low
9: ~
I
10: W (0) subset of I S with at least one sample from each class:
12: loop
13:
14: Update membership in ~
I low ; ~
I high for samples in W (k 1)
I low
I high ; and v i > v j
17: if
19: end if
22: end loop
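A skeleton of Algorithm A1 as Python. The restricted QP solver and the AnySubset rule are left as injectable callables because the paper deliberately leaves them open; the certifying-pair routine is the one sketched after Theorem 2; and the score formula v = y*(Q@alpha - 1), which matches the Keerthi-Gilbert quantities for this parameterization, is my assumption rather than something recoverable from the garbled displays.

```python
import numpy as np

def decomposition(Q, y, C, qp_solve, any_subset, find_certifying_pair, max_iter=10_000):
    """Model decomposition loop (Algorithm A1): while a certifying pair exists,
    re-optimize the Wolfe dual over a working set containing that pair,
    keeping the remaining coordinates of alpha fixed."""
    m = Q.shape[0]
    alpha = np.zeros(m)
    for _ in range(max_iter):
        v = y * (Q @ alpha - 1.0)              # assumed form of the optimality scores
        pair = find_certifying_pair(v, y, alpha, C)
        if pair is None:
            break                              # alpha is properly ordered, hence optimal
        working = sorted(set(pair) | set(any_subset(alpha, v)))
        alpha = qp_solve(Q, y, C, alpha, working)   # exact solve restricted to the working set
    return alpha
```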
5 Convergence
In general, the stepwise improvement of Algorithm A 1 is not sufficient to guarantee convergence.
Indeed, Keerthi and Ong (Keerthi & Ong, 2000) provide an example where each working set
contains a certifying pair but Algorithm A 1 does not converge to the optimal solution. However,
convergence results have been proved for some special cases, e.g. see (Keerthi & Gilbert, 2000),
(Chang, Hsu, & Lin, 2000), (Lin, 2000). The convergence result in (Keerthi & Gilbert, 2000)
denes to be -optimal if it satises ~ v
low < ~
high It then shows that the
generalized SMO (GSMO) algorithm converges to a -optimal solution in a nite number of
steps. The GSMO algorithm is a special case of Algorithm A 1 where the AnySubset function
returns the empty set. The analysis in (Keerthi & Gilbert, 2000) leaves open the question of
accuracy with respect to the optimal solution, that is it provides no bound on jR( ) R j or
(Chang et al., 2000) give a proof of convergence for a special case of Algorithm A 1 where
the working set is dened to be the indices corresponding to the nontrivial components of d in
e
the solution to the optimization problem
s.t. d
where q 2. Their proof shows that, with this choice of working set, Algorithm A 1 produces
a sequence f(k)g whose limit point is optimal for WD(S). More recently (Lin, 2000) has
provided a similar proof of convergence for SV M light where the working set is dened by
Joachims (Joachims, 1998) to be the indices corresponding to the nontrivial components of d
in the solution to a slightly dierent optimization problem
s.t. d
where q 2.
The analysis in (Chang et al., 2000) and (Lin, 2000) is asymptotic and therefore leaves open
the question of nite step convergence to the optimum. In the following section we provide a
nite step convergence proof for a special case of Algorithm A 1 that corresponds to \chunking".
5.1 Finite Step Convergence for Chunking
Chunking (as described in (Cristianini & Shawe-Taylor, 2000)) is a decomposition method in
which each working set contains all support vectors from the current solution plus an additional
set of samples that violate an \optimality condition". If the optimality condition is chosen so
that the additional set always contains at least one certifying pair 1 then the resulting algorithm
takes the form of Algorithm A 1 where the AnySubset routine returns, at a minimum, the indices
for all samples with i > 0. The following theorem holds for this class of chunking algorithms.
Theorem 4. Let S be a nite set of observations containing at least one sample from each
class. Consider Algorithm A 1 where the AnySubset routine returns any set that contains the
indices for all samples with i > 0. This algorithm converges to a solution of WD(S) for nite
k.
Proof. Algorithm A 1 terminates only when there are no certifying pairs, and if it terminates
then 2 (S). We assume that QPSolve provides an exact solution to the constrained Wolfe
dual. Then Theorem 3 guarantees that when we are not at a solution the criterion for WD(S)
is strictly increased from one step to the next, i.e. R((k
all nontrivial contribution to R is made by the working set. Thus, no working set is revisited,
and since there are a nite number of working sets, and R is unique, termination in nite k is
guaranteed.
^1 This requires a slight modification to the chunking algorithm in (Cristianini & Shawe-Taylor, 2000).
We now show that with the proper choice of certifying pair we can provide polynomial-time
bounds on the run time of Algorithm A 1 .
5.2 Convergence Rate
In this section we give a finite step convergence result for Algorithm A 1 when each working
set contains a rate certifying pair (defined below). We also provide bounds on the convergence
rate. More specifically we give a polynomial bound on the number of iterations required to
drive |R(\alpha) - R^*| to within \epsilon of its optimum. Note that the criterion has a strong dependence
on the size of the sample set m. In general R^* becomes unbounded as m \to \infty. Consequently
the development of convergence rates requires the normalization of R in terms of the number
of samples. For example, in empirical risk minimization it is standard to divide the number of
training errors by the number of samples to obtain the fraction of training errors. However at
present we know of no natural normalization for R. Therefore to allow for the incorporation
of an appropriate normalization we implicitly denote the error tolerance as a function of m
through the notation \epsilon_m.
Let \alpha^* be an optimal parameter value and R^* the optimal criterion value.
Because of concavity,
which can be rewritten as
If we dene
we obtain
Let
denote a parameter value which differs from \alpha in at most two places and define
When () () for some 0 < < 1 then we can bound the distance to the optimum by
()=. We use this to determine a bound on the convergence rate for Algorithm A 1 .
Let k denote the value of the state at the k-th iteration and let
k denote a parameter that
diers from k in at most two indices. We note that in previous sections the subscripted k was
used for the k-th component of the vector and the parenthetic (k) was used for the state of
the algorithm at the k-th iteration. However, in the present analysis we need no components
of the vector and feel the use of k for the state at the k-th iteration is a better notation for
this section. Let R
e
and
Denition 3. Algorithm A 1 is a rate certifying algorithm if there exists an such that the
certifying pair chosen on line 14 satises
for all k. A rate certifying pair is a pair of indices in the index set of S for which
at iteration k of a rate certifying algorithm.
Chang, et. al. (Chang et al., 2000) establish a relationship of this type for a particular
choice of rate certifying pair with
use it to prove asymptotic convergence. The
following theorem gives a bound on the number of iterations that are su-cient to drive the
criterion to within m of its optimum for a rate certifying algorithm.
Theorem 5. Let (k) denote the sequence of states generated by Algorithm A 1 . If it is a rate
certifying algorithm then R
BL
iterations, where
(R R( 0
and L is the maximum of the norms of the 2 by 2 matrices determined by restricting Q to
indices. In words, if we wish to get an accuracy of m , then it is su-cient to performq
BL
Proof. Let fi; jg W (k) denote the indices of a rate certifying pair in the working set such
that
Following (Dunn, 1979) we consider the following auxiliary equations. Let
k dier from
k in the two indices
dr k (
e
we have
which can be written
We show by induction that k B k as follows.
We now control k . Plugging the denition of ! k in equation (26) into equation (27) for k
we obtain
In the latter case j
Putting the two equations from (28) together we obtain
where
since k Therefore, by (Dunn, 1979) equations (29) and (30) imply that
but going back through the relations
L and k B k implies
Consequently, when
BL
then
and
The proof is finished.
5.3 Efficient Computation of a Rate Certifying Pair
In the previous section we determined that k k is su-cient to establish
(Chang et al., 2000) show that a certifying pair always exists such that
They
do this by considering the solution to a linear programming (LP) problem (similar to the LP
problem for k ), and then restricting this solution to two indices. In this section we show how
to solve this LP to produce a rate certifying pair in O(m log m) operations.
Let \alpha be the current solution and define
Let be the solution to the linear program
s.t.
Note that the solution to this problem and (19) are related by . As in section 3,
dene
~
I
low
I
low
I
I
high
I
and choose
low
high
From (Chang et al., 2000) we know that the certifying pair (i; j) given by
is a rate certifying pair with rate
. The following lemma establishes that this pair can
be determined in a computationally e-cient manner.
Lemma 1. Given y, v, and \alpha, the rate certifying pair (i, j) defined by (32)-(33)
can be computed in O(m log m) time.
Proof. We describe an algorithm that computes this pair in O(m log m) time. Our algorithm
solves the LP in (31) and then computes the two indices using (32)-(33). Once the LP is solved
it is straightforward to implement (32)-(33) in O(m) steps, so we describe only the LP solution.
Consider the LP in (31). Recall that dR() . The Karush-Kuhn-Tucker conditions
for the solution are
with i 0, i 0 and These equations can be written
high
low
int
where
I
low
I
I
e
To solve these equations, x and determine to satisfy
high
low v i 0:
For example, if v i > , then set To determine we
use the constraint
Written out this becomes
i2I
low
| {z }
i2I
high
| {z }
i2I
int
Our strategy is to choose so that it splits the samples into I
low and I
high in such a way that
the rst and second sums cancel as closely as possible. When they do not cancel exactly we
shift so that the split occurs on a value v i , thereby placing samples with this value into I
int
and allowing us to choose their parameters i to satisfy the equality. More specically we sort
the values of v in increasing order and use k to index the sorted list (i.e. v k v k+1 ). As
increases from 1 to 1, jumping over values where being determined as above,
the value of y is monotonically increasing and must pass from negative to positive. In fact it
is easy to see that y increases by C each time an individual sample is jumped. Suppose that
this increasing function achieves the value 0 on the interval (v k ; v k+1 ). Then we let be any
value in this interval and since I
int is empty and was chosen to satisfy (35) we have a solution.
Suppose this increasing function skips the value 0 and jumps from a < 0 to b > 0 at
and there are a total of M 1 samples with this value of v (i.e.
Then set place the rst of these samples in I
low (the rest remain in
I
high ). If a=C is integral then this gives and we have a solution once again (with
M of the samples satisfying (35) with equality and I
before). If a=C is not integral
then its remainder is used to determine k+M 1 , the component of corresponding to v k+M 1 .
This gives and places this sample in I
int , and again we have a solution. Note that
there are many solutions to these equations. This construction gives and , both of which
are necessary to implement (32)-(33). It takes O(m log m) steps to sort the v, followed by an
additional pass through the list to initialize , placing all samples in I
high and yielding (0) y.
Since y begins at (0) y and increases by C each time is increased past a data point, the
components of for all the points up to k
C c are changed by C placing them in I
low .
Then, if (0)y
C is not integral its remainder is used to determine the component of for the
which is moved to I
int . Updating in this way requires at most one complete
pass through the list. This completes the proof.
Algorithm 5.3 computes a rate certifying pair using the method described in the proof
above. In addition to the sort, this algorithm makes a total of four passes through the list.
The number of computations in this procedure can sometimes be reduced. Let i, j be a rate
certifying pair. Then v_i and v_j are on opposite sides of the threshold, and since i, j is also a certifying
pair, the threshold
must lie between \tilde{v}_{high} and \tilde{v}_{low} (defined in (13) and (14)). This means that the sorting
operation required in our search for the threshold can be restricted to the v_i in this interval. Since the
sorting operation dominates the run time this can lead to a substantial savings when the number
of samples in this interval is small.
Algorithm Certifying Pair Algorithm.
INPUTS: y, v, and (at the current iteration)
fsample indices for a rate certifying pairg
fL is an ordered list of indices in nondecreasing order of fv i g so that v L(l) v L(l+1) g
finitially place all samples in I
high and compute (0) yg
do
if (y
else if (y
end for
fdetermine split point index and move samples into I
low g
l bEtaDotY=Cc
for l do
end for
fif needed, move sample into I
int g
if (EtaDotY <
l l
else
value in [v L(l use v L(i
fdetermine indices for rate certifying pairg
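Because the listing above is partly garbled, here is a brute-force stand-in that hands the linear program (31) to an off-the-shelf solver instead of using the paper's O(m log m) sorting construction. The gradient formula grad = 1 - Q@alpha follows from the dual criterion R(alpha) = sum(alpha) - 0.5 alpha'Q alpha, and the feasible set {0 <= alpha_hat <= C, y.alpha_hat = 0} is assumed from the standard dual; extracting the actual two-index rate certifying pair via (32)-(33) is not reproduced here. Requires SciPy.

```python
import numpy as np
from scipy.optimize import linprog

def lp_direction(Q, y, alpha, C):
    """Solve the LP (31): maximize grad R(alpha) . alpha_hat over the feasible set
    {0 <= alpha_hat <= C, y . alpha_hat = 0}.  Returns alpha_hat and the gain
    grad R(alpha) . (alpha_hat - alpha).  The paper's Algorithm A2 obtains the
    same LP solution in O(m log m) by sorting (Lemma 1), and then reads a rate
    certifying pair off alpha_hat via (32)-(33)."""
    grad = 1.0 - Q @ alpha                    # gradient of R(alpha)
    m = len(alpha)
    res = linprog(-grad,                      # linprog minimizes, so negate
                  A_eq=y.reshape(1, -1), b_eq=[0.0],
                  bounds=[(0.0, C)] * m, method="highs")
    alpha_hat = res.x
    return alpha_hat, grad @ (alpha_hat - alpha)
```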
5.4 Summary of Rates
If we use Algorithm A 2 to choose a rate certifying pair then
2 and by theorem 5 Algorithm
A 1 will drive the criterion to within m of its optimum in no more than
iterations. Further, with so that
neglecting lower order
terms, the number of iterations simplies to
In the case where the working sets are of size two we can use this result to establish a worst
case overall run time for Algorithm A 1 . At each iteration we must solve a 2 by 2 QP problem,
update the v i (k), and determine the next certifying pair. The time to solve the 2 by 2 QP
problem is a constant, and it takes order m operations to update the v i (k). If we add m log m
operations to determine the certifying pair, the worst case run time is of order
Now consider our choice for m obtained through an appropriate normalization of R (see discussion
at the beginning of this section). Because R tends to increase with m, m will be an
increasing function of m. Although the form of this function is not yet known it will clearly
improve the run-time bounds presented above. For example, if then the order of the
polynomial in these bounds is reduced by p.
6 Discussion
This paper considers a class of algorithms for support vector machines that decompose the
original Wolfe Dual QP problem into a sequence of smaller QP problems dened on subsets of
the data. Following the work of Keerthi et al. (Keerthi & Gilbert, 2000; Keerthi et al., 2001)
we provide a scalar condition that is necessary and su-cient for optimality of the QP problem.
This leads naturally to the introduction of certifying pairs as a necessary and su-cient condition
for stepwise improvement, and motivates the use of Algorithm A 1 as a model algorithm for this
problem. By leveraging the results of Chang, et al. (Chang et al., 2000) we have developed
Algorithm A 2 for selecting the certifying pair in Algorithm A 1 . Theorem 5 shows that the
number of iterations for this instantiation of Algorithm A 1 is O(m 4 ) and the overall run time
is O(m 5 log m).
Many existing SVM algorithms are either special cases of Algorithm A 1 or can be made
so through slight modification. For example, Platt's Sequential Minimal Optimization (SMO)
algorithm, which chooses working sets of size two, is designed to choose a pair that gives a strict
increase in R at each step (Platt, 1998). The original algorithm, however, contains a flaw that
aw that
can lead to improper behavior (Keerthi et al., 2001; Keerthi & Gilbert, 2000). This behavior
can be traced to its inability to guarantee a certifying pair in each working set. By forcing
each working set to contain a certifying pair the corrected algorithm not only has guaranteed
convergence, but also improved performance (Keerthi et al., 2001).
The SVM^{light} algorithm in (Joachims, 1998) uses a modification of Zoutendijk's method
(Zoutendijk, 1970) to choose working sets of size q \geq 2. This choice can be shown to contain
the q/2 largest v_i from \tilde{I}_{low} and the q/2 smallest v_i from \tilde{I}_{high}, thus guaranteeing at least one
certifying pair.
The chunking algorithm described in (Cristianini & Shawe-Taylor, 2000) and the decomposition
algorithm of (Osuna et al., 1997) both attempt to ensure improvement in R by choosing
working sets that include support vectors from the current solution plus a subset of samples
that violate an "optimality condition" with respect to this solution. A strict implementation of
the algorithms described in these papers can lead to undesirable behavior because they cannot
guarantee a certifying pair in their working sets. However, such a guarantee can be achieved
with a slight modification (as we did for the chunking algorithm in section 5.1).
It is not clear that the algorithms above satisfy the rate certifying condition in Definition 3,
nor that this is necessary to establish rates for them. We have described a new SVM algorithm
that satisfies the rate certifying condition and has polynomial-time rates. It is not yet clear how
this algorithm will compare with existing algorithms in practice. Note that Keerthi's GSMO
algorithm (Keerthi et al., 2001) and Joachims's SVM^{light} algorithm (Joachims, 1998) require
O(m) time to determine a certifying pair while A 2 requires O(m log m) time. However, we
know of no bounds on the rates of convergence for GSMO and SV M light (although they seem
to work well in practice), but can guarantee a polynomial convergence rate when we use A 2 .
Finally we note that the polynomial-time bound on the number of iterations scales as
m^4, which is unattractive. We leave open the issue of the tightness of this bound, although
we suspect that it may be loose. A closely related issue is the determination of a proper
normalization for R that would give rise to an explicit functional dependence of \epsilon_m on m. This
is likely to improve the rate.
--R
Nonlinear Programming: Analysis and Methods (1st edition).
The analysis of decomposition methods for support vector machines.
An Introduction to Support Vector Machines and Other Kernel-based Learning Methods (1st edition).
Rates of convergence for conditional gradient algorithms near singular and non-singular extremals.
Convergence of a generalized SMO algorithm for SVM classifier design.
Improvements to Platt's SMO algorithm for SVM classifier design.
On the convergence of the decomposition method for support vector machines.
Support vector machines: training and applications.
Fast training of support vector machines using sequential minimal optimization.
Statistical Learning Theory.
Methods of Feasible Directions: A Study in Linear and Non-linear Programming.
--TR
--CTR
Hong Qiao , Yan-Guo Wang , Bo Zhang, A simple decomposition algorithm for support vector machines with polynomial-time convergence, Pattern Recognition, v.40 n.9, p.2543-2549, September, 2007
Tobias Glasmachers , Christian Igel, Maximum-Gain Working Set Selection for SVMs, The Journal of Machine Learning Research, 7, p.1437-1466, 12/1/2006
Rong-En Fan , Pai-Hsuen Chen , Chih-Jen Lin, Working Set Selection Using Second Order Information for Training Support Vector Machines, The Journal of Machine Learning Research, 6, p.1889-1918, 12/1/2005
Thorsten Joachims, Training linear SVMs in linear time, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Nikolas List , Hans Ulrich Simon, General Polynomial Time Decomposition Algorithms, The Journal of Machine Learning Research, 8, p.303-321, 5/1/2007
Don Hush , Patrick Kelly , Clint Scovel , Ingo Steinwart, QP Algorithms with Guaranteed Accuracy and Run Time for Support Vector Machines, The Journal of Machine Learning Research, 7, p.733-769, 12/1/2006
Cheng-Ru Lin , Ken-Hao Liu , Ming-Syan Chen, Dual Clustering: Integrating Data Clustering over Optimization and Constraint Domains, IEEE Transactions on Knowledge and Data Engineering, v.17 n.5, p.628-637, May 2005
Luca Zanni , Thomas Serafini , Gaetano Zanghirati, Parallel Software for Training Large Scale Support Vector Machines on Multiprocessor Systems, The Journal of Machine Learning Research, 7, p.1467-1492, 12/1/2006 | support vector machines;decomposition algorithms;polynomial-time algorithms |
608356 | Scenario Reduction Algorithms in Stochastic Programming. | We consider convex stochastic programs with an (approximate) initial probability distribution P having finite support supp P, i.e., finitely many scenarios. The behaviour of such stochastic programs is stable with respect to perturbations of P measured in terms of a Fortet-Mourier probability metric. The problem of optimal scenario reduction consists in determining a probability measure that is supported by a subset of supp P of prescribed cardinality and is closest to P in terms of such a probability metric. Two new versions of forward and backward type algorithms are presented for computing such optimally reduced probability measures approximately. Compared to earlier versions, the computational performance (accuracy, running time) of the new algorithms has been improved considerably. Numerical experience is reported for different instances of scenario trees with computable optimal lower bounds. The test examples also include a ternary scenario tree representing the weekly electrical load process in a power management model. | Introduction
Many stochastic decision problems may be formulated as convex stochastic programs of the form

min { ∫_Ω f_0(ω, x) P(dω) : x ∈ X },    (1)

where X ⊆ R^m is a given nonempty closed convex set, Ω a closed subset of R^s, the function f_0 from Ω × R^m to R is continuous with respect to ω and convex with respect to x, and P is a fixed Borel probability measure on Ω, i.e., P ∈ P(Ω). For instance, this formulation covers (convex) two- and multi-stage stochastic programs with recourse.
Typical integrands f_0(·, x), x ∈ X, in convex stochastic programming problems are nondifferentiable, but locally Lipschitz continuous on Ω. In the following, we assume that there exist a continuous and nondecreasing function h: R_+ → R_+ and some fixed element ω_0 ∈ Ω ⊆ R^s such that

|f_0(ω, x) − f_0(ω̃, x)| ≤ c(ω, ω̃)    (2)

for each x ∈ X and ω, ω̃ ∈ Ω, where the function c: Ω × Ω → R is given by

c(ω, ω̃) := max{1, h(‖ω − ω_0‖), h(‖ω̃ − ω_0‖)} ‖ω − ω̃‖.    (3)

This means that the function h(‖· − ω_0‖) describes the growth of the local Lipschitz constants of f_0(·, x) in balls around ω_0 with respect to some norm ‖·‖ on R^s. An important particular case is that the function h grows polynomially. For instance, it is shown in [11] that such a choice is appropriate for two-stage models with stochasticity entering prices and right-hand sides.
It is shown in [4,11] that the model (1) behaves stably with respect to small perturbations of P measured in terms of the probability metric

ζ_c(P, Q) := sup { | ∫_Ω f dP − ∫_Ω f dQ | : f ∈ F_c },    (4)

where F_c is the class of continuous functions defined by

F_c := { f: Ω → R : |f(ω) − f(ω̃)| ≤ c(ω, ω̃) for all ω, ω̃ ∈ Ω },

and probability measures P and Q in the set P_c(Ω) of all probability measures on Ω with finite c-moments (see also the earlier work in [13]). The distance ζ_c is a probability metric on P_c(Ω) with ζ-structure and is also called a Fortet-Mourier (type) metric. In this generality, it is introduced in [14] and further studied in [10,12]. In particular, the metric ζ_c has dual representations in terms of the Kantorovich-Rubinstein functional (cf. Section 5.3 in [10] and [12]).
An important instance is that the initial probability measure P is itself discrete with finitely many atoms (or scenarios) or that a good discrete approximation of P is available. Its support may be very large so that, for reasons of computational complexity and time limitation, this probability measure is further approximated by a probability measure Q carried by a (much) smaller subset of scenarios. In this case the distance ζ_c(P, Q) represents the optimal value of a finite-dimensional linear program. More precisely, from (4) we obtain for P = Σ_{i=1}^N p_i δ_{ω_i} and Q = Σ_{j∉J} q_j δ_{ω_j} that ζ_c(P, Q) is the optimal value of a finite-dimensional linear program, where J ⊂ {1, ..., N} and δ_ω ∈ P(Ω) denotes the Dirac measure placing unit mass at ω. In particular, the metric ζ_c can be used to evaluate distances of specific probability measures obtained during a scenario-reduction process.
Various reduction rules appear in the literature in the context of recent large-scale real-life applications. We refer to the corresponding discussion in [4], to the recent work [3] on scenario generation and reduction, and to the paper [9], in which an approach to scenario generation based on Fortet-Mourier distances is given.
In the present paper, we follow the approach for reducing scenarios of a given discrete probability measure P = Σ_{i=1}^N p_i δ_{ω_i} developed in [4]. It consists in determining an index set J of given cardinality #J = N − n and a probability measure Q supported by the remaining scenarios ω_j, j ∉ J, such that ζ_c(P, Q) is minimal.    (7)
Problem (7) may be reformulated as a mixed-integer program. In Section 2 we derive bounds for (7) and develop two new algorithms (fast forward selection and simultaneous backward reduction), which constitute heuristics for solving (7). We study their complexity and their relations to the algorithms in [4]. Indeed, the fast forward selection algorithm turns out to be an efficient implementation of the forward selection procedure of [4], producing the same reduced probability measures. In order to compare the performance of the algorithms we provide, in Section 3, explicit formulas for the minimal distances (7) in case that P is a regular (binary or ternary) scenario tree (i.e., a tree having a specific structure) and Q is a reduced tree with fixed cardinality n. In Section 4 we report on numerical experience for the reduction of regular binary and ternary scenario trees. The test trees also include the ternary scenario tree representing the weekly electrical load process in a power management model which was considered in [4]. It turns out that the new implementation of the fast forward selection algorithm is about 10-100 times faster than the earlier version. Furthermore, fast forward selection is the best algorithm when comparing accuracy. The results of the simultaneous backward reduction algorithm are more accurate than the backward reduction variant of [4] in most cases, but at the expense of higher running times. When comparing running times, fast forward selection (simultaneous backward reduction) is preferable in case that n < N/4 (n ≥ N/4, respectively).
2 Scenario reduction
We consider the stochastic program (1) and select the function c of form (3) such that the Lipschitz condition (2) is satisfied. Let the initial probability distribution P be discrete and carried by finitely many scenarios ω_i with weights p_i, i = 1, ..., N, and consider the probability measure

Q := Σ_{j∉J} q_j δ_{ω_j},

i.e., compared to P, the measure Q is reduced by deleting all scenarios ω_j, j ∈ J, and by assigning new probabilistic weights q_j to each scenario ω_j, j ∉ J. The optimal reduction concept described above recommends to consider the probability distance

D(J; q) := ζ_c(P, Σ_{j∉J} q_j δ_{ω_j})    (8)

depending on the index set J and the weights q. The optimal reduction concept (7) says that the index set J and the optimal weights q are selected such that D(J; q) is minimal subject to #J = N − n and q being a probability weight vector on {j : j ∉ J}. First we recall the following bounds for min_q D(J; q) when the index set J is fixed ([4], Theorem 3.1).
Theorem 2.1 (redistribution)
For any index set J ⊂ {1, ..., N}, min_q D(J; q) is bounded above and below in terms of

Σ_{i∈J} p_i min_{j∉J} c(ω_i, ω_j),    (9)

where the constant C := max{...} entering the upper bound is determined by the costs, and the weights

q_j := p_j + Σ_{i∈J: j(i)=j} p_i,  j ∉ J,  with  j(i) ∈ arg min_{j∉J} c(ω_i, ω_j),    (10)

define a feasible reduced measure. Furthermore, we have equality in (9) and, hence, optimality of q given by (10) if h ≡ 1.
For convenience of the reader the proof of Theorem 2.1 is displayed in the Appendix. The interpretation of formula (10) is that the new probability of a preserved scenario is equal to the sum of its former probability and of all probabilities of deleted scenarios that are closest to it with respect to c. If h ≡ 1, we call (10) the optimal redistribution rule.
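As a concrete illustration, the following Python sketch (ours, not part of the paper; the weight vector p, the N x N cost matrix dist of values c(ω_i, ω_j), and the index set J of deleted scenarios are assumed given) implements the redistribution rule described above.

```python
import numpy as np

def redistribute(p, dist, J):
    """Optimal redistribution rule (10): each deleted scenario i in J passes
    its probability p[i] to the kept scenario closest to it w.r.t. the cost c."""
    N = len(p)
    deleted = set(J)
    keep = [j for j in range(N) if j not in deleted]
    q = {j: p[j] for j in keep}
    for i in J:
        nearest = min(keep, key=lambda j: dist[i, j])
        q[nearest] += p[i]
    return q  # new weights of the preserved scenarios
```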
Next we discuss the optimal choice of an index set J for scenario reduction with fixed cardinality #J. Theorem 2.1 motivates to consider the following formulation of the optimal reduction problem for given n ∈ N, n < N:

min { D_J := Σ_{i∈J} p_i min_{j∉J} c(ω_i, ω_j) : J ⊂ {1, ..., N}, #J = N − n }.    (11)

Problem (11) means that the set {1, ..., N} has to be covered by two sets, J and its complement, such that J has fixed cardinality N − n and the cover has minimal cost D_J. Thus, (11) represents a set-covering problem. It may be formulated as a 0-1 integer program (cf. [7]) and is NP-hard. Since efficient solution algorithms are hardly available in general, we are looking for (fast) heuristic algorithms exploiting the structure of the costs D_J. In the specific cases #J = 1 and #J = N − 1, solving (11) becomes quite easy.
In case that #J = 1, the problem (11) takes the form

min_{l∈{1,...,N}} p_l min_{j≠l} c(ω_l, ω_j).    (12)

If the minimum is attained at l = l_1, i.e., the scenario ω_{l_1} is deleted, the redistribution rule (10) yields the probability distribution of the reduced measure Q. If j_1 ∈ arg min_{j≠l_1} c(ω_{l_1}, ω_j), it holds that q_{j_1} = p_{j_1} + p_{l_1} and q_j = p_j for all j ∉ {l_1, j_1}. Of course, the optimal deletion of a single scenario may be repeated recursively until a prescribed number N − n of scenarios is deleted. This strategy recommends a conceptual algorithm called backward reduction.
In case that #J = N − 1, the problem (11) is of the form

min_{u∈{1,...,N}} Σ_{j≠u} p_j c(ω_j, ω_u).    (13)

If the minimum is attained at u = u_1, only the scenario ω_{u_1} is kept and the redistribution rule (10) provides q_{u_1} = 1. This strategy provides the basic concept of a second conceptual algorithm called forward selection.
First, we take a closer look at the backward reduction strategy. A backward type algorithm was already suggested in [4,6]. It determines a reduced scenario set by deleting N − n scenarios from the original set of scenarios as follows. Let the indices l_i be selected successively according to formula (14), i.e., such that in each step the deleted scenario has minimal cost among the scenarios not yet deleted. It can be shown that the resulting quantity (15) is a lower bound of the optimal value of (11). Furthermore, it holds that the index set {l_1, ..., l_{N−n}} is a solution of (11) if, for each step, a certain set of minimizers is nonempty ([4,6]). This property motivates the following algorithm. In the first step, an index n_1 is determined using formula (14) such that J_1 := {l_1, ..., l_{n_1}} is a solution of (11) for n = N − n_1. Next, the redistribution rule of Theorem 2.1 is used. This leads to the reduced probability measure P_1 containing all scenarios indexed by {1, ..., N} \ J_1. Then P_1 is reduced by deleting all scenarios belonging to some index set J_2 with #J_2 = n_2, which is obtained in the same way using formula (14). This procedure is continued until, in step r, we have n_1 + ... + n_r = N − n. Finally, the redistribution rule (10) is used again for the index set J := J_1 ∪ ... ∪ J_r. This algorithm is called backward reduction of scenario sets. There are still many degrees of freedom to choose the next scenario in each step. Often there exist several candidates for deletion. In Section 4 we use one particular implementation of backward reduction of scenario sets.
Another particular variant consists in deleting a single scenario in each step, i.e., #J_i = 1 for all i. This variant (without the final redistribution) was already announced in [2,5]. However, numerical tests have shown that backward reduction of scenario sets provides slightly more accurate results compared to backward reduction of single scenarios.
Next we are going to present a new modification of the backward reduction principle. The major difference is to include all previously deleted scenarios into each backward step simultaneously. Namely, the next index l_i is determined as a solution of the optimization problem

min_{l ∉ J^[i−1]} Σ_{k ∈ J^[i−1] ∪ {l}} p_k min_{j ∉ J^[i−1] ∪ {l}} c(ω_k, ω_j),  where J^[i−1] := {l_1, ..., l_{i−1}}.    (16)

A more detailed description of the whole algorithm, which will be called simultaneous backward reduction, is given below.
Algorithm 2.2 (simultaneous backward reduction)
Step 1: Sort the costs {c(ω_k, ω_l)}; compute z^[1]_l := p_l min_{k≠l} c(ω_k, ω_l) for all l, choose l_1 ∈ arg min_{l∈{1,...,N}} z^[1]_l and set J^[1] := {l_1}.
Step i: For every l ∉ J^[i−1] compute z^[i]_l := Σ_{k ∈ J^[i−1] ∪ {l}} p_k min_{j ∉ J^[i−1] ∪ {l}} c(ω_k, ω_j); choose l_i ∈ arg min_{l ∉ J^[i−1]} z^[i]_l and set J^[i] := J^[i−1] ∪ {l_i}.
Step N−n+1: Redistribution by (10) with J := J^[N−n].
Algorithm 2.2 allows the following interpretation. Its first step corresponds to the optimal deletion of only one scenario. For i > 1, l_i is chosen such that

D_{J^[i−1] ∪ {l_i}} = min_{l ∉ J^[i−1]} D_{J^[i−1] ∪ {l}},    (17)

where D_{J^[i−1] ∪ {l}} is defined in (11). Hence, the index l_i is defined recursively such that the index set J^[i] := J^[i−1] ∪ {l_i} is optimal subject to the constraint that the previous indices are fixed.
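A direct, unoptimized Python sketch of this selection rule is given below; it follows (16) literally and ignores the sorting and cost bookkeeping that make Algorithm 2.2 efficient. The names p, dist and n are our own, not the paper's notation.

```python
import numpy as np

def simultaneous_backward_reduction(p, dist, n):
    """Delete N-n scenarios one at a time; in each step remove the scenario l
    whose deletion, together with all previously deleted ones, gives the
    smallest cost D_{J u {l}} from (11).  dist is the cost matrix c(w_i, w_j)."""
    N = len(p)
    J = []                                   # indices of deleted scenarios
    for _ in range(N - n):
        remaining = [l for l in range(N) if l not in J]
        best_l, best_cost = None, np.inf
        for l in remaining:
            kept = [j for j in remaining if j != l]
            # each deleted scenario is matched with its nearest kept scenario
            cost = sum(p[k] * min(dist[k, j] for j in kept) for k in J + [l])
            if cost < best_cost:
                best_l, best_cost = l, cost
        J.append(best_l)
    return J  # apply the redistribution rule (10) afterwards
```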
Since running times are important characteristics of scenario reduction algorithms, we study the computational complexity, i.e., the number of elementary arithmetic operations, of Algorithm 2.2. It is shown in [6] that a proper implementation (without sorting) of backward reduction of scenario sets requires a complexity of O(N^2) operations, which holds uniformly with respect to n. When comparing formulas (14) and (16), one notices an increase of complexity in the cost structure of (16) for determining l_i. More precisely, step i requires the computation of N − i + 1 sums, each consisting of i summands, and N − i comparisons. Each summand represents a product of two numbers, one of which requires about 2(N − i) operations for determining the minimum. The sorting process in step 1 requires O(N^2 log N) operations ([1], Chapter 1). When excluding the complexity of evaluating the function c and of the redistribution rule, altogether we obtain b_N(n) (see (18)) operations for selecting a subset of n scenarios, where b_N(n) is given in terms of a(N) := N^3 + O(N^2 log N). Hence, we have
Proposition 2.3 The computational complexity of reducing a set of N ∈ N scenarios to a subset containing n ∈ N (n < N) scenarios consists of b_N(n) (see (18)) operations when using simultaneous backward reduction.
Hence, the complexity of simultaneous backward reduction is increasing with decreasing n and is, of course, minimal at n = N − 1. This result corresponds to the running time observations of our numerical tests reported in Section 4.
Next, we describe a strategy that is just the opposite of backward reduction. Its conceptual idea is based on formula (13) and consists in the recursive selection of scenarios that will not be deleted. The basic concept of such an algorithm is given in [4] and called forward selection. Forward selection determines an index set {u_1, ..., u_n} such that, for i = 1, ..., n,

u_i ∈ arg min_{u ∉ {u_1,...,u_{i−1}}} Σ_{k ∉ {u_1,...,u_{i−1},u}} p_k min_{j ∈ {u_1,...,u_{i−1},u}} c(ω_k, ω_j).    (19)

The first step of this procedure coincides with solving problem (13). After the last step, the optimal redistribution rule has to be used to determine the reduced probability measure. Formula (19) allows the same interpretation as in the case of simultaneous backward reduction. It is again closely related to the structure of D_J in (11). Now, let us consider the following algorithm, which is easy to implement and is called fast forward selection.
Algorithm 2.4 (fast forward selection)
Step 1: c^[1]_{ku} := c(ω_k, ω_u) for all k, u; z^[1]_u := Σ_{k≠u} p_k c^[1]_{ku}; choose u_1 ∈ arg min_{u∈{1,...,N}} z^[1]_u and set U^[1] := {u_1}.
Step i: c^[i]_{ku} := min{ c^[i−1]_{ku}, c^[i−1]_{k u_{i−1}} } for k, u ∉ U^[i−1]; z^[i]_u := Σ_{k ∉ U^[i−1] ∪ {u}} p_k c^[i]_{ku}; choose u_i ∈ arg min_{u ∉ U^[i−1]} z^[i]_u and set U^[i] := U^[i−1] ∪ {u_i}.
Step n+1: Redistribution by (10) with J := {1, ..., N} \ U^[n].
Theorem 2.5 The index set {u_1, ..., u_n} determined by Algorithm 2.4 is a solution of the forward selection principle, i.e., u_i satisfies condition (19) for each i = 1, ..., n. Moreover,

z^[i]_{u_i} = D_{J^[i]}    (20)

holds for each i, where J^[i] := {1, ..., N} \ {u_1, ..., u_i} and D_{J^[i]} is defined in (11).
Proof: For i = 1 the result is immediate. For i > 1 it holds that

z^[i]_u = Σ_{k ∉ U^[i−1] ∪ {u}} p_k min_{j ∈ U^[i−1] ∪ {u}} c(ω_k, ω_j) = D_{J^[i−1] \ {u}}.

Hence, the index u_i satisfies condition (19) and it holds that z^[i]_{u_i} = D_{J^[i]}.
The conditions (17) and (20) show that both algorithms are based on the same basic idea for selecting the next (scenario) index. The only difference consists in the use of backward and forward strategies, respectively, i.e., in determining the sets of deleted and remaining scenarios, respectively.
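The following Python sketch (our illustration, not the authors' C implementation; p is the weight vector and dist the cost matrix) mirrors the forward selection idea together with the recursive cost update used in Algorithm 2.4.

```python
import numpy as np

def fast_forward_selection(p, dist, n):
    """Greedily select n scenarios: in each step keep the scenario u that
    minimizes sum_k p_k * (distance of k to the selected set plus {u}),
    using the update c[k, u] <- min(c[k, u], c[k, u_last])."""
    N = len(p)
    c = dist.astype(float).copy()
    remaining = list(range(N))
    selected = []
    for _ in range(n):
        if selected:                       # update costs w.r.t. last selection
            c = np.minimum(c, c[:, [selected[-1]]])
        z = [sum(p[k] * c[k, u] for k in remaining if k != u) for u in remaining]
        u_i = remaining[int(np.argmin(z))]
        selected.append(u_i)
        remaining.remove(u_i)
    return selected  # afterwards redistribute the deleted mass by rule (10)
```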
As in the case of backward reduction, the computational complexity of Algorithm 2.4 is of interest. Step i requires a certain number of operations for computing c^[i], for computing z^[i] and for determining u_i. Altogether, we obtain f_N(n) (see (21)) operations for selecting a subset of n scenarios. Hence, we have
Proposition 2.6 The computational complexity of fast forward selection for reducing a set of N ∈ N scenarios to a subset containing n ∈ N (n < N) scenarios consists of f_N(n) (see (21)) operations.
Hence, the complexity of fast forward selection is increasing with increasing n and is maximal if n is close to N. Thus, the use of fast forward selection is recommendable if the number n of remaining scenarios satisfies the condition f_N(n) ≤ b_N(n). The number n* such that f_N(n*) = b_N(n*) is a zero of a polynomial of degree 3 which depends nonlinearly on N. It turns out that n* ≈ N/4 for large N.
3 Minimal distances of scenario trees
All algorithms discussed in the previous section provide only approximate solutions of (11) in general. Since error estimates for these algorithms are not available, we need test examples of discrete original and reduced measures of different scale with known (optimal) ζ_c-distances. Because of their practical importance, we consider probability measures with scenarios exhibiting a tree structure. In particular, we derive optimal distances of certain regularly structured original scenario trees and of their reduced trees containing different numbers of scenarios.
We consider a scenario tree that represents a discrete parameter stochastic process with a parameter set {0, 1, ..., K} and with scenarios (or paths) branching at each parameter k ∈ {1, ..., K} with branching degree d (i.e., each node of the tree has d successors). In case of d = 2 and d = 3 the tree is called binary and ternary, respectively. Hence, the tree consists of N := d^K scenarios ω_i, i = 1, ..., N, and has a common root node. Furthermore, we let all scenarios have equal probabilities p_i := 1/N = d^{−K}. Such a scenario tree is called regular if, for each k ∈ {1, ..., K}, there exist symmetric sets V_k := {δ_k^1, ..., δ_k^d} ⊂ R that determine the kth components of the scenarios according to the branching structure (representation (22)), where a (K+1)-tuple of indices (i_0, i_1, ..., i_K) corresponds to each index i ∈ {1, ..., N}. In case of d = 2 (d = 3) this means that the sets V_k are of the form V_k = {−δ_k, δ_k} (V_k = {−δ_k, 0, δ_k}), respectively, for some δ_k ≥ 0. Clearly, it holds that δ_0 = 0 for such trees.
Figure 1: Binary scenario tree.
Figure 1 shows an example of a regular binary scenario tree. We specify the function c in (3) by setting h ≡ 1 and by choosing the maximum norm ‖·‖_∞ on R^{K+1}, i.e.,

c(ω, ω̃) = max_{k=0,...,K} |ω_k − ω̃_k|.

Our first result provides an explicit formula for the minimal distance between a regular binary tree and reduced subtrees with at least N/4 scenarios.
Proposition 3.1 (3/4-solution)
Let a regular binary scenario tree as above be given. Then the minimal distance D^min_n can be given explicitly for each n ∈ N with N/4 ≤ n < N.
Proof: We use the representation (22) of each scenario ! i for
Let the
corresponding (K + 1)-tuples of indices. Let l 2 Kg be such that
l 1. Then we obtain
(- r
jr )j
l
(- r
Hence, it holds for each J that
It remains to show that there exists an index set J such that #J
and that the lower bound is attained, i.e., D
. To this end, we
consider the index set
I := fi
I . Let T r denote the tree consisting of all
I . Figure 2 illustrates a detail of T r starting at a node
Figure 2: Detail of the subtree T_r.
at level k 0 1 and ending at level k 2. Hence, for the cardinality of I and
J we obtain that
4 and #J
Now we want to show that for each j 2 J , there exists an index i 2 I such
that be the related scenario. Let
us consider the behaviour of ! j on the branching levels k 2.
we have to distinguish three cases each for - k0
Case
Case
Case (3): - k0+1
Now, we consider the following (K
all k 62 fk
12 Heitsch, Romisch
denote the corresponding index. Clearly, i 2 I and,
consequently, it holds for the distance between ! i and ! j that
(- r
jr )j
(- r
jr )j
case (1)
case (2)
maxfj2- k0 j; j2- k0 2- k0+1 jg , in case (3)
The latter equation holds due to the assumption that - k0 maxf-
2- k0 . Hence, D
. By considering subsets of J having
cardinality in [1; 3
the result follows for the general case, too. 2
The second result provides a similar formula for the minimal distance between a regular ternary tree and reduced subtrees containing n ≥ (2/9)N scenarios.
Proposition 3.2 (7/9-solution)
Let a regular ternary scenario tree as above be given. Then the minimal distance D^min_n can be given explicitly for each n ∈ N with (2/9)N ≤ n < N.
Proof: Similarly as in Proposition 3.1 we obtain
for all
for each subset J of Again we have to show
that there exists an index set J such that #J and that the lower
bound N n
attained with D J . We consider the index set
I := fi
_ (- k0
denote the tree consisting of all
I . Figure 3 illustrates a detail of T r starting at a
Figure 3: Detail of the subtree T_r.
node at level k 0 1 and ending at level k 2. We obtain for the cardinality
of I and J that
9 3
9 N and #J
Similarly as in Proposition 3.1 it can be shown that for each j 2 J , there
exists an index i 2 I such that it holds that k!
9 - k0 . By considering subsets of J having cardinality in
9 N ], the result follows for the general case, too. 2
Similar results are available under additional assumptions in case of the Euclidean
norm instead of the maximum norm (see also [6]).
4 Numerical results
The aim of this section is to report on numerical experience of testing and comparing the algorithms described in Section 2, namely, backward reduction of scenario sets, simultaneous backward reduction, and fast forward selection. All algorithms were implemented in C. The test runs were performed on an HP 9000 (780/J280) Compute-Server with 180 MHz frequency and 768 MByte main memory under HP-UX 10.20, i.e., the same configuration as for the numerical tests in [4]. We consider the situation where the function c is defined as in Section 3, i.e., with h ≡ 1 and the maximum norm, so that c(ω, ω̃) = max_{k=0,...,K} |ω_k − ω̃_k|, and the original discrete probability measure P is given in scenario tree form. More precisely, we use a test battery of three scenario trees, one binary and two ternary ones. All test trees are regular and, thus, the results of Section 3 apply. They provide minimal distances of P to reduced measures supported by n scenarios when n is not too small.
Example 4.1 (binary scenario tree)
Here K = 10, N = 2^10 = 1024, and the tree parameters are (0.5, 0.6, 0.7, 0.9, 1.1, 1.3, 1.6, 1.9, 2.3, 2.7). Figure 4 illustrates the original scenario tree. Proposition 3.1 applies, and the explicit formula for D^min_n holds for each n with N/4 = 256 ≤ n < N.
Example 4.2 (ternary scenario tree)
Here K = 6, N = 3^6 = 729, and the tree parameters are (0.7, 0.9, 1.2, 1.5, 2.6, 3.3). The tree is shown in Figure 5. Proposition 3.2 applies, and the explicit formula for D^min_n holds for each n with 2N/9 = 162 ≤ n < N.
Example 4.3 (ternary load scenario tree)
We consider the scenario tree construction in Section 4 of [4] for the weekly electrical load process of a German power utility (see also [5,8] for a description of a stochastic power management model and its solution by Lagrangian relaxation). The original construction is based on an hourly discretization of the weekly time horizon with branching points t_k and on a piecewise linear interpolation between the t_k. The corresponding mean shifted tree is illustrated in Figure 6. For a moment, we disregard all non-branching points of the time discretization and consider the corresponding mean shifted tree. The latter tree is a regular ternary scenario tree whose parameters are determined by σ_t, the standard deviation of the stochastic load process at time t. Since in this case σ_t increases with increasing t, Proposition 3.2 applies and provides the minimal distances D^min_n for 2N/9 ≤ n < N. Finally, it remains to remark that, due to the piecewise linear structure of the scenarios and the choice of the maximum norm for defining c, the minimal distance D^min_n does not change when including all non-branching points.
The original scenario trees of Examples 4.1-4.3 were reduced to trees containing n scenarios by using all three reduction algorithms. The corresponding tables contain the relative accuracy and the running time of each algorithm needed to produce a reduced tree with n scenarios. In addition, the tables provide the (relative) lower bound (15) and the (relative) minimal distance D^min_n in percent, if available. Here, "relative" always means that the corresponding quantity is divided by the minimal ζ_c-distance of P and one of its scenarios endowed with unit mass. In particular, the relative accuracy is defined as the quotient of the ζ_c-distance of the original measure P and the reduced measure Q_n (having n scenarios) and of the ζ_c-distance of P and the Dirac measure of its best single scenario, i.e.,

rel_n := ζ_c(P, Q_n) / ζ_c(P, δ_{ω_{i*}}),  where ω_{i*} satisfies ζ_c(P, δ_{ω_{i*}}) = min_i ζ_c(P, δ_{ω_i})

and {ω_1, ..., ω_N} denotes the set of scenarios of P.
Our numerical experience shows that all algorithms work reasonably well. All algorithms reduce 50% of the scenarios of P in an optimal way. As expected, simultaneous backward reduction and fast forward selection produce more accurate trees than backward reduction of scenario sets, at the expense of higher running times. Our results also indicate that fast forward selection is slightly more accurate than simultaneous backward reduction, although both backward reduction variants are sometimes competitive. Fast forward selection works much faster than the implementation of forward selection in
Figure 4: Original binary scenario tree.
Table 1: Results of binary scenario tree reduction. For each number n of scenarios the table reports the relative accuracy and running time of backward reduction of scenario sets, simultaneous backward reduction and fast forward selection, together with the (relative) lower bound and the (relative) minimal distance.
Figure 5: Original ternary scenario tree.
Table 2: Results of ternary scenario tree reduction (columns as in Table 1).
Figure 6: Original load scenario tree.
Table 3: Results of load scenario tree reduction (columns as in Table 1).
[4]. For instance, fast forward selection required 35 seconds to determine a load scenario subtree (Example 4.3) containing 600 scenarios instead of the 8149 seconds reported in [4]. In particular, in the case of deeply reduced trees, fast forward selection works very fast and accurately.
Furthermore, it turned out that the lower bound is very good (even optimal)
for large n, but extremely pessimistic for small n. Another observation is
that the reduction of half of the scenarios implies only a loss of about 10%
of the relative accuracy. For instance, in case of Example 4.2 it is possible to
determine a subtree containing just 6 out of the originally 729 scenarios that
still carries about 50% of the relative accuracy.
Finally, we take a closer look at the numerical results of the load scenario tree reduction. In particular, we compare the running times of simultaneous backward reduction and fast forward selection in this case. Figure 7 displays the running times of both algorithms and clearly shows their opposite algorithmic strategies. It reflects the corresponding theoretical complexity results (Propositions 2.3 and 2.6) and shows that the running time of fast forward selection is smaller if n ≤ N/4 (approximately). This confirms again that the forward selection concept is favourable if n is small. Figures 8, 9 and 10 show the reduced load trees with 15 scenarios obtained by all algorithms. The figures display the scenarios with line width proportional to scenario probabilities.
Figure 7: Running time for reducing the load scenario tree (time in seconds versus number of scenarios; fast forward selection and simultaneous backward reduction).
Figure 8: Backward reduction / load tree.
Figure 9: Simultaneous backward reduction / load tree.
Figure 10: Fast forward selection / load tree.
Acknowledgement
This research was partially supported by the BMBF project 03-ROM5B3.
Appendix
Proof (Theorem 2.1): Let J ⊂ I := {1, ..., N} be an arbitrary index set. We set c_ij := c(ω_i, ω_j). Linear programming duality implies for any feasible q
In particular, we consider
It holds
ki and
I . Hence, we obtain
Next, we set u i := min
for each i 2 I . Noting that u
all
for all Hence, we obtain for any feasible q that
and the proof is complete. 2
--R
Introduction to Algorithms.
Reduktion von Szenariobäumen.
In: Handbooks in Operations Research and Management Science.
Probability Metrics and the Stability of Stochastic Models.
Theory of Probability and its Applications 28.
--TR
Introduction to algorithms
Integer programming
Quantitative Stability in Stochastic Programming | scenario reduction;electrical load;probability metric;scenario tree;stochastic programming |
608456 | A statistical approach to case based reasoning, with application to breast cancer data. | Given a large set of problems and their individual solutions case based reasoning seeks to solve a new problem by referring to the solution of that problem which is "most similar" to the new problem. Crucial in case based reasoning is the decision which problem "most closely" matches a given new problem. A new method is proposed for deciding this question. The basic idea is to define a family of distance functions and to use these distance functions as parameters of local averaging regression estimates of the final result. Then that distance function is chosen for which the resulting estimate is optimal with respect to a certain error measure used in regression estimation. The method is illustrated by simulations and applied to breast cancer data. | Introduction
Assume that one is interested in the solution of a problem where the latter is described by a
number of observable variables. It is not necessary that these variables determine the solution
completely, but we assume that there is a correlation between the observable variables and the
solution. Furthermore, we assume that a list of problems of the same type is available for which
the values of the observable variables and the solutions are known. Instead of the categories
problem/solution one may also think of premise/conclusion, cause/effect, current state/final result,
or, more concretely, health condition/observed survival time.
Case based reasoning seeks to solve a new problem by referring to the solution of that problem
which is "most similar" to the new problem. The basic assumption is that the solution of the
problem in the data base which is "most similar" to the new problem is close to the unknown
solution of the new problem. This is different to rule based reasoning where the knowledge is
represented in the rules rather than in a data base built up by previous experience. The phrases
case based reasoning and rule based reasoning were coined in the field of artificial intelligence (see,
e.g., Menachem and Kolodner (1992)).
Crucial in case based reasoning is the decision, which cases "most closely" match a given new
case. In this article we propose a new method for deciding this question. The basic idea is to define
a family of distance functions, to use these distance functions as parameters of local averaging
regression estimates of the final outcome, and to choose that distance function for which the
resulting estimate is optimal with respect to a certain error measure used in regression estimation.
In this article we apply case based reasoning in the context of prediction of survival times of
breast cancer patients treated with different therapies. For each therapy a list of cases is given
which includes observed survival time and variables describing the case. Examples for the latter are
size of primary tumor, number of affected lymph nodes and menopausal status. These categorial
variables are part of the tumor classification system TNM. From each of these lists one selects those
cases which are "most similar" to a given new case. Then a physician tries to gain information
about an appropriate therapy for the new patient by considering the therapy and the final result
of these cases.
A naive approach to decide which cases "most closely" match a new case is to use only those cases which have the same values in all variables as the new case. Unfortunately, if the number of observable variables is large, then there will usually be none or only very few such cases. Therefore this naive approach is in general not useful.
Another approach was used in the context of predicting survival times of breast cancer patients by Mariuzzi et al. (1997). There the range of each observable variable is divided into (four) quartiles. The values of the variables are coded by 1, 2, 3 or 4 according to the quartile to which they belong. Then the distance between observable variables (x^(1), ..., x^(d)) and (x̃^(1), ..., x̃^(d)), which are coded by (i_1, ..., i_d) and (ĩ_1, ..., ĩ_d), is defined by

d((x^(1), ..., x^(d)), (x̃^(1), ..., x̃^(d))) := Σ_{l=1}^d |i_l − ĩ_l|,

and those cases are chosen which are (with respect to this distance function) closest to the new case.
The main drawback of this distance is that it does not reflect a possibly different influence
of each individual variable on the final result (for instance, the values of x (1) might influence the
survival time much more than the value of x (2) ).
In this article we propose a new method to determine those cases of a data base which "most closely" match the new case. In Section 2 we give a short introduction to nonparametric regression. The proposed method is described in detail in Section 3. It is illustrated with some simulated data in Section 4 and applied to breast cancer data in Section 5.
2. Nonparametric regression
Let (X, Y), (X_1, Y_1), ..., (X_n, Y_n) be independent identically distributed random variables, where X is R^d-valued, Y is real-valued and EY^2 < ∞. In regression analysis one wants to predict Y after having observed X, i.e., one wants to find a function m*: R^d → R such that m*(X) is "close to" Y. If closeness is measured by the mean squared error, one is interested in a function m* which minimizes the so-called L_2 risk E{|m*(X) − Y|^2}.
Introduce the regression function m(x) := E{Y | X = x}. For an arbitrary (measurable) function f: R^d → R one has

E{|f(X) − Y|^2} = E{|m(X) − Y|^2} + ∫ |f(x) − m(x)|^2 μ(dx),

where μ denotes the distribution of X. Therefore m* = m, and the L_2 risk of an arbitrary function f is close to the optimal value if and only if the (squared) L_2 error ∫ |f(x) − m(x)|^2 μ(dx) is close to zero.
The optimal predictor m depends on the distribution of (X, Y). In applications this distribution will usually be unknown, thus m will be unknown, too. But often it is possible to observe independent copies of (X, Y). Based on the data D_n := {(X_1, Y_1), ..., (X_n, Y_n)} the nonparametric regression problem asks for an estimate m_n of m such that the L_2 error ∫ |m_n(x) − m(x)|^2 μ(dx) is small.
The response variables Y i can be rewritten as
0:
Hence one can consider the Y i 's as the sum of the function m evaluated at X i and a random error
with zero mean. This motivates to construct an estimate of m(x) by averaging over those Y i 's
where X i is close to x (hopefully then m(X i ) is close to m(x) and the average of the ffl i is close to
zero). Such an local averaging estimate can be represented as
are nonnegative weights which sum up to
one.
A statistical approach to case based reasoning 5
The most popular local averaging estimate is the Nadaraya-Watson kernel estimate (Nadaraya
(1964) and Watson (1964)) with weights
hn
hn
R is the so-called kernel function,
is the so-called
bandwidth, and for
hn
Often one uses spherical symmetric kernels K which satisfy for a univariate kernel function R! R,
which we denote again by K,
where jxj is the euclidean norm of x. For such kernel functions one gets
hn
hn
Usually kernel functions are chosen such that K(0) ? 0,
If the so-called naive kernel
are used, then m(x) is estimated by the average of those Y i where X i is in a ball with radius h (1)
and center x. For more general K : R! R+ , e.g. the Gaussian kernel
one uses a weighted average of the Y i 's as an estimate of m(x) where more weight is given to those
are close to x.
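As an illustration (ours, not from the paper), a minimal Python sketch of the Nadaraya-Watson estimate with Gaussian kernel and componentwise bandwidths might look as follows; the names x, X, Y and h are our own.

```python
import numpy as np

def nadaraya_watson(x, X, Y, h):
    """Nadaraya-Watson estimate m_n(x) with Gaussian kernel and a
    componentwise bandwidth vector h (x: query point, X: n x d data
    matrix, Y: response vector)."""
    u = (x - X) / h                              # scale each coordinate
    w = np.exp(-0.5 * np.sum(u**2, axis=1))      # Gaussian kernel weights
    if w.sum() == 0.0:
        return Y.mean()                          # numerical fallback
    return np.dot(w, Y) / w.sum()
```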
2.2. Choice of bandwidth
The choice of the bandwidth hn is crucial for the kernel estimate. If the components of hn are too
small, then only very few of the Y i 's have a weight not close to zero and the estimate is determined
by only these very few Y i 's which induces a high variance in the estimate. On the other hand, if
the components of hn are too big, then there will be Y i 's which are far away from x and which
have a big weight. But if X i is far away from x then even if m is smooth m(X i ) might be far away
from m(x) which induces again a large error into the estimate.
For the quality of the estimate it is important to apply a method which chooses h_n close to the optimal value (which depends on the usually unknown distribution of (X, Y)) using only the data D_n. A well known method which tries to do this is cross-validation, which will be described in the sequel. For further information see Härdle (1990), e.g.
Recall that the aim in nonparametric regression is to construct m_n such that the L_2 risk

E{ |m_n(X) − Y|^2 | D_n }    (2)

is small. So the bandwidth of a kernel estimate should be chosen such that (2) is minimal compared to other choices of the bandwidth. In an application this is not possible because (2) depends on the unknown distribution of (X, Y). What we propose instead is to estimate the L_2 risk (2) and to choose the bandwidth by minimizing the estimated L_2 risk.
For a fixed function f: R^d → R the L_2 risk can be estimated by the so-called empirical L_2 risk

(1/n) Σ_{i=1}^n |f(X_i) − Y_i|^2.    (3)

If we use (3) with f = m_n (which depends on D_n) to estimate (2), then this so-called resubstitution estimate is a too optimistic estimate of the L_2 risk, and minimization of it leads to estimates which are well adapted to D_n but which are not suitable to predict new data independent of D_n. This can be avoided by splitting the data D_n into two parts, learning and testing data, by computing the kernel estimate with the learning data and by choosing the bandwidth such that the empirical L_2 risk on the testing data is minimal.
The resulting bandwidth depends on the way the data is split. This is a drawback if one wants to use this bandwidth for a kernel estimate which uses the whole data D_n (and thus has nothing to do with the way the data is split). It can be avoided by repeating this procedure for several (e.g. randomly chosen) splits of the sample and by choosing the bandwidth such that the average empirical L_2 risk on the testing data is minimal.
For k-fold cross-validation the splits are chosen in a special deterministic way. Let 1 ≤ k ≤ n. For notational simplicity we assume that n/k is an integer. Divide the data into k groups of equal size n/k and denote the set consisting of all groups except the lth one by

D_{n,l} := { (X_i, Y_i) : i ∈ {1, ..., n} \ {(l−1)n/k + 1, ..., ln/k} }.

For each data set D_{n,l} and bandwidth h ∈ R^d_+ construct a kernel estimate m_{n−n/k, h}(·; D_{n,l}) as above, using only the data in D_{n,l}. Choose the bandwidth such that

(1/k) Σ_{l=1}^k (k/n) Σ_{i=(l−1)n/k+1}^{ln/k} | m_{n−n/k, h}(X_i; D_{n,l}) − Y_i |^2

is minimal and use this bandwidth h as bandwidth h_n for the kernel estimate (1).
n-fold cross-validation is denoted simply as cross-validation. In this case D_{n,l} is the whole sample except (X_l, Y_l), and

cv(h) := (1/n) Σ_{l=1}^n | m_{n−1, h}(X_l; D_{n,l}) − Y_l |^2    (4)

is minimized with respect to h.
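For concreteness, the following Python sketch (ours) evaluates the leave-one-out criterion (4) for the Nadaraya-Watson estimate with Gaussian kernel and picks a bandwidth from a finite candidate grid; the grid search is our simplification of the continuous minimization over h.

```python
import numpy as np

def loo_cv_score(h, X, Y):
    """Leave-one-out cross-validation criterion cv(h) from (4) for the
    Nadaraya-Watson estimate with Gaussian kernel and bandwidth vector h."""
    n = len(Y)
    err = 0.0
    for l in range(n):
        mask = np.arange(n) != l                 # drop the l-th observation
        u = (X[l] - X[mask]) / h
        w = np.exp(-0.5 * np.sum(u**2, axis=1))
        pred = Y[mask].mean() if w.sum() == 0 else np.dot(w, Y[mask]) / w.sum()
        err += (pred - Y[l])**2
    return err / n

def select_bandwidth(X, Y, grid):
    """Return the candidate bandwidth from `grid` minimizing cv(h)."""
    return min(grid, key=lambda h: loo_cv_score(h, X, Y))
```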
2.3. Curse of dimensionality
If d is large, then estimating a regression function is especially difficult. The reason for this is
that in this case it is in general not possible to densely pack the space of X with finitely many
sample points, even if the sample size n is very large. This fact is often referred to as curse of
dimensionality, a phrase which is due to Bellman (1961). It will be illustrated by an example;
further examples can be found in Friedman (1994).
Let X, X_1, ..., X_n be independent identically distributed R^d-valued random variables with X uniformly distributed in the hypercube [0,1]^d. Consider the expected supremum norm distance of X to its closest neighbor in X_1, ..., X_n,

d_∞(d, n) := E{ min_{i=1,...,n} ‖X − X_i‖_∞ },

where ‖(x^(1), ..., x^(d))‖_∞ := max_{j=1,...,d} |x^(j)|. Since this minimum takes values in [0,1],

d_∞(d, n) = ∫_0^1 P{ min_{i=1,...,n} ‖X − X_i‖_∞ > t } dt.

For instance, d_∞(10, 1000) ≈ 0.22 and d_∞(20, 10000) ≈ 0.28. Thus, for large d, even for a large sample size n, the supremum norm distance of X to its closest neighbor in the sample is not close to zero (observe that the supremum norm distance between any two points in the above sample is always less than or equal to one).
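A quick Monte Carlo approximation of d_∞(d, n) under the definition above can be written in a few lines of Python (our illustration; the function name and the number of repetitions are our own choices).

```python
import numpy as np

def d_inf(d, n, reps=500, rng=np.random.default_rng(0)):
    """Monte Carlo approximation of E[min_i ||X - X_i||_inf] for X, X_i
    i.i.d. uniform on [0, 1]^d."""
    vals = []
    for _ in range(reps):
        x = rng.random(d)
        sample = rng.random((n, d))
        vals.append(np.abs(sample - x).max(axis=1).min())
    return float(np.mean(vals))

# example usage: print(d_inf(10, 1000))
```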
3. Basic idea
In this section we will apply the concepts of nonparametric regression introduced in the previous section to case based reasoning. To do this, we will consider the prediction of the final result in case based reasoning as a regression estimation problem. Hence we assume that the final result is a real valued random variable and that the observable variables are components of an R^d-valued random variable X = (X^(1), ..., X^(d)). In applications often some of the observable variables are categorical random variables. How to handle such variables will be described in Section 5 below.
Recall that our aim in case based reasoning is to determine those cases of a given list which "most closely" match a new case. This can be done by defining a distance function d: R^d × R^d → R_+ which determines a distance d(x, x_i) between two cases with observable variables x and x_i. Given such a distance function one can define the k "most similar" cases to a new case as those k cases among x_1, ..., x_n whose distances to the new case x belong to the k smallest occurring distances.
The basic idea to determine such a distance function is to define a regression estimate depending on this distance function and then to choose that distance function which minimizes (an estimate of) the L_2 risk of the resulting regression estimate.
As regression estimates we use the kernel estimates introduced in the previous section. For h = (h^(1), ..., h^(d)) ∈ R^d_+ define a distance function d_h: R^d × R^d → R_+ by

d_h(x, z) := ( Σ_{j=1}^d |x^(j) − z^(j)|^2 / (h^(j))^2 )^{1/2}.

Then the kernel estimate can be written as

m_{n,h}(x) = Σ_{i=1}^n K(d_h(x, X_i)) Y_i / Σ_{j=1}^n K(d_h(x, X_j)),

so the family of distance functions {d_h : h ∈ R^d_+} corresponds to a family of regression estimates {m_{n,h} : h ∈ R^d_+}, where K is the kernel function. In our simulations in Section 4 and in our application in Section 5 we use the Gaussian kernel K(u) = exp(−u^2/2).
Our aim is to choose the distance function such that the corresponding regression estimate has minimal L_2 risk. As already explained in the previous section, this is not possible because the L_2 risk depends on the unknown distribution of (X, Y). What we do instead is to estimate the L_2 risk by cross-validation and to choose the distance function d_h (h ∈ R^d_+) for which the estimated L_2 risk of m_{n,h} is minimal.
Our method can be summarized as follows: estimate the L_2 risk of the kernel estimate m_{n,h} by cross-validation, determine h ∈ R^d_+ such that this estimated L_2 risk is minimal (compared to all other choices of h ∈ R^d_+), and use the corresponding distance function d_h.
3.1. Subset selection
We have seen in the previous section that estimating the regression function is very difficult if the dimension d of X is large. This problem occurs for every estimate, hence for the kernel estimate which we use in this paper as well. The only way to handle this problem is to make assumptions on the underlying distribution (e.g. to assume that the regression function is additive, cf. Stone (1985), Stone (1994)). In this paper we assume that, even if d is large (say 20), the regression function mainly depends on a few (say 3 to 6) of the components of X. If this assumption holds, then a distance function can be determined by applying our method to this small subset of the observable variables (which leads to a regression estimation problem with small dimension of X).
Of course, in applications we will not know on which of the observable variables the regression function mainly depends. In order to check this we consider all (or, if d is too big, all small) subsets of the observable variables, apply our method to each of these subsets and choose that subset for which the L_2 risk of the "optimal" estimate, estimated via cross-validation, is minimal.
3.2. Simplification of computation
For each subset of the observable variables and for each variable of this subset the method described above requires computation of a vector of scaling factors, which leads to a multivariate minimization problem. In order to avoid solving many such minimization problems, we use the following simplification. In a first step determine for each of the d observable variables a univariate scaling factor h_0^(j) by computing (as described above) the "optimal" bandwidth of a univariate kernel estimate fitting the data (X_i^(j), Y_i), i = 1, ..., n. This yields scaling factors h_0^(1), ..., h_0^(d). In a second step we choose for each subset {i_1, ..., i_{d'}} of {1, ..., d} the distance function

d_{h, {i_1,...,i_{d'}}}(x, z) := (1/h) ( Σ_{l=1}^{d'} |x^{(i_l)} − z^{(i_l)}|^2 / (h_0^{(i_l)})^2 )^{1/2}    (5)

on R^d, such that the (via cross-validation) estimated L_2 risk of the corresponding kernel estimate m_{n,h} is minimal with respect to h ∈ R_+. Finally we choose that subset {i_1, ..., i_{d'}} of {1, ..., d} for which the (via cross-validation) estimated L_2 risk of the corresponding optimal kernel estimate is minimal.
Observe that minimization of the estimated L_2 risk of m_{n,h} (h ∈ R_+) is only a univariate minimization problem.
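A rough Python sketch of this two-step procedure is given below (ours, not the authors' code). The grid search over a finite candidate set h_grid and the bound max_size on the subset size are simplifications, and the helper loo_cv_score follows the earlier cross-validation sketch.

```python
import numpy as np
from itertools import combinations

def cv_risk(h_vec, X, Y):
    """Leave-one-out CV estimate of the L2 risk of the Nadaraya-Watson
    estimate with componentwise scaling vector h_vec."""
    n = len(Y)
    err = 0.0
    for l in range(n):
        mask = np.arange(n) != l
        u = (X[l] - X[mask]) / h_vec
        w = np.exp(-0.5 * np.sum(u**2, axis=1))
        pred = Y[mask].mean() if w.sum() == 0 else np.dot(w, Y[mask]) / w.sum()
        err += (pred - Y[l])**2
    return err / n

def choose_distance(X, Y, h_grid, max_size=3):
    """Step 1: one univariate scaling h0[j] per component.
    Step 2: for every small subset, one global factor h; return the
    subset and scalings with minimal estimated L2 risk."""
    d = X.shape[1]
    h0 = [min(h_grid, key=lambda h: cv_risk(np.array([h]), X[:, [j]], Y))
          for j in range(d)]
    best = None
    for size in range(1, max_size + 1):
        for subset in combinations(range(d), size):
            scal = np.array([h0[j] for j in subset])
            h, risk = min(((h, cv_risk(h * scal, X[:, subset], Y)) for h in h_grid),
                          key=lambda t: t[1])
            if best is None or risk < best[2]:
                best = (subset, h * scal, risk)
    return best  # (selected components, scaling factors, estimated risk)
```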
3.3. Robustification of the bandwidth estimation
It is well known that bandwidth selection by cross-validation is often highly variable (see Simonoff
(1996)). Some authors suggested alternatives such as plug-in methods and claimed that these are
superior to ordinary cross-validation. But Loader argues that no theoretical results support these
claims and that the plug-in methods itself depends sensitively on a pilot estimate of the bandwidth
(see Ch. 10 in Loader (1999)).
Let m n;h be a kernel estimate with bandwith h. In order to robustify the cross-validation
bandwidth selection given by
A statistical approach to case based reasoning 11
with CV function cv(h) as defined in (4), we suggest the following heuristic rule:
where
and
is specified below.
The idea behind the rule is the following. Often the graph of the CV function resembles a
valley with possibly several local minima at the bottom or with a flat bottom where the position of
the global minimum seems to be accidental. Then an estimate of the middle of the valley appears
to be a less variable measure.
Choose two parameters We define a level r p;q in dependence of
For the regression estimate equals the mean of the responses fY
coincides with the empirical variance 1
Yng. In
our simulations we first used
but for this choice it might happen that r p;q ? sup h?h0 cv(h). In order to avoid this we set
r p;q := cv(h
For values of p and q we used
in this paper. Our robustification implies that we possibly dispense with explaining a fraction
of the empirical variance of fY g. This doesn't seem to be too harmful,
since we are more interested in finding similarity neighborhoods than in prediction.
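Since the exact constants of the rule are not fully recoverable here, the following Python sketch (ours) only illustrates the qualitative idea: given cv(h) evaluated on a grid and a level r, it returns the midpoint of the "valley" of bandwidths whose CV value stays below r.

```python
import numpy as np

def robustified_bandwidth(cv_values, h_grid, r):
    """Midpoint of the connected set {h : cv(h) <= r} that contains the
    CV-minimal bandwidth (grid-based heuristic)."""
    cv_values, h_grid = np.asarray(cv_values), np.asarray(h_grid)
    i0 = int(np.argmin(cv_values))
    lo = hi = i0
    while lo > 0 and cv_values[lo - 1] <= r:
        lo -= 1
    while hi < len(h_grid) - 1 and cv_values[hi + 1] <= r:
        hi += 1
    return 0.5 * (h_grid[lo] + h_grid[hi])
```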
4. Application to Simulated Data
4.1. Simulated Model
Let us consider a regression function m: [0,1]^3 → R defined in dependence of a fixed constant a ∈ (0, 1], and the model Y = m(X) + error with independent random variables X ∼ U([0,1]^3) and a random error with mean zero. Apparently, the third component X^(3) has no influence on Y. If a is small, the first component X^(1) of the random vector X has a larger influence on the response Y than the second component X^(2). In other words, for a small constant a the regression function m is almost constant with respect to X^(2); in any case m is constant with respect to X^(3) (Figure 1).
We simulate n realizations of (X, Y), i.e., independent copies (X_1, Y_1), ..., (X_n, Y_n) of (X, Y). Then, as described in Subsections 3.2 and 3.3, for each i ∈ {1, 2, 3} a robustified estimate h̃_0^(i) of the "optimal" bandwidth h_0^(i) of the univariate kernel estimator based on the sample (X_1^(i), Y_1), ..., (X_n^(i), Y_n) is computed by cross-validation. If a is significantly less than 1, then we expect the relation h̃_0^(1)/h̃_0^(2) < 1. Furthermore, the corresponding estimated L_2 risks should be in ascending order, too (Figure 2). For a = 1, in contrast, both components influence Y equally and the ratio should be close to one.
Our model selection procedure defines different distance functions on [0,1]^3 as given by (5). For each distance function we compute the minimal estimated L_2 risk of the multivariate kernel estimator (6) with respect to h > 0 and consider that distance function as best for which the estimated L_2 risk is minimal. If inclusion of a further component reduces the estimated L_2 risk only "slightly", the distance function with the smaller set of components but slightly larger corresponding estimated L_2 risk will be preferred.
We use the resulting distance functions to define neighborhoods N(x, δ) := {z : d(x, z) ≤ δ}.
Fig. 1. Graph of the regression function m, considered as a function of the first two components of the predictor variable X, together with a scatter plot of simulated realizations.
For each δ > 0 we consider every point in N(x, δ) to be more similar to x than every point outside of N(x, δ). In this sense those realizations among X_1, ..., X_n are the k most similar ones to x whose distances belong to the k smallest among d(x, X_1), ..., d(x, X_n).
4.2. Results of the Simulations
For each pair (a, n) of parameters a ∈ {0.3, 0.5, 1} and n ∈ {100, 200} the simulation is repeated 20 times, but with different seeds of the generator producing the (pseudo) random numbers. Below we discuss the results of the simulations performed with parameter set (a, n) = (0.5, 100) in some detail.
As suggested in Section 3 we compute bandwidths h̃_0^(1), h̃_0^(2), h̃_0^(3). In all but one of the twenty runs the procedure found h̃_0^(1) < h̃_0^(2). In most cases the CV curves and the univariate regression estimates corresponding to the minimal value of the CV curve look as in Figure 2. Furthermore, the optimal value h_0^(i) and the robustified value h̃_0^(i) differ only slightly. However, in some cases the situation appears as in Figure 3. There the smooths related to h̃_0^(i) seem to be appropriate. Only in one of the twenty runs does the method fail to detect that the first component has a stronger influence on the function values of m than the second component, see Figure 4.
To compare multivariate kernel estimates (6) with distance functions selecting different subsets of prediction variables we compute the minimum cv(h_0) of the estimate cv(h) (given by (4)) of the L_2 risk of (6) and compare the ratios cv(h_0)/cv(∞). Table 1 shows that for each run the subset selection procedure favors {X^(1), X^(2)} or {X^(1)}. But in the cases in which {X^(1)} is preferred the gain is too small to be significant. Hence in all 20 runs we choose the subset {X^(1), X^(2)}.
The relation of h̃_0^(1) to h̃_0^(2) determines the geometry of the neighborhoods which will be used to characterize "similar cases". For the parameter pair (a, n) = (0.5, 100) the ratios of the computed bandwidths h̃_0^(1)/h̃_0^(2) turned out to be, among others,
0.16, 0.54, 0.45, 0.60, 0.50, 0.23, 0.57, 0.34, 0.76.
From the second row of Table 2 one can extract the minimum, maximum, median and interquartile range of these ratios.
To visualize the variability of the resulting distance functions, neighborhoods N(x, δ) projected on the first two components are plotted around the center (0.5, 0.5). The parameter δ is chosen in such a way that the area of these sets equals 1/10, see Figure 5.
Comparing the results for the sample sizes n = 100 and n = 200 indicates that larger sample sizes lead to less variable neighborhoods.
Fig. 2. Determination of the univariate bandwidths h̃_0^(1), h̃_0^(2), h̃_0^(3) for the 6th simulation run. Left column: cross-validated L_2 risk of the regression estimate of E(Y|X^(i)) depending on the bandwidth h^(i). The optimal bandwidth h_0^(i) and the robustified bandwidth h̃_0^(i) are indicated by tick marks (if within the range of the x-axis). Right column: univariate kernel estimate of E(Y|X^(i)) using the optimal bandwidth h_0^(i) (dotted) and the robustified bandwidth h̃_0^(i) (solid), i ∈ {1, 2, 3}. The finding that the curves related to h_0^(i) and h̃_0^(i) can hardly be distinguished is true for most of the simulated samples.
Fig. 3. Determination of the univariate bandwidths h̃_0^(1), h̃_0^(2), h̃_0^(3) for the 3rd simulation run. Left column: cross-validated L_2 risk of the regression estimate of E(Y|X^(i)) depending on the bandwidth h^(i). The optimal bandwidth h_0^(i) and the robustified bandwidth h̃_0^(i) are indicated by tick marks (if within the range of the x-axis). Right column: univariate kernel estimate of E(Y|X^(i)) using the optimal bandwidth h_0^(i) (dotted) and the robustified bandwidth h̃_0^(i) (solid), i ∈ {1, 2, 3}. Despite the fact that the bandwidths h_0^(i) minimize the CV error criterion, these bandwidths are not useful to compare the dependency of the random variable Y on the individual components. For this sample the robustified bandwidths h̃_0^(i) seem to be more appropriate.
Fig. 4. Determination of the univariate bandwidths h̃_0^(1), h̃_0^(2), h̃_0^(3) for the 11th simulation run. Left column: cross-validated L_2 risk of the regression estimate of E(Y|X^(i)) depending on the bandwidth h^(i). The optimal bandwidth h_0^(i) and the robustified bandwidth h̃_0^(i) are indicated by tick marks (if within the range of the x-axis). Right column: univariate kernel estimate of E(Y|X^(i)) using the optimal bandwidth h_0^(i) (dotted) and the robustified bandwidth h̃_0^(i) (solid), i ∈ {1, 2, 3}. This is the only one of all 20 samples for which the suggested method fails to detect that the first component has a stronger influence on the function values of m than the second component.
Table 1. Ratio cv(h_0)/cv(∞) of the estimated L_2 risk of the multivariate kernel estimate with the specified set of components, for each of the 20 runs with parameters a = 0.5 and n = 100 of the simulated model (columns: run, ratio for each candidate subset, selected subset).
Table 2. Statistics of the ratios h̃_0^(1)/h̃_0^(2), each computed from 20 simulation runs.
a     n     Min.   1st Quart.   Median   Mean   3rd Quart.   Max.
1.0   200   0.14   0.93         1.10     1.02   1.16         1.43
Fig. 5. Level lines of the regression function m for the parameters a ∈ {0.3, 0.5, 1} and (cut) ellipsoidal neighborhoods around the point (0.5, 0.5). Each neighborhood set is computed from one of the simulated samples and contains 1/10th of the unit square.
5. Application to Breast Cancer Data
We compute a distance function on the space of covariates as suggested in Section 3 to determine, for a given new breast cancer patient (with unknown survival time), "similar" cases among patients in a database with known (censored) survival time. The data were made available by the Robert-Bosch-Krankenhaus in Stuttgart, Germany. They were collected between the years 1987 and 1991 with a follow-up of 80 months. Each of the cases is described by a 10-dimensional parameter vector. We consider the first nine parameters as predictor variables. These include age at diagnosis AGE (in years), menopause status MS (which equals 1 and 2 for pre and post menopause, respectively), histological type of the breast cancer HT (coded by finitely many values), number of affected lymph nodes PN (grouped into four classes with values in {1, ..., 4}), tumor size PT (grouped into four classes), occurrence of metastases PM (with values 0 and 1 for no and yes, respectively), grading of tumor GR (with values in {1, 2, 3}), estrogen status ES (with values 1 and 2 for positive and negative, respectively) and progesterone status PS (with values 1 and 2 for positive and negative, respectively).
The last component of the parameter vector describes the observed (censored) survival time OST (in years) and is considered as response variable. In many cases the actual survival time cannot be observed, since the patient is still alive after the end of the study or because of the patient's withdrawal from the study. Hence the observed survival time can be understood as the minimum of the actual survival time T (time elapsed from date of diagnosis to date of death) and a censoring time C (time between date of diagnosis and a date at which the patient is known to be alive). In our approach we estimated the censored survival time instead of the more complicated and often unknown uncensored survival time in order to simplify the problem. We hope that this simplification does not affect our result too much, because we used the estimate to construct similarity neighborhoods rather than to estimate the survival time as a function of a covariate vector (as to the latter compare Carbonez et al. (1995)).
and MS, HT , PM , ES and PS are nominal r.v.'s. Values in the predictor variables are allowed
A statistical approach to case based reasoning 21
Table
3. Robustified bandwidths e h (j)for the the univariate regression problems.
The products 1= e h (j)times the spread of component j allow to compare the maximal
influence of component j on the distance function.
component AGE MS HT PT PN PM GR ES PS
to be missing.
Assume that x = (x(1), ..., x(9)) is the covariate vector of a new patient and z = (z(1), ..., z(9))
is the covariate vector of a patient in the database {z_1, ..., z_n}. If component j is related
to a continuous or ordered categorical r.v., we define the distance function d(j)_h by
d(j)_h(x, z) = |x(j) - z(j)| / h                                                      if x(j) and z(j) are not missing,
d(j)_h(x, z) = max{ |z_k(j) - z_l(j)| : z_k(j) and z_l(j) are not missing } / h       otherwise.
If component j is related to a nominal r.v., we define the distance function d(j) by
d(j)(x, z) = 0    if x(j) = z(j) and both x(j) and z(j) are not missing,
d(j)(x, z) = 1    if x(j) ≠ z(j) or x(j) or z(j) is missing.
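The following Python sketch illustrates one possible reading of these component distances. It is our illustration, not the authors' S-Plus code; the fallback for missing continuous values (the largest spread observed in the database column) and the bandwidth argument are assumptions.

```python
import numpy as np

def dist_continuous(xj, zj, db_column, h=1.0):
    """Distance for a continuous or ordered categorical component:
    scaled absolute difference; if either value is missing (np.nan),
    fall back to the largest difference seen in the database column."""
    if not (np.isnan(xj) or np.isnan(zj)):
        return abs(xj - zj) / h
    col = np.asarray(db_column, dtype=float)
    col = col[~np.isnan(col)]
    return (col.max() - col.min()) / h

def dist_nominal(xj, zj):
    """Distance for a nominal component: 0 on an observed match,
    1 on a mismatch or whenever either value is missing (None)."""
    if xj is None or zj is None:
        return 1.0
    return 0.0 if xj == zj else 1.0
```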
As described in Section 3, for each j ∈ {1, ..., 9} we compute by cross-validation "optimal"
bandwidths h(j)_0 for the univariate regression estimates m_{n,h},
where y_i is a realization of the response variable Y_i. Figures 6 and 7 show the CV function cv and
the regression estimates related to the robustified estimate h̃(j)_0 as displayed in Table 3.
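A minimal sketch of the least squares cross-validation step for a single component is given below. The Gaussian kernel, the Nadaraya-Watson estimator and the bandwidth grid are our assumptions, not details taken from the paper.

```python
import numpy as np

def nw_estimate(x0, x, y, h):
    """Nadaraya-Watson kernel regression estimate at x0 (Gaussian kernel)."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return np.sum(w * y) / np.sum(w)

def cv(x, y, h):
    """Leave-one-out least squares cross-validation criterion cv(h)."""
    n = len(x)
    idx = np.arange(n)
    errors = [(y[i] - nw_estimate(x[i], x[idx != i], y[idx != i], h)) ** 2
              for i in range(n)]
    return float(np.mean(errors))

def optimal_bandwidth(x, y, grid):
    """Pick the bandwidth on the grid with the smallest cv(h)."""
    return min(grid, key=lambda h: cv(x, y, h))
```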
Now, on the range X of the covariate variable X we define for each l ∈ {1, ..., 9} and for each
subset J of l components the distance function d_{J,h}, where h is a fixed positive number. We obtain the
corresponding multivariate regression estimate, compute its CV function cv as a function of h > 0, and determine that
subset J_l with the smallest global minimum cv(h_l) of the related CV function. Table 4 shows the results of this
model selection step.
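The subset selection behind Table 4 can be sketched as an exhaustive search over component subsets, minimizing the CV criterion over a grid of scalings h. The outline below is ours; the callback cv_fn stands in for the multivariate CV computation.

```python
from itertools import combinations

def best_subset_per_size(cv_fn, components, h_grid, max_size=9):
    """For each subset size l, return (cv(h_l), J_l): the subset with the
    smallest global minimum of the CV criterion over the grid of scalings h.
    cv_fn(subset, h) is assumed to evaluate the multivariate CV criterion."""
    best = {}
    for l in range(1, max_size + 1):
        scored = [(min(cv_fn(s, h) for h in h_grid), s)
                  for s in combinations(components, l)]
        best[l] = min(scored, key=lambda t: t[0])
    return best
```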
Fig. 6 and Fig. 7. CV functions and univariate regression estimates of the observed survival time for the individual covariates (each panel plots Observed Survival Time against one covariate, e.g. AGE, PN, PS).
Table 4. For each number l ∈ {1, ..., 9} of covariates, that subset is displayed which possesses the smallest estimated L2 risk.
l   Best subset                           cv(h_l)
6   AGE, PT, PN, PM, ES, PS               0.7894
7   AGE, HT, PT, PN, PM, ES, PS           0.7894
8   AGE, MS, HT, PT, PN, PM, ES, PS       0.7932
9   AGE, MS, HT, PT, PN, PM, GR, ES, PS   0.8281
Together with Table 4, the scree plot in Figure 8 suggests choosing that distance function which
includes the variables PN, PM, ES. By adding a fourth component the relative improvement of
the ratio cv(h_4)/cv(1) compared to cv(h_3)/cv(1) is less than 0.05. Hence, we propose to use the
distance d, defined by
d(x, z) = d(PN)(x, z)/0.71 + d(PM)(x, z)/1.42 + d(ES)(x, z)/0.79.
Now, for a given new case with covariate x, compute r_i := d(x, z_i) for i = 1, ..., n, and sort
the z_i according to increasing values of the r_i. Then, for fixed k ∈ {1, ..., n},
the subset consisting of the k cases with the smallest values r_i comprises those cases which are "most similar" to the
new case with respect to the distance d. Their history logs may help the physician in choosing an
appropriate therapy for the new patient.
Fig. 8. Scree plot of (l, cv(h_l)) for l ∈ {1, ..., 9} (x-axis: number of components).
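Retrieval of the k most similar cases with this distance can be sketched as follows. This is our Python illustration: patient records are assumed to be dictionaries, and missing ordered values are simply penalized by 1 instead of the maximal-spread rule used above.

```python
def component_dist(xj, zj, nominal=False):
    """0/1 mismatch for nominal components, absolute difference for
    ordered ones; missing values (None) are penalized with 1."""
    if xj is None or zj is None:
        return 1.0
    if nominal:
        return 0.0 if xj == zj else 1.0
    return abs(xj - zj)

def case_distance(x, z):
    """The distance d proposed above, with the robustified bandwidths
    0.71, 1.42 and 0.79 as component weights."""
    return (component_dist(x["PN"], z["PN"]) / 0.71
            + component_dist(x["PM"], z["PM"], nominal=True) / 1.42
            + component_dist(x["ES"], z["ES"], nominal=True) / 0.79)

def most_similar_cases(x, database, k):
    """Return the k cases in the database closest to the new case x."""
    return sorted(database, key=lambda z: case_distance(x, z))[:k]
```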
6. Discussion
As is often the case with nonparametric estimators in a multivariate setting, the suggested
method has its deficiencies and limitations, too. For instance, assume that the random vector
X = (X(1), X(2)) takes on the values (0,0), (1,0), (0,1) and (1,1) each with probability 1/4, and let
Y be defined by 0 if X ∈ {(0,0), (1,1)}, and by 1 otherwise. Then E(Y | X(1) = x(1)) = 1/2 for both values of x(1),
but m is far from being a constant. Hence, in this case, the proposed method will fail.
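To spell the computation out (our elaboration of the example): conditioning on either single component averages one 0-case and one 1-case, so every univariate regression function is constant,

E(Y | X(1) = 0) = (1/2)(m(0,0) + m(0,1)) = (1/2)(0 + 1) = 1/2,
E(Y | X(1) = 1) = (1/2)(m(1,0) + m(1,1)) = (1/2)(1 + 0) = 1/2,

and analogously E(Y | X(2) = x(2)) = 1/2, although m itself takes both values 0 and 1; hence the univariate bandwidth selection cannot detect the dependence.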
Adjusting the so-called resubstitution estimate of the L 2 risk by penalizing functions leads
to various other bandwidth selection procedures such as generalized cross-validation, Shibata's
model selector, Akaike's information criterion, Akaike's finite prediction error and Rice's T (see,
e.g., Härdle (1990), Härdle (1991)). These may be used instead of the least squares cross-validation
criterion.
Further research should improve the regression estimate. For instance, dimension reduction as
handled in projection pursuit regression is adapted to find structures in linear subspaces of the
covariate space. Additionally, the fact of censored observation times should be taken into account.
In our application concerning survival times of breast cancer patients the influence of the chosen
therapy was ignored. This might lead to an underestimation of the influence of known predictor
variables. The reason is that the choice of the therapy usually depends on some of the predictor
variables and that the therapy has an influence on the survival time. There is no problem if one
has data for which the choice of the therapy is independent of the predictor variables. Such data
does not exist for the predictors we used. But this requirement is fulfilled for any new predictor
which was unknown in the past or was considered to be unimportant and therefore was not used
for choosing the therapy. A distance function which is additionally based on such a new predictor
allows one to judge the importance of the new predictor.
7. Acknowledgements
This research was partly supported by the Robert Bosch Foundation, Stuttgart. We are grateful
to Prof. H. Walk, Stuttgart, for stimulating discussions.
Readers wishing to obtain the breast cancer data or the S-Plus code of the algorithms should
contact the authors.
--R
Adaptive Control Processes.
An overview of predictive learning and function approximation.
Theory and Pattern Recognition Applications.
Applied Nonparametric Regression.
Smoothing Techniques with Implementations in S.
Local Regression and Likelihood.
On estimating regression.
Smoothing Methods in Statistics.
Additive regression and other nonparametric models.
The use of polynomial splines and their tensor products in multivariate function estimation.
Smooth regression analysis.
--TR
Random approximations to some measures of accuracy in nonparametric curve estimation | robustness;case based reasoning;nonparametric multivariate regression estimation;kernel estimation;band-width selection |
608626 | Making Complex Articulated Agents Dance. | We discuss the tradeoffs involved in control of complex articulated agents, and present three implemented controllers for a complex task: a physically-based humanoid torso dancing the Macarena. The three controllers are drawn from animation, biological models, and robotics, and illustrate the issues of joint-space vs. Cartesian space task specification and implementation. We evaluate the controllers along several qualitative and quantitative dimensions, considering naturalness of movement and controller flexibility. Finally, we propose a general combination approach to control, aimed at utilizing the strengths of each alternative within a general framework for addressing complex motor control of articulated agents. | Introduction
Control of humanoid agents, dynamically simulated or physical, is an extremely difficult problem
due to the high dimensionality of the control space, i.e., the many degrees of freedom (DOF) and
the redundancy of the system. In robotics, methods have been developed for simpler manipulators
and have been gradually scaled up to more complex arms (Paul 1981, Brady, Hollerbach, Johnson,
Lozano-Perez & Mason 1982) and recently to physical human-like arms (Schaal 1997, Williamson
1996). Anthropomorphic control has also found an application area in realistic, physically-based
animation, where dynamic simulations of human characters, involving realistic physical models,
matches the complexity of the robotics problem (Pai 1990, Hodgins, Wooten, Brogan & O'Brien
1995, van de Panne & Lamouret 1995).
In this paper, we present three controller implementations to address the tradeoffs involved
in different approaches to articulated control, including joint-space control and Cartesian control,
and their relevance to the different application areas, including biological models, robotics, and
animation. The three controllers are implemented on a physics-based humanoid torso simulation,
and applied to the task of performing a continuous sequence of smooth movements. The movement
sequence chosen is the popular dance "Macarena", which provides a non-trivial, well-defined task
for comparison. The particular controllers are: joint-space torque control, joint-space force-field
control, and Cartesian impedance control. The paper describes each approach, and compares its
performance with human data. The speed and smoothness of the resulting motions are evaluated,
along with other qualitative and quantitative measures.
The rest of the paper is organized as follows. Section 2 gives the relevant background and
related work in manipulator control, including biological, robotics, and animation issues. Section
3 describes Adonis, our humanoid simulation test bed. Section 4 gives a detailed specification of
our task. Section 5 describes a joint-space torque controller and Section 6 describes the joint-space
force-field-based controller. Section 7 contrasts those methods with a Cartesian impedance
controller. Section 8 presents a detailed performance analysis and comparison of the methods along
several criteria including qualitative and quantitative naturalness of appearance and controller use
and flexibility. Section 9 describes our continued work toward a combination approach to articulated
control, and Section 10 concludes the paper.
2. Background and Related Work
Computer animation and robotics are two primary areas of research into motion for artificial agents.
This section briefly reviews each, and then introduces some biological inspiration for the types of
control we will discuss.
2.1. Control in Robotics
In robotics, manipulator control has been largely, but not exclusively, addressed for point-to-point
reaching. Position control of manipulators is a mature area of research offering a variety of standard
techniques. A review of robotics methods can be found in Craig (1989), Paul (1981), and Brady
et al. (1982). Solving the inverse kinematics (IK), or finding the relevant joint angles to obtain a
desired end-point position and orientation for a given manipulator, is a difficult task, especially when
the structure is redundant (Baker & Wampler II 1988). Rather than solving the inverse kinematics
analytically, some techniques linearize the system kinematics about the operating point, using either
the Jacobian (Salisbury 1980), or the inverse Jacobian (Whitney 1969) to achieve position control.
The use of the pseudo-inverse of the Jacobian for redundant systems has also been explored (Klein
& Huang 1983).
Control methods which were originally used for force control, such as hybrid position/force control
(Raibert & Craig 1981), inspired work on stiffness control (Salisbury 1980) and the more general
impedance control (Hogan 1985) which can be used to control the end-point position (see Section 7).
Nearly all of these techniques have been augmented to include models of the robot's dynamics in
order to improve the accuracy of control. The most common example is the computed torque
method, where the inverse dynamics of the manipulator are solved to provide feed-forward torques
during a motion (Luh, Walker & Paul 1980).
In addition, learning methods, using a variety of techniques (neural networks, fuzzy logic, adaptive
control, etc.) have also been explored and continue to be applied to these problems (Atkeson
1989, Schaal & Atkeson 1994, Slotine & Li 1991, Jordan & Rumelhart 1992).
2.2. Control in Computer Animation
In computer graphics, 3D character animation has traditionally been created by hand, but recent-
ly, physical modeling has been used to automatically generate realistic motion. Current techniques
for physical modeling can be classified by their level of automation; some methods minimize user-specified
constraints with an automatic solver while others rely on controllers that require stronger
user intervention. For example, Witkin & Kass (1988) presented a constraint-based approach with
specified start and end conditions that generated motion containing characteristics such as anticipation
and determination. Cohen (1992) extended this approach with higher DOF systems and more
complex constraints. Ngo & Marks (1993) introduced a constraint approach to creating behaviors
automatically using genetic algorithms.
Hand-tuned control of dynamic simulations has been applied successfully to more complex systems
such as articulated full-body human figures. Dynamic simulation has been used to generate
graphical motion by applying dynamics to physically-based models and using forward integration.
Simulation ensures physically plausible motion by enforcing the laws of physics. Pai (1990) simulated
walking gaits, drawing strongly from robotics work. His torso and legs use a controller based on
high-level time-varying constraints. Raibert & Hodgins (1991) demonstrated rigid body dynamic
simulations of legged creatures. Their hand-tuned controllers consist of state machines that cycle
through rule-based constraints to perform different gaits. Hodgins et al. (1995) extended this work
to human characters, suggesting a toolbox of techniques for controlling articulated human-like
systems to generate athletic behaviors such as 3D running, diving, and bicycling. van de Panne
used search techniques to find balancing controllers for human-like character
locomotion, aiming at more automatic control of such simulated agents.
Other methods for generating animation automatically exist as well, including motion capture
and procedural animation, but are not as relevant to the controller work presented here. For a more
complete review of control in computer animation, see Badler, Barsky & Zeltzer (1991).
2.3. Control with Biological Motivation
The flexibility and efficiency of biological motion provides a desirable model for complex agent
control. Our work is inspired by a specific principle derived from evidence in neuroscience. Mussa-Ivaldi
& Giszter (1992), Giszter, Mussa-Ivaldi & Bizzi (1993) and related work on spinalized frogs
and rats suggest the existence of force-field motor primitives that converge to single equilibrium
points and produce high-level behaviors such as reaching and wiping. When a particular field is
activated, the frog's leg executes a behavior and comes to rest at a position that corresponds to
the equilibrium point; when two or more fields are activated, either a linear superposition of the
fields is obtained, or one of the fields dominates (Mussa-Ivaldi, Giszter & Bizzi 1994). This suggests
an elegant organizational principle for motor control, in which entire behaviors are coded with
low-level force-fields, and may be combined into higher-level, more complex behaviors.
The idea of supplying an agent with a collection of basis behaviors or primitives representing
force-fields, and combining those into a general repertoire for complex motion, is very appealing. Our
previous work (Matari'c 1995, Matari'c 1997), inspired by the same biological results, has successfully
applied the idea of basis behaviors to control of planar mobile agents/robots. This paper extends
the notion to agents with more DOF. The work most similar to ours was performed by Williamson
(1996) and Marjanovi'c, Scassellati & Williamson (1996), who presented a controller for reaching
with a 6-DOF robot arm, based on the same biological evidence. The system used superposition to
interpolate between three reaching primitives, and one resting posture.
Another inspiration comes from psychophysical data describing what people fixate on when
observing human movement. Matari'c & Pomplun (1998) and Matari'c & Pomplun (1997) demonstrate
that when presented with videos of human finger, hand, and arm movements, observers focus
on the hand, yet when asked to imitate the movements, subjects are able to reconstruct complete
trajectories (even for unnatural movements involving multiple DOF) in spite of having attended to
the end-point. This could suggest some form of internal models of complete behaviors or primitives
for movement, which effectively encapsulate the details of low-level control. Given an appropriately
designed motor controller, tasks could be specified largely by end-point positions and a few addi-
tional constraints, and the controller could generate the appropriate corresponding postures and
trajectories.
3. Adonis: The Dynamic Humanoid Torso Simulation
Our chosen implementation test bed, Adonis, is a rigid-body simulation of a human torso, with
static graphical legs (Figure 1), consisting of eight rigid links connected with revolute joints of one
and three DOF, totaling 20 DOF. The dynamic model for Adonis was created by using methods
described in Hodgins et al. (1995). Mass and moment-of-inertia information is generated from
the graphical body parts and human density estimates. Equations of motion are calculated using
a commercial solver, SD/Fast (Hollars, Rosenthal & Sherman 1991). The simulation acts under
gravity and accepts other external forces from the environment. No collision detection, with itself or
its environment, or joint limits are used in the described implementations; we have implemented
these extensions in subsequent work.
Figure 1. The Adonis dynamic simulation test bed consisting of eight rigid links connected with revolute joints of one and three DOF, totaling 20 DOF.
Adonis is particularly well-suited for testing and comparing different motor control strategies;
the simulation is fairly stable and the static ground alleviates the need for explicit balance control.
In addition, virtual external forces may be applied to the end-points without explicit calculation
of the inverse kinematics (IK) of the arms. This, in turn, enables us to implement and evaluate
experimental controllers for human-like movement more easily, while having the simulation software
handle the issues of IK and dynamics. The next section introduces the task used to compare different
control approaches on Adonis.
4. Task Specification
Natural, goal-driven movement relies on precise specification and coordination, and realistic con-
straints. As a test task should be challenging to control but familiar enough to evaluate, we chose
the Macarena, a popular dance which involves a sequence of coordinated movements that constitute
natural sub-tasks. We used a verbal description of the Macarena, found on the Web at
http://www.radiopro.com/macarena.html, and aimed at teaching people the dance. Omitting the
hip and whole-body sub-tasks at the end, the description is given in Table I.
Table I. The 12 sub-tasks of the Macarena.
1. Extend right arm straight out, palm down
2. Extend left arm straight out, palm down
3. Rotate right hand (palm up)
4. Rotate left hand (palm up)
5. Touch right hand to top of your left shoulder
6. Touch left hand to top of your right shoulder
7. Touch right hand to the back of your head
8. Touch left hand to the back of your head
9. Touch right hand to the left side of your ribs
10. Touch left hand to the right side of your ribs
11. Move right hand to your right hip
12. Move left hand to your left hip
This description, given as a set of sub-tasks, was used directly as the formal specification of the
task. No task-level planning or sequencing was necessary because the order is provided
by the dance specification. It is interesting that the individual sub-tasks are not specified in a
consistent frame of reference. The first four deal with a defined posture of the whole arm, perhaps
best expressed in joint angles, while the rest define the hand position, and are thus better described
in an ego-centric Cartesian reference frame. As mentioned above (Section 2), people watching
movement do not appear to pay active attention to the whole arm, but rather focus on the hand.
However, hand position alone does not sufficiently constrain the rest of the arm, whose other joints
also require specification; thus a mixture of coordinate frames is needed. This type of heterogeneous
task specification is common in natural language descriptions, and control systems must satisfy each
of the different goals regardless of the underlying representation. To address the issue of controller
representation, we used the same Macarena specification to implement three different alternatives,
described next.
5. The Joint-Space PD-Servo Approach
Joint-space controllers command torques for all actuated joints in a manipulator, and have been used
successfully as low-level controllers to generate behaviors for a variety of systems (Pai 1990, Raibert
& Hodgins 1991, Hodgins et al. 1995, van de Panne & Lamouret 1995). We implemented the
Macarena by calculating the torques for each joint as a function of angular position and velocity
errors between the feedback state and desired state, i.e., by using a hand-tuned PD-servo controller:
\tau = k(\theta_{desired} - \theta_{actual}) + k_d(\dot\theta_{desired} - \dot\theta_{actual})    (1)
where k is the stiffness of the joint, k_d the damping, \theta_{desired}, \dot\theta_{desired} are the desired angles and
velocities for the joints, and \theta_{actual}, \dot\theta_{actual} are the actual angles and velocities.
To generate the Macarena controller, the desired angles used for the feedback error are interpolated
from hand-picked target postures. The postures are derived from the task specification, each
corresponding to one of the 12 sub-tasks enumerated in Section 4 above. Intermediate postures
between sub-tasks were used as via points to help guide the joint trajectories through difficult tran-
sitions. For example, a via point was needed for swinging the hands around the head to prevent
a direct yet unacceptable path through the head. The incremental desired angles use a spline to
smoothly interpolate between the postures and via points. Gains for the PD-servo are chosen by
hand and remain constant through the whole Macarena.
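A minimal sketch of this controller in Python is given below; it is not the authors' implementation, and the cubic-spline interpolation routine, the gain values and the variable names are placeholders of ours.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def desired_trajectory(key_times, key_postures):
    """Spline-interpolate hand-picked target postures (and via points) to
    obtain desired joint angles and velocities at any time t."""
    q = CubicSpline(key_times, np.asarray(key_postures), axis=0)
    return q, q.derivative()

def pd_torque(theta, theta_dot, theta_des, theta_dot_des, k=80.0, kd=8.0):
    """Joint-space PD-servo of Equation (1); k and kd are hand-tuned gains."""
    return k * (theta_des - theta) + kd * (theta_dot_des - theta_dot)

# Inside the simulation loop (theta, theta_dot supplied by the simulator):
# q_des, qd_des = desired_trajectory(key_times, key_postures)
# tau = pd_torque(theta, theta_dot, q_des(t), qd_des(t))
```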
The PD-servo approach allows direct control of each actuated joint in the system, giving the user
local control of the details of each behavior. However, the controller in turn requires a complete
set of desired angles at all times. Specifying that information can be tedious, especially for joints
such as the neck that are less important to the behavior being generated. Interpolating between
postures is a reasonable method for reducing the required amount of information. The control of
actuated joints may be individually modified using their respective desired angles, thus allowing
localized control over the generated motion. All desired postures are specified as a set of angles
in joint-space. In the Macarena, position constraints such as "hands behind the head", can be
satisfied with user-level feedback. However, precise Cartesian space constraints, like "finger on the
tip of the nose", would be difficult to control with hand-tuning using joint-space errors directly.
For these cases an inverse kinematics solver could be used to generate desired angles from position
constraints.
6. The Joint-Space Force-Field Approach
The second implemented controller we describe is a non-linear force-field approach based on the
recent work by Mussa-Ivaldi (1997), inspired by the biological data described in Section 2. In earlier
work, Mussa-Ivaldi & Giszter (1992) showed that a small number of force-field primitives could be
used to generate a wide range of force fields at the frog's foot. By combining the primitives using
superposition, the end-point of a simulated leg could be moved to different parts of the workspace.
However, the actual path taken by the leg under the influence of the field is not straight or natural
looking. Subsequently, Mussa-Ivaldi (1997) showed how combinations of primitives can be used to
move from one point to another in a straight line. In that work, the primitives were weighted using
step and pulse functions: steps to achieve a target position, and pulses to control the trajectory of
the motion.
To apply this approach to the Macarena task, stable joint-space potential fields with single
static equilibrium points are combined to generate control for each sub-task. These primitives are
combined with weighting functions such that step functions move the agent to its sub-task target
position and pulse functions dictate desired trajectories for the arm motion, such as moving the
hand to avoid the head.
Figure 2. Graph showing the difference between the linear and non-linear joint-space controllers. The torque due to the non-linear controllers drops off at high errors.
Each primitive or force-field is specified as a torque-angle relationship at each joint of the arm:
\tau = \phi(t, \theta_{actual}, \dot\theta_{actual})    (2)
where \tau is the joint torque, and \phi is a torque-angle relationship primitive depending on time, the actual
angle \theta_{actual} and its derivative \dot\theta_{actual}. A primitive \phi_i for a particular joint with stiffness k, damping
k_d, and desired angle \theta_{desired}, is calculated as:
\phi_i = -k(\theta_{actual} - \theta_{desired}) e^{-k(\theta_{actual} - \theta_{desired})^2} - k_d \dot\theta_{actual}    (3)
This defines a non-linear relationship, which is the derivative of a Gaussian potential centered at
\theta_{desired}. The non-linear response of this controller is similar to a linear PD-servo for small errors
(\theta_{actual} - \theta_{desired}). However, with large errors, the torque calculated by the primitive drops off
exponentially, as shown in Figure 2. Mussa-Ivaldi & Giszter (1992) suggest that this behavior is
consistent with biological muscle, and that the non-linearity of the controller increases the richness
of behavior that can be produced.
We specified each sub-task of the Macarena with two such non-linear primitives combined to
create the whole motion. The two primitives perform different tasks: the static position, defined
by a force-field \phi_i weighted by a step function w_i(t), and the path between sub-tasks, manipulated
using another force-field \psi_i, itself weighted by a pulse function p_i(t):
\tau = w_i(t)\,\phi_i(t, \theta_{actual}, \dot\theta_{actual}) + p_i(t)\,\psi_i(t, \theta_{actual}, \dot\theta_{actual})    (4)
The step function is defined so that it yields a smooth transition in the control corresponding to movement toward a particular
final posture defined by \theta_{desired}. The pulse function is defined so that it creates a smooth adjustment in the trajectory, allowing separate control of the path taken in
the movement.
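The structure of this controller can be sketched as follows (Python). The sigmoid step and Gaussian pulse shapes, as well as the gains, are our placeholders, since the extracted text does not preserve the exact weighting functions; only the primitive of Equation (3) and the combination of Equation (4) follow the text.

```python
import numpy as np

def primitive(theta, theta_dot, theta_des, k=40.0, kd=4.0):
    """Non-linear primitive of Equation (3): derivative of a Gaussian
    potential centered at theta_des, plus viscous damping."""
    err = theta - theta_des
    return -k * err * np.exp(-k * err ** 2) - kd * theta_dot

def step(t, t_on, rate=10.0):
    """Placeholder smooth step weighting w_i(t) switching on around t_on."""
    return 1.0 / (1.0 + np.exp(-rate * (t - t_on)))

def pulse(t, t_center, width=0.2):
    """Placeholder smooth pulse weighting p_i(t) shaping the path."""
    return np.exp(-((t - t_center) / width) ** 2)

def subtask_torque(t, theta, theta_dot, posture_goal, path_goal):
    """Combination of Equation (4): step-weighted posture field plus
    pulse-weighted path field for one sub-task."""
    return (step(t, t_on=0.0) * primitive(theta, theta_dot, posture_goal)
            + pulse(t, t_center=0.3) * primitive(theta, theta_dot, path_goal))
```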
Our implementation differs from Mussa-Ivaldi (1997) in a number of ways. Mussa-Ivaldi uses a
set of arbitrarily chosen primitives, and solves a least squares optimization problem to determine the
sizes of the steps and pulses. Rather than select arbitrary primitives, we chose ours to correspond
to the positions of the arm at each sub-task, thus simplifying the weighting. This is a pragmatic
decision; it is unclear how well the optimization method scales from the 2 DOF system implemented
in the Mussa-Ivaldi paper, to the full 20 DOF Adonis simulation. Finally, in the Mussa-Ivaldi work
the primitives are defined as a Gaussian potential in the full joint-space, coupling the joints, while
in our implementation they are treated independently. 1
This force-field-based joint-space controller (heretofore referred to as the torque-field controller)
is similar to the PD-servo joint-space controller described in the previous section, in that they both
rely on torque-angle relationships at the joints to determine the arm motion.
1 In the Mussa-Ivaldi formulation the primitive has the form \phi = -k(\theta_{actual} - \theta_{desired})\, e^{-k\sum_{joints}(\theta_{actual} - \theta_{desired})^2} - k_d\dot\theta_{actual}, which couples the joints through the exponential term.
Figure 3. Impedance Control: The virtual force F is computed by attaching a virtual spring and damper from the hand position x to the desired position x_e. The torques at the joints are then calculated to produce this desired force at the end of the arm, and thus move it to the desired position.
The main difference
is that the torque-field approach uses non-linear controllers at the joints, as opposed to the linear
PD-servos. This non-linearity allows the controller to simply switch set-points for a new task, rather
than interpolate as in the linear case, and to use pulse functions to manipulate the trajectory, rather
than define explicit via points.
7. The Cartesian Impedance Control Approach
In contrast to the first two, our third implemented controller acts in the Cartesian frame of reference,
which allows for a more intuitive interface for the user, as the Cartesian position of the hand is easier
to visualize than the angles of all the joints. The approach is based on the principle of impedance
control, introduced by Hogan (1985), which has been applied to robot manipulation. The general principle
is to modulate the mechanical impedance of the end-point of an arm by altering the torques at the
arm's joints. Mechanical impedance for an object is defined as the relationship between an imposed
disturbance and a generated force. For example, a compressed spring exerts a force proportional
to the displacement. The impedance of such a system is constant and equal to the stiffness of
the spring. For a more complicated mechanism like a robot arm, the mechanical impedance is
determined by the control at the joint level. For example, a mechanical arm can be made to appear
as if a virtual spring and damper are connected to some equilibrium point; moving the point will
drag the arm around, and the arm will automatically return to its equilibrium position if disturbed.
Arranging the control of the arm in this way has advantages in terms of stability, especially when
interacting with different environments (Colgate & Hogan 1988).
Our impedance controller calculates the force F from the virtual spring and damper, as illustrated
in Figure 3, given by:
F = K(x_{desired} - x_{actual}) + B(\dot{x}_{desired} - \dot{x}_{actual})    (7)
where x_{actual} is the 6-D vector defining the position and orientation of the end-point (hand) in
space, \dot{x}_{actual} is a vector of velocities, and x_{desired} and \dot{x}_{desired} are 6-D vectors of desired
positions/orientations and velocities. K and B are stiffness and damping matrices. This desired force is
implemented by applying torques at the joints, which are calculated using the Jacobian J(\theta_{actual}),
using the following simple relation (Craig 1989):
\tau = J(\theta_{actual})^T F    (8)
Applying this equation results in stable control of the position and orientation of the hand over the
workspace of the arms. However, it does not constrain the final orientation of the whole arm, or
prevent the arm from violating joint limits or moving through the body. To further constrain the
arm, a second impedance controller was added to control the elbow motion. This allows the positions
of the elbow and the hand to be moved, which is an intuitively sensible method of constraining the
arm motion. Experiments showed that the best way to control the elbow was to specify a desired
orientation for the upper arm, rather than specifying the elbow position. 2 The control is calculated
in a similar manner to Equation 8, although the Jacobian is defined for the transformations between
the elbow and 3D shoulder joint, and the force F is only due to desired rotations. Other terms
added to the impedance control include compensation for the effect of gravity on the links of the
arms (g(' actual )), and some extra damping at the shoulder joint (b shoulder ), making the final torque
applied to the joints:
\tau = J_{hand}^T F_{hand} + J_{elbow}^T F_{elbow} + g(\theta_{actual}) + b_{shoulder}    (9)
To perform each sub-task of the Macarena, we specify the desired position and orientation of the
hand, and the desired orientation of the upper arm. The control scheme then calculates the torques
at the joints in order to move the arm to that position, and maintain it there. Low-level PD-servos, as
described previously, control the waist and neck. To move between sub-tasks, a linear interpolation
scheme is used to gradually shift the desired positions. As with the PD-servo controller, extra via
points are used to avoid collisions with the head.
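The core of the impedance controller, a Cartesian spring-damper mapped to joint torques through the Jacobian transpose as in Equations (7)-(9), can be sketched as follows. This is our illustration; the Jacobians, gain matrices and gravity term are assumed to be supplied by the simulation environment.

```python
import numpy as np

def impedance_force(x, x_dot, x_des, x_dot_des, K, B):
    """Equation (7): virtual spring-damper wrench on the end-point.
    x and x_des are 6-D position/orientation vectors; K and B are 6x6."""
    return K @ (x_des - x) + B @ (x_dot_des - x_dot)

def joint_torques(J_hand, F_hand, J_elbow, F_elbow, gravity, shoulder_damping):
    """Equations (8)-(9): map the hand and elbow wrenches to joint torques
    via the Jacobian transpose, then add gravity compensation and the
    extra shoulder damping term."""
    return (J_hand.T @ F_hand + J_elbow.T @ F_elbow
            + gravity + shoulder_damping)
```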
The method has several advantages over position control techniques using inverse kinematics
(Baker & Wampler II 1988). It is computationally simple, requiring only the forward kinematics
and the Jacobian (Whitney 1982), and it is stable both when moving freely, and during contact
with surfaces (Hogan 1985). In addition, the general formulation of impedance control provides a
simple merging mechanism for different control strategies (Beccari & Stramigioli 1998).
The main difficulty encountered when implementing this scheme was finding a compact and
intuitive way to specify the orientations of the elbow and hand. The orientation of the hand was
specified using a single angle relative to the lower arm, while the orientation of the upper arm
was specified by aligning the x-axis of the segment with a desired vector. In addition, the scheme
produces straight-line motions of the hand which are not always the most natural. For example,
when moving the hand from straight out (sub-task 3) to touching the shoulder (sub-task 5), the most
natural motion is for the hand to come up and over, rather than moving directly in a straight line.
A curved solution is possible with this controller, but would require a more detailed specification
of the desired trajectory.
As an alternative to impedance control, the simulation system allows arbitrary forces to be
applied to the end-point of the arm. Thus a force could be calculated as in Equation 7, and directly
applied to the hand. A variant of this approach was experimented with, applying the following
force:
F = c\,(v_{desired} - v_{actual})\,|x_{actual} - x_{desired}|    (10)
where v_{desired} is the desired velocity, v_{actual} is the actual velocity, x is defined as above, and c is a gain
constant. For carefully chosen values of c, this controller has the effect of moving the hand to the
desired Cartesian position x desired . Although simpler to implement than the impedance controller,
this controller has a number of disadvantages. Since the force is only applied at the hand, high
damping has to be used to constrain the rest of the arm, which results in unnatural motion. The
2 This is due to the fact that under impedance control, the arm moved under the influence of the applied virtual
springs and dampers at the hand and elbow. The effect of two forces on the arm can be unintuitive for arbitrary
positioning of the set-points. Specifying the orientation of the upper arm, as well as the position and orientation of
the hand, makes the system much more predictable and easy to operate.
Figure 4. An example of Adonis performing the Macarena, shown as a series of snap-shots, in this case using the
joint-space torque PD-servo controller.
impedance controller was also found to be less sensitive to singular configurations of the arms (such
as in sub-task 1, where the arm is straight). For these reasons, we chose not to use this final control
method for evaluation; for more details on this implementation, see Matari'c, Zordan & Mason
(1998b).
8. Performance Analysis and Comparisons
Analysis and evaluation of complex behavior is an open research challenge. As synthetic behaviors
for agents in animation, robotics, and AI become more complex, the issue of analysis becomes
increasingly acute. In this section, we explore several evaluation criteria, both qualitative and
quantitative, and make observations about differences between the different controllers performing
the same task, consistencies from task to task for a single controller, and similarities between human
and synthetic motion.
8.1. Naturalness of Movement: Qualitative
Judging the naturalness of movement is an important aspect of both robotic and animation eval-
uation, but aesthetic judgment is difficult to quantify. Qualitative judgments of motion require
real-time playbacks of recorded behaviors; for the three controllers we implemented, those are
available from: http://www-robotics.usc.edu/~agents/macarena.html
Figure 5 shows a time-lapse image for sub-task 10 with the goal of facilitating a qualitative
comparison of the arm trajectory generated by each of the three controllers. The impedance controller
is shown on the left, torque-field controller in the middle, and the PD-servo controller on the
right. While the beginning and end postures are very similar for all three, and all paths are valid
Figure 5. A time-lapse image of sub-task 10, showing the trajectories the hand takes using the different controllers:
impedance on the left, torque-field in the middle, and PD-servo on the right.
in that they avoid body collisions and unnatural postures, the paths themselves vary significantly.
The motion generated by the PD-servo is smooth but contains an exaggerated curve, due to the
joint-space spline interpolation between the chosen via points. The torque-field movement is also
smooth, resulting from the Gaussian controllers. In contrast, the impedance controller motion is
more jerky because its set-point moves along straight lines.
Many differences between human movement and that of our simulated agents are due to the
underlying dynamics of our chosen test bed; the qualitative features caused by the limitations of
the dynamic simulation must be separated from those dictated by the underlying controller. Rigid
body simulation imposes limitations that cannot be overcome by control. For instance, Adonis's unactuated
spine necessarily appears stiff. Furthermore, dynamic simulation constrains motion to be
physically plausible but not necessarily natural. For example, since the simulation does not constrain
joint limits or avoid collisions, the controllers must handle these limitations directly. Because the
controllers have no knowledge of body boundaries, avoiding self-collisions was accomplished through
the user's choice of desired positions and/or angles, resulting in conservative, unnatural trajectories.
This can be improved with direct collision prediction and avoidance, as well as by built-in joint
limits. In contrast to limitations caused by the simulation, some qualitative differences are caused
by the controllers directly. For example, the joint-space torque method interpolated postures with
splines to smooth the resulting motion. It also included small head and hand movements that
produce more natural appearance for the overall motion.
Qualitative differences between controllers are often aesthetic, and thus difficult to quantify.
Some metrics, such as comfort, can be applied, but even those vary under different dynamics
and involve some observer/performer bias. To avoid this problem, the next section addresses two
approaches to a more quantitative evaluation of the controllers.
8.2. Naturalness of Movement: Quantitative
The whole arm path, analyzed qualitatively in the previous section, is still too complex to easily
compare in a quantitative fashion without introducing external metrics. To focus, we consider only
the end-effector motion, particularly the velocity and jerk of the dominant or active hand during
individual sub-tasks. As a base-case or control in this analysis, we use hand positions recorded from
a human performing the Macarena.
8.2.1. Comparison of End-Effector Speed
Figure 6. A comparison of the hand velocity profiles in four sub-tasks: sub-task 2 (extending the arm to straight out), sub-task 6 (moving from straight out to touching the shoulder), sub-task 8 (moving from shoulder to the back of the head), and sub-task 10 (moving from the back of the head to the ribs), and human data. Each panel plots hand speed (m/s) against time for the human, torque-field, PD-servo, and impedance controllers.
Hand position data of a person performing the Macarena were recorded with a commercial Flock
of Birds electro-magnetic tracking system and used to compute the hand velocities. These are
compared to the velocities of the three controllers we implemented; Figure 6 shows the velocities
for the analyzed controllers and for a human performing the dance.
To evaluate an individual controller performing a given sub-task, we consider the overall shape
and smoothness of the velocity profile as well as the peak speed. Since the human motion data was
recorded at fairly low variable sample rates (about 5 samples/sec), it produces stair-step velocity
profiles; we assume the effect would be smoothed with higher frequency samples. An analysis of
peak velocities shows that the joint-space PD-servo torque controller generated unnaturally fast
hand movements while the other two controllers more closely matched the human peak speeds. In
contrast, the same controller generated the smoothest and most symmetric hand profiles; natural
human movement has been categorized as having such symmetric properties (Morasso 1981, Atkeson
& Hollerbach 1985). Furthermore, in the movements not requiring collision avoidance (sub-tasks 2
and 6), the impedance controller produced motion that closely matches the shape of the human
velocity profile.
Differences in hand movements from task to task indicate how a controller performs over a
variety of sub-tasks and suggest the potential generality of that controller for use in new tasks.
Task variability exercises the controller by forcing it to perform in a variety of conditions. Notably,
sub-tasks 8 and 10 require more sophisticated paths in order to avoid head/arm collisions. The PD-
servo and impedance controllers use via points to avoid this collision. The effect of these postures
can be seen most dramatically in the speed profile for sub-task 8, noting the change in speed
corresponding to the posture change at about 0.5 seconds. However, the torque-field controller uses
an initial pulse to control the overall trajectory and it remains more consistent across these tasks.
Although the via points help achieve the goal of collision avoidance, the resulting velocity profiles
indicate the need for a more sophisticated approach.
8.2.2. Comparison of End-Effector Jerk
Minimal jerk of hand position has been proposed by Flash & Hogan (1985) as a metric for describing
human arm movements in the plane. Inspired by their work in planar motion, we propose a 3D
evaluation metric, according to the following index:
J = \int \left[ \left( \frac{\partial^3 x}{\partial t^3} \right)^2 + \left( \frac{\partial^3 y}{\partial t^3} \right)^2 + \left( \frac{\partial^3 z}{\partial t^3} \right)^2 \right] dt    (11)
where \partial^3 x / \partial t^3 is the third derivative of the x position with respect to time, and similarly for y and z. We chose jerk as
an evaluation metric over other measures such as minimum torque change (Uno, Kawato & Suzuki 1989)
or energy (Nelson 1983), because it is much easier to record from a human subject and is
also a good measure of smoothness.
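From sampled hand positions, the index of Equation (11) can be approximated with finite differences; the following sketch is ours (uniform sampling at interval dt is assumed) and was not used in the paper.

```python
import numpy as np

def squared_jerk(positions, dt):
    """Approximate the index of Equation (11): integral of the squared
    third time derivatives of the x, y, z hand positions.
    positions: array of shape (n_samples, 3), sampled every dt seconds."""
    third = np.diff(positions, n=3, axis=0) / dt ** 3   # finite-difference jerk
    return float(np.sum(third ** 2) * dt)               # Riemann-sum time integral
```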
Figure 7. A comparison of the jerk values for the different controllers (PD-servo, torque-field, and impedance), and
for human data. The lines connecting the data points do not correspond to actual data, since the sub-tasks are
calculated independently, and map to left and right hand movements.
The calculated jerk values of the three different controllers and the human data are shown
in the graph (Figure 7) corresponding to the square jerk for the active hand (e.g., in sub-task
1 the right arm, in sub-task 2 the left arm, and so on) over the length of the task. We do not
expect a correspondence between the controllers and the human jerk values, but instead focus on
trends across sub-tasks. As expected, movements that involve collision avoidance with the head
(i.e., sub-tasks 7 through 10) have high jerk values overall, reflecting their complexity. Since jerk is
based solely on Cartesian movement, it is low for movements that are primarily specified by joint
constraints (i.e., sub-tasks 3 and 4 which command "turn the hand palm up"). Finally, low jerk
also results from movements over short distances between Cartesian goals (i.e., sub-tasks 11 and
12, moving the hand from one hip to the other).
Jerk is a sensitive measure that varies strongly from task to task and from controller to controller,
thus the log scale. Furthermore, the motion-capture system used to gather human data
can suffer from marker slippage, adding further noise into the evaluation. We made no effort to
create correspondence between the paths taken by the human and the different controllers, and
thus variability in arm path is unaccounted for. Finally, timing has an effect on the jerk; slower
movements have less jerk than faster ones. The movements shown do not all have the same timing
and, although we tried various methods to normalize according to the timing, the data shown do
not account for these differences explicitly, i.e., are not normalized. Therefore, the exact values in
this graph are less reliable than the general trends they indicate, and it is remarkable to see the
obvious correlations between the different data sets.
8.3. Controller Use and Flexibility
In addition to evaluating the success of the controllers in creating a life-like Macarena, we have
also evaluated the controllers from the user's point of view. In this section we consider issues such
as the amount of information required by each controller, the ease with which that information is
input to the simulation, the simplicity with which the final motion is tuned for the various cases,
and the actual computational complexity of the controllers themselves.
Once the gains and other constants have been fixed, there is not a great difference in the amount
of information required by the three different controllers. The torque-field controller has the lowest
overhead, requiring 14 values per arm per sub-task (7 for the step function, and 7 for the pulse).
The PD-servo controller requires only 7 values per arm, but these need to be input at every time-step
of the simulation, thus calling for an extra interpolation routine. The impedance controller
also requires 7 values, including the hand position, orientation and the elbow orientation; like the
PD-servo, it also uses an interpolation routine.
Rather more important than the number of parameters needed to specify a particular position is
the ease with which that information is determined. For the PD-servo and torque-field controllers,
this information is input in joint-space, so the user needs to solve the inverse kinematics of the arm
manually, usually by trying different angles and adjusting. This is straight-forward if a little tedious,
due to the fact that the joints are in an articulated chain, making the effect of any one joint on the
arm motion dependent on the angles of all the others. The impedance controller works in Cartesian
space, which makes the specification of hand positions much easier. Specifying the orientations of
the elbow and hand is slightly more difficult, however, mainly due to the awkwardness of specifying
three-dimensional rotations. This illustrates the fundamental tradeoff between the two types of
control; the joint-space controllers are awkward to use but have explicit control over all the joints,
while the Cartesian space controller is easier to use, but has less control over the individual degrees
of freedom.
A third factor is the influence of the dynamics of the arm. While dancing the Macarena, the
arm is moving quickly enough for dynamics to be important, making the choice of set-points,
and particularly via points, quite important. For the torque-field controller, the pulse torque-field
requires hand-tuning to create the motion, while for the other controllers, the positions of the via
points requires hand-tuning. Since the motion of the arm is not wholly determined by the positions
of these points, it is difficult to map from an error in the arm path to changes in a specific parameter.
This difficulty is apparent in both reference frames, for the same reasons as described previously.
A final evaluation can be made in terms of the complexity of the implementation. The most
computationally simple controller is the PD-servo method, followed closely by the torque-field
controller. The impedance controller is considerably more complex, requiring a 7-by-6 Jacobian matrix
for the hand and a second Jacobian for the elbow to be calculated at each time-step, as well as numerous vector operations
for gravity compensation. However, this is still considerably less complex than any explicit inverse
kinematics algorithm. The increased complexity of the impedance controller presents a trade-off in
return for the ease of specifying positions in Cartesian space.
9. Continuing Work: The Combination Approach
The three controller implementations we presented all involve unavoidable tradeoffs, because each
uses only a single, consistent approach to generating movement. However, different reference frames
appear even in the simplest task specifications, resulting in unnatural and challenging transformations
between the specification and the implementation. From the stand-point of the user, as well as
the appearance of the final synthesized behavior, it would be preferable to have a means of flexibly
combining the different control alternatives, so as to always utilize the approach most suited for a
given task or sub-task. We are currently working on developing just such an approach to control.
Our approach is implemented within the behavior-based framework (Matari'c 1997, Brooks 1991),
which uses behaviors as abstractions for encapsulating low-level control details within each prim-
itive. Consequently, we can implement generic primitives such as get-posture and go-to-point and
parameterize them with the specific goals of each sub-task, as it is assigned. One of the benefits of
the behavior decomposition is not only that there are different ways of structuring a given system
(i.e., different types of controllers), but also that once a behavior decomposition is achieved, the specific
behavior controllers can themselves vary, depending on the available sensors and effectors. For
example, get-posture can be implemented with different types of joint-space controllers, and, anal-
ogously, go-to-point can use different Cartesian controllers, if desired. Furthermore, other behavior
types can be added, such as oscillator-based primitives for movements such as bouncing, waving,
swinging, etc. (Williamson 1998).
In an early demonstration of this approach, Matari'c, Williamson, Demiris & Mohan (1998a)
employed the notion of different types of motor primitives as behaviors to generate the same
Macarena sub-tasks. There, the sub-tasks were assigned different types of controllers: PD-servo
joint-space control for posture-related sub-tasks (such as sub-tasks 1 through 4), and impedance
Cartesian control, for extrinsic or body-centered movements (such as sub-tasks 5 through 12). Our
implementation executed each sub-task sequentially, thus eliminating interference between the different
controllers. Besides sequencing, however, behaviors/primitives can also be co-activated, i.e.,
executed in parallel. For example, our implementation included an avoid-collisions primitive executed
concurrently with any get-posture or go-to-point primitive, in order to generate safe, collision-free
movement. Concurrent behavior combination is more complex than sequencing, however, and
requires consistent output representations between the controllers being combined (Matari'c 1997).
Using different types of primitives assumes that either the user or some intelligent automated
method can subdivide the overall task into sub-tasks, and assign those to the most appropriate types
of behaviors/primitives. We believe that these are not unreasonable assumptions. Human-generated
specifications are typically sequential and presented in a step-wise fashion. Sub-task breaks can also
be generated directly from observing movement, such as for example using zero-velocity breaks
for each end-point. Automatically assigning sub-tasks to primitives is more complex; it could be
coarsely approximated using parsing and key-word search of the textual task specification, which
provides strong hints in the form of references to body parts and joints.
In such a combination control system, individual behaviors may utilize different representations,
coordinate frames, and underlying computation, but their use and performance can be seamlessly
integrated by sequencing and co-activation. An effective means of encapsulating generic behaviors
would also allow the integration of control schemes from different users. As complex articulated
agents become more prevalent, such a modular approach to control could use its "open architecture"
to combine the advantages of various successful approaches.
10. Conclusion
We have compared a set of three approaches for control of anthropomorphic agents, including PD-
servo control, torque-field control, and impedance control, implemented on the same dynamic torso
simulation, Adonis, and tested on the same Macarena sub-tasks. We compared the three controllers
against one another and against human data, using qualitative and quantitative metrics, including
naturalness of appearance, hand velocity and jerk, and controller use and flexibility.
To facilitate a realistic comparison, the controllers and the human data were generated indepen-
dently. However, various techniques can be implemented to generate a closer fit between the data,
if that is desired. Specifically, human hand positions could be used to select goal positions for the
impedance controller. Similarly, an IK solver could be used to compute postures for the joint-space
controllers that achieve these hand positions. Timing taken from human motion could be used
to generate simulated motion that more closely fits the human performance. Lastly, minimization
techniques could be applied to the controller parameters to find movements that minimize jerk
and/or match other performance metrics.
The fundamental tradeoff between believability and control effort still remains, as the approaches
produce different results depending on sub-task specification. In order to address these tradeoffs,
we proposed a combination framework which allows different types of movement primitives (under
different reference frames and representations) to be used for different types of sub-tasks, in order
to maximize the match between the description of the task and the controller that achieves it.
Acknowledgments
This work is supported by the NSF Career Grant IRI-9624237 to M. Matari'c. The authors thank
Nancy Pollard for help with the jerk calculations, Stefan Schaal and Jessica Hodgins for sharing
expertise and providing many insightful comments. The Adonis simulation was developed by Jessica
Hodgins at Georgia Institute of Technology.
--R
Making Them Move: Mechanics
Impedance Control as Merging Mechanism for a Behaviour-Based Architecture
Robot Motion: Planning and Control
Intelligence Without Reason
Interactive Spacetime Control for Animation
Introduction to Robotics: Mechanics and Control
'Convergent force fields organized in the frog's spinal cord'
SD/Fast User's Manual
What do People Look at When Watching Human Movement?
Nonlinear force Fields: A Distributed System of Control Primitives for Representing and Learning Movements
Spacetime Constraints Revisited
Programming Anthropoid Walking: Control and Simulation
Robot Manipulators: Mathematics
Animation of Dynamic Legged Locomotion
Active Stiffness Control of a Manipulator in Cartesian Coordinates
Learning from demonstration
Applied nonlinear control
Guided Optimization for Balanced Locomotion
The mathematics of coordinated control of prosthetic arms and manipulators
Postural Primitives: Interactive Behavior for a Humanoid Robot Arm
Rhythmic robot control using oscillators
Spacetime Constraints
--TR
--CTR
Maja J. Mataric, Getting Humanoids to Move and Imitate, IEEE Intelligent Systems, v.15 n.4, p.18-24, July 2000
Z. M. Ruttkay , D. Reidsma , A. Nijholt, Human computing, virtual humans and artificial imperfection, Proceedings of the 8th international conference on Multimodal interfaces, November 02-04, 2006, Banff, Alberta, Canada
Michael Neff , Eugene Fiume, Modeling tension and relaxation for computer animation, Proceedings of the 2002 ACM SIGGRAPH/Eurographics symposium on Computer animation, July 21-22, 2002, San Antonio, Texas
Ajo Fod , Maja J. Matari , Odest Chadwicke Jenkins, Automated Derivation of Primitives for Movement Classification, Autonomous Robots, v.12 n.1, p.39-54, January 2002
Aude Billard , Maja J. Matari, A biologically inspired robotic model for learning by imitation, Proceedings of the fourth international conference on Autonomous agents, p.373-380, June 03-07, 2000, Barcelona, Spain
Maja J. Mataric, Sensory-motor primitives as a basis for imitation: linking perception to action and biology to robotics, Imitation in animals and artifacts, MIT Press, Cambridge, MA, 2002
David A. Forsyth , Okan Arikan , Leslie Ikemoto , James O'Brien , Deva Ramanan, Computational studies of human motion: part 1, tracking and motion synthesis, Foundations and Trends in Computer Graphics and Vision, v.1 n.2, p.77-254, July 2006 | animation;motor control;robotics;articulated agent control |
608634 | Defining Open Software Architectures for Customized Remote Execution of Web Agents. | Agent-based solutions promise to ameliorate Web services, by promoting the modular construction of Web servers, relieving the network from transferring useless data, supporting user mobility, etc. However, existing Web servers do not favor the hosting of agents. This paper proposes a description of agent behavior in terms of its requirements regarding resource utilization (e.g. memory, and disk space), functional services (e.g. system calls), and non-functional properties (e.g. degree of replication, and access control). When formally expressed, these requirements can be used in an automated decision process, which is based on software specification matching techniques. Upon the acceptance of an agent, the host uses these requirements to construct an environment customized to agent's execution. We discuss the benefits of this approach, and how it can be used to promote existing agent-based solutions in the Web framework. | Introduction
Since its appearance in the early 90's, the Web has evolved greatly.
While it was initially intended for the transfer of hypertext documents,
its success and world-wide acceptance led the focus of research interest
into other directions like distributed computing, electronic commerce,
etc. However, the Web's architecture was not designed for such intensive
use. Consequently, today the Web is
confronted with a number of problems ranging from server saturation
and heavy network traffic, to access control and verification of client
requests. One of the most prominent solutions for coping with those
problems is the use of software agents, a concept introduced to distributed
systems from the field of artificial intelligence [1]. In common
sense, an agent can be anyone who acts on behalf of or in the interest
of somebody else. Consequently, a software agent is a piece of code
endowed with capabilities that allow it to perform some task in the
place of a person or some other piece of code. In the context of this
document we use the term agent to signify an autonomous piece of code
with mobile characteristics, that can be executed independently from
its originator.
Agents permit computations to be executed close to the data, which
results in a decrease of both the network bandwidth utilization and
the response time. By relaxing the restrictions of permanent location
of computations, they promote distribution of workload among the
components of a distributed system. Finally, agents support the construction
of modular servers capable of adapting to a large variety of
client requests. On the other hand, agents raise major problems for
their potential hosts. Such problems are related to the host's capabilities
to provide all the indispensable primitives for supporting an agent's
execution, and to the compatibility of agent's requirements with host's
policies. These execution primitives and policies are the execution properties
that fully describe the agent's behavior in the host environment.
Accepting an agent without supporting all required execution properties
might not be acceptable for the agent's originator, while accepting
an agent whose execution properties are in conflict with host's policies
might not be acceptable for the host. Hence, before accepting an agent,
the host system should be informed of agent's properties, analyze them
and decide whether or not the given agent can be properly hosted.
This paper presents a framework in which a host decides on the
acceptance of an agent based on the explicit description of agent's
execution properties. Based on means provided by the software architecture
field, we describe an agent's execution properties as an
open architecture, which abstractly but unambiguously characterizes
the host environment expected by the agent. Using this architectural
description as the agent's specification, the host is able to detect conflicts
with its own policies and to decide whether or not to accept the
agent. In case of agent acceptance, the host also has the information to
customize the execution environment to meet the agent's requirements.
The remainder of the paper is structured as follows: next section
gives an overview of the results in the field of software architecture,
which we use as a basis for our proposal. Section 3 addresses
the description of open software architectures so as to characterize Web
agents with respect to the execution properties that are expected from
the host environment. The instantiation of a host environment customized
to a given agent, is further addressed in Section 4. Related work
is discussed in Section 5, and we conclude in Section 6 by summarizing
our contribution.
2. Software Architecture
Research in the software architecture domain aims at reducing costs
of developing complex software systems [18, 21]. Towards that goal,
formal notations are being provided to describe software architectures,
replacing their usual informal description in terms of box-and-line di-
agrams. These notations are generically referred to as Architecture
Description Languages (Adls). An Adl allows the developer to describe
the gross organization of a system in terms of coarse-grained
architectural elements, abstracting away their implementation details.
Prominent elements of a software architecture are subdivided into the
following categories:
Components, that characterize a unit of computation or a data
store.
Connectors, that characterize a unit of interaction.
Configurations, that describe a specific, possibly generic, software
architecture through the composition of a set of components via
connectors.
Existing Adls differ depending on the software architecture aspects
to which they are targeted. We identify at least two research directions,
which define different Adls: (i) the architecture analysis,
which provides the formal specification of an architecture's behavior
(e.g. [14, 10]), and (ii) the architecture implementation, which delivers
the implementation of an application from its architectural description
(e.g. [20, 8]). Both research directions instigate the design and implementation
of CASE tools, based on technologies such as model
checking, theorem proving, and type checking [12].
2.1. Example
To illustrate the use of Adls, we take as an example a primitive Distributed
File System (Dfs). The Dfs is composed of a client interacting
with a (possibly distributed) file server for performing file accesses.
We further assume that the interaction protocol between components
is Rpc-like. Using a simplified version of the Adl used in the Aster
project, the Dfs architecture is given as a configuration made of the
client and file server components, which are bound together using a con-
describing an Rpc protocol. Figure 1 contains the corresponding
declarations.
component client =
  port
    client (typeFormat format);
    server (typeFormat format);
  functional
    typeInt close (typeDsc fd);
  interaction
    open, close, read, write: client;
    key, state: server;
  non-functional
    key: Authentication;
    state: CheckPoint;

component fileServer =
  port
    server (typeFormat format);
  functional
    typeInt close (typeDsc fd);
    read (typeDsc fd, ...);
  interaction
    open, close, read, write: server;
  non-functional
    read, write: FailureAtomicity;

connector RPC =
  role
    client (typeFormat format);
    server (typeFormat format);
  port
    client (typeFormat format);
  functional
  interaction
    state: client;
  non-functional
    Secure, Reliable;

configuration DFS =
  components
    C: client; FS: fileServer;
  connectors
    com: RPC;
  binding
    functional
      C.open: FS.open; C.close: FS.close;
      C.read: FS.read; C.write: FS.write;
      com.key: C.key;
      com.state: C.state;
    interaction
      C.client: com.client;
      com.server: FS.server;

Figure 1. The architectural description of the Dfs.
The architectural elements described in Figure 1 define the gross organization
of the Dfs in an abstract manner, and give the associated
execution properties. More precisely, the execution properties of the
Dfs are subdivided into:
Functional properties, which define the operations that are provided
and called by the architectural elements. The fact that a
functional property is either provided or called by the element
declaring it is given by the semantics of the port through which
the corresponding interaction is performed. For instance, the client
component calls the open() operation of the fileServer component.
Non-functional properties, which characterize the resource management
policies that are provided by the architectural elements.
For instance, the state() and key() operations of the client component
provide the CheckPoint and Authentication properties
respectively; the RPC connector provides Reliable and Secure
communications for all the interactions using it.
Interaction properties, which characterize the communication protocols
that are used for performing the interactions among com-
ponents. In the Dfs example, interactions are achieved using an
Rpc protocol.
Until this point, we have considered the declaration of execution properties
in terms of operation signatures and "names" associated to the
declared operations (e.g. key: Authentication). The type-checking process
typically employed in distributed programming environments to
verify the correctness of bindings among declared operations is based on
pattern matching. However, using pattern matching techniques to verify
the satisfaction of the execution properties requirements placed on
bindings relies on an informal description of the execution properties.
Obviously, we cannot rely on pattern matching performed on "names"
describing the execution properties (e.g. Authentication, RPC, etc.) to
verify the correctness of bindings; a slightly different interpretation of
the same "name" by each of the interacting sides may cause communication
problems difficult to track down and resolve. To overcome this
problem, we associate to a "name" a set of formal specifications that
serve as its definition. This allows both clean interface declarations
and execution properties requirements resolution based on specification
matching. The next subsection gives an overview of work done
in the area of formally specifying execution properties. For the sake
of conciseness, we do not give examples of formal specications in the
following. The interested reader may refer to [7] for more details on this
topic.
2.2. Formal specifications of execution properties
Formal specification of functional properties amounts to specifying the
behavior of the operations required and provided by the architectural
elements. The behavior of operations is specified in terms of pre- and
post-conditions using Hoare's logic (e.g. [17, 23]). In addition to the
straightforward benefit of formally specifying functional properties for
verifying the correctness of component interconnection, it further favors
software reuse and evolution. A component may be retrieved
from a component database using the component specification. The
correctness of component substitution within a configuration can be
checked with respect to the specifications of involved components.
The aforementioned verifications lie in the definition of relations over
specifications. These relations define, in terms of specification matching,
correctness conditions for software interconnection, re-use, and
substitution [17, 13, 23].
A non-functional property characterizes a resource management policy
that is implemented by the underlying execution platform. Similar
to functional properties, non-functional ones are specified in terms
of first-order logic (e.g. [9]). These specifications refer to operations
that are not explicitly stated in the configuration description of a
software system, but which are provided by the execution platform
or the middleware in a way transparent to the software system. This
allows us to specify any non-functional property in terms of a unique
predicate instead of pre- and post-conditions. Practically, the formal
specification of non-functional properties provides for the verification,
with respect to the declared non-functional properties, of the correctness
of the bindings among components. In addition, it enables
the systematic customization of middleware with respect to properties
required by the architectural elements (e.g. [8, 24]). Briefly stated, middleware
components providing non-functional properties are retrieved
in a systematic way through specification matching between required
and provided properties. The retrieved components are integrated with
the application components using base connectors.
Formal specification of interaction properties has been examined in
[2], where a Csp-like notation is introduced for the formal specification
of the behaviors of components and connectors with respect to their
communication patterns. This allows the correctness verification of a
configuration with respect to the communication protocols that are
used, using the notion of refinement given in Csp. In that framework,
a component description embeds a set of port processes that are the
component interaction points, and a coordination process defining the
coordination among ports. The behavior of a component is described
by the parallel composition of port and coordination processes. Similarly,
a connector is defined in terms of a set of role processes, which
realize communications among components, and a coordination process
that specifies the coordination among roles. The behavior of a connector
is described by the parallel composition of role and coordination
processes.
3. Web Agent in an Open Software Architecture
A Web agent defines a software component, which interacts with
components from the host environment. Each of these interactions is
characterized by the functional, non-functional, and interaction properties
associated to it. Thus, a software agent can be specied inside an
open software architecture, which denes a conguration made of the
agent component and the set of open components and connectors that
must be provided by the host environment. Then, a host may safely
accept and execute an agent if the former's capabilities cover the latter's
requirements agent's requirements. The agent's requirements on the
host environment can be precisely dened by providing the following
information for the elements of the agent's open architecture.
The agent component defines:
the functional properties it expects from the host environment
together with the associated interaction properties, and
the functional, interaction, and non-functional properties it
provides.
Each connector characterizes the interaction and non-functional
properties that are to be made available by the host environment
for communication among the agent and the host's components.
Each open component characterizes a software component with
which the agent is willing to interact.
In other words, the agent component abstractly defines the agent's
behavior by exposing how it interfaces with the host. The open components
and connectors provide a description of the execution properties
that should be provided by the host environment.
agent component X =
  { Same as client in Figure 1 }
open component hostFileServer =
  { Same as fileServer in Figure 1 }
connector service =
  { Same as RPC in Figure 1 }
components agent: X; FS: hostFileServer;
connectors Svce: service;
binding { Same as DFS binding in Figure 1 }
Figure 2. Description of an agent accessing a file server in the host environment.
3.1. Example
For illustration, let us give the definition of an open architecture for
an agent that accesses a file server in the host system. Figure 2 gives
a description of the resulting architecture, which is close to the Dfs
architecture discussed in Subsection 2.1, except that the client component
now corresponds to the agent. Given this description of the
open architecture, the host can safely accept and execute the agent, if
the host is able to instantiate the open hostFileServer component and
the service connector, without violating its own security policies.
3.2. Interpreting agent specifications
So far we have been arguing that architectural descriptions like the
one given in Figure 2 provide sufficient information about an agent's
requirements and a host's guarantees. Based on this information, one
can decide whether an agent conforms with some host's policies, and
whether a host can support all the requirements of some agent. To make
practical use of this approach, we need to associate agents and hosts
with interfaces that describe their execution properties. In addition,
hosts must be provided with a framework that allows them to interpret
such interfaces. This framework should support the analysis of agent's
execution properties, and the reasoning on their combination with
host's policies (e.g. see [3] for a study on the combination of properties
describing security policies).
Ideally, all three types of execution properties (i.e. functional, non-
functional, and interaction properties) should be described using formal
specifications. However, common practice has shown that informal
specifications provide sufficient guarantees for correct reasoning on
functional properties, like in the case of Omg's Object Transaction Service
(see chapter 16 in [6]). In a similar manner, informal specification
has been shown sufficient for interaction properties, due to the well-known
and widely accepted interpretations of common communication
protocols. In contrast to the above, formal specification is necessary
for non-functional properties, since there does not exist a precise common
understanding of what they represent. Consider for example the
case of an Rpc system: designers have a common interpretation of
the client-server interaction, but the interpretation of the associated
non-functional properties, like the at-most-once failure semantics, often
differs.
Given the architectural description that serves as the specification
of an agent, the analysis and reasoning that should be performed by a
host prior to the agent's acceptance are formally defined as follows:

∀ C_a ∈ O(Agent): ∃ C_h ∈ C(Host, Agent):
    MatchPorts(P(C_a), P(C_h)) ∧ MatchFunc(C_a, C_h) ∧ MatchNonFunc(C_a, C_h)
∀ I_a ∈ I(Agent): ∃ I_h ∈ I(Host, Agent):
    MatchFunc(I_a, I_h) ∧ MatchNonFunc(I_a, I_h) ∧ MatchInter(I_a, I_h)
The functions and the symbols used in the above expressions are
defined as follows:
O(Agent) and C(Host, Agent) denote respectively the open components
declared by Agent, and the components available by Host
to Agent, based on trust issues (i.e. the origin of the agent and the
associated level of trust).
I(Agent) and I(Host, Agent) denote respectively the connectors
declared by Agent, and the connectors available by Host to Agent,
based on trust issues.
P(C) denotes the set of ports defined in C.
MatchPorts(P, P') evaluates to true if the set of ports P (which
are requested by the agent) is a subset of the set of ports P' (which
are provided by the host). This function can be implemented using
pattern matching techniques.
MatchFunc(C, C') evaluates to true if the functional properties
of C' match those of C. This function can be implemented using
pattern matching, if functional properties are specified in terms
of operation signatures. Otherwise, if functional properties are
formally specified, this function can rely on a theorem prover.
MatchNonFunc(C, C') evaluates to true if the non-functional properties
declared by C' match those declared by C. This function
should be implemented using a theorem prover, since we have
argued that specification matching of non-functional properties is
mandatory.
MatchInter(I, I') evaluates to true if for each interaction property
in I there is at least one interaction property in I' that matches it.
Similarly to the functional properties, this function can be implemented
using pattern matching. However, for greater robustness
and flexibility, formal specifications of interaction properties should
be employed. For instance, this may be achieved using a Csp-based
process algebra as proposed in [2]. Then, interaction properties
match if the processes declared in I' refine those in I. Let us remark
here that the matching function may be automated using a tool
like Fdr [5]. A sketch of how a host might implement this decision
procedure in code is given below.
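The following Java fragment is a minimal sketch of such an acceptance check. The types AgentSpec, ComponentSpec, ConnectorSpec, HostCatalog, and Port, as well as the match* stubs, are names introduced here for illustration only; they are not part of the Aster-based prototype described in Section 4.

import java.util.List;

// Hypothetical types and stubs introduced only for this sketch.
interface Port {}
interface ComponentSpec { List<Port> ports(); }
interface ConnectorSpec {}
interface AgentSpec {
    List<ComponentSpec> openComponents();   // O(Agent)
    List<ConnectorSpec> connectors();       // I(Agent)
}
interface HostCatalog {
    List<ComponentSpec> componentsFor(AgentSpec agent);  // C(Host, Agent)
    List<ConnectorSpec> connectorsFor(AgentSpec agent);  // I(Host, Agent)
}

final class AcceptanceChecker {
    boolean accepts(AgentSpec agent, HostCatalog host) {
        // Every open component must be matched by a component the host exposes.
        for (ComponentSpec ca : agent.openComponents()) {
            boolean matched = host.componentsFor(agent).stream().anyMatch(ch ->
                    matchPorts(ca.ports(), ch.ports())
                    && matchFunc(ca, ch) && matchNonFunc(ca, ch));
            if (!matched) return false;
        }
        // Every required connector must be matched by one the host can provide.
        for (ConnectorSpec ia : agent.connectors()) {
            boolean matched = host.connectorsFor(agent).stream().anyMatch(ih ->
                    matchFunc(ia, ih) && matchNonFunc(ia, ih) && matchInter(ia, ih));
            if (!matched) return false;
        }
        return true;
    }

    // Requested ports must be a subset of the provided ones (pattern matching).
    boolean matchPorts(List<Port> requested, List<Port> provided) {
        return provided.containsAll(requested);
    }
    // Stubs: in the approach described above these rely on pattern matching or a theorem prover.
    boolean matchFunc(Object required, Object provided)    { return true; }
    boolean matchNonFunc(Object required, Object provided) { return true; }
    boolean matchInter(Object required, Object provided)   { return true; }
}

In this sketch the trust-dependent restriction of available elements is delegated to the HostCatalog, mirroring the definitions of C(Host, Agent) and I(Host, Agent) above.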
4. Instantiating Customized Hosts for Web Agents
Given an agent's specification, we have seen how the host environment
can analyze the agent's requirements and reason on their compatibility
with host's policies. In this section we report on the work we have
carried out on using the outcome of this analysis to instantiate an
execution environment customized to the agent's needs. The instantiation
process is based on searching and retrieving host's architectural
elements that match the open components and connectors declared
by an agent. In addition to the execution properties that have been
mentioned thus far, we explicitly take into consideration the usage
of host resources requested by an agent, so as to guarantee that the
agent will be able to execute to completion [19]. For that purpose,
we include an additional clause, named resource, in the configuration
description, in which the resources required for the agent's execution
are declared. Figure 3 gives an example of an explicit declaration of
agent requirements concerning host's resources.
components
  agent: X;
connectors
  Svce: service;
binding
  { Same as DFS binding in Figure 1 }
resource
Figure 3. Explicit declaration of agent's requirements regarding host's resources.
4.1. Customizing the host environment
Upon the reception of an agent specification, the host first evaluates
whether it is able to execute the agent based on the agent's originator
and on available architectural elements and resources. If so, the host
notifies the agent's originator, which then sends the agent's code. If
the host does not accept the agent, the reason for rejection is sent to the
agent's originator. Possible reasons include: insufficient level of trust
for performing the requested operations, unavailability of architectural
elements, and unavailability of some resources. The agent's originator
may then revise its initial requirements by modifying the open software
architecture constraints, and make a new request to the host.
Considering the agent's specification, instead of treating it as a hostile
entity, facilitates its acceptance by the host. However, until now,
we have not considered issues related to the safety of the host, which
in the Web community is considered much more important than the
acceptance of an agent. By accepting an agent relying on the declared
properties, we risk accepting an agent that actually exhibits a different
behavior than the one it declares. In that case, the agent may cause
damages to the execution environment. Obviously, for our approach to
be viable, situations like the above should never arise. To assure this,
the execution environment for a given agent is built by the host according
to the agent's specification, which describes the exact interaction
points with the host and their properties. Except for the allocated
resources (i.e. private memory and disk space) which the agent can
access without any restrictions, the only way for an agent to access
the host is to pass through the declared bindings conforming to the
associated execution properties. Hence, the execution environment is
safe for the host since it does not allow the agent to perform any actions
other than those declared in its interface.
Although the host guarantees the correct execution of an agent
whose behavior conforms with the one declared in its interface, no
guarantees exist for the execution of agents whose behaviors deviate
from the declared ones. In some cases the customized environment
may provide \stronger" execution properties than those requested by
an agent. this may occur under two conditions: (i) if the host does
not possess a component providing exactly the property requested by
the agent, but it does possess a component providing a \stronger"
property, and (ii) if the agent is still accepted by the host when the
stronger property replaces the originally requested one in the agent's
specication. Such cases include the allocation of a bigger portion of
resources than the one required, the support for 32-bit encryption keys
while only 16-bit keys were requested, the use of a component that
implements failure atomicity and retry-on-error while only the first was
requested, etc. Hence, an agent that requested 15.5KB of memory but
actually uses 16KB may finally execute to completion, although this is
not a priori guaranteed.
4.2. Prototype implementation
To experiment with the practicality of our approach, we implemented a
prototype for agents written in the Java programming language. The
prototype is a client-server system, where clients contact the server
through Cgi to request the remote execution of one of their agents.
The client-server interaction is decomposed as follows:
The client first sends the agent's specification to the server, and
waits for approval or rejection notification from the server.
Upon reception of an agent's specification, the server checks
whether it is able to host the agent based on available architectural
elements and resources, as presented in the previous subsection.
Architectural elements available on the host are stored in a software
repository, which is organized so as to be able to identify the
subset of elements that can safely be made available to an agent
with respect to its originator. In the current prototype, we distinguish
between two kinds of agents: those originated from the same
Intranet to which the server belongs, and those originate outside
this Intranet. Availability of architectural elements simply relies on
pattern matching for functional and interaction properties. On the
other hand, checks regarding non-functional properties are done
using specication matching and relies on the tool we developed for
Customized Remote Execution of Web Agents 13
middleware customization [8]. If the server can safely execute the
agent, it computes a unique key for it and reserves the requested
resources; the key is sent to the client together with the acceptance
notification. If the agent cannot be run, the reason for rejection is
notified to the client.
Once the client receives the notification message from the server, it
checks whether the agent has been accepted or not. In the former
case, it sends the agent's code to the server together with the
associated key. In the latter case, the client re-issues the requests
later on, if the reason of rejection is the temporary resource un-
availability. The client may issue a new agent specication if the
reason of rejection is some other execution property.
Upon the reception of the agent's code, the server checks the
agent's identity using the associated key. Once the agent is authen-
ticated, it runs in the customized execution environment, which
only allows the agent to behave in the way that was described in its
specification. In addition, the customized execution environment
provides a private address space for the agent's execution, in order
to confine the consequences of an agent's crash to the failed agent
alone. To assure that an agent respects the execution properties
it has declared in its interface, we have used the SecurityManager
Java class to build the AgentSecurityManager. The AgentSecurity-
Manager surveys the execution of an agent in terms of file system
and network accesses, and resource consumption, and causes an
agent to abort if the agent attempts some unauthorized action.
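As an illustration of the kind of surveillance described above, the following is a minimal sketch of a security manager in the spirit of the AgentSecurityManager, derived from java.lang.SecurityManager. The AgentPolicy class and the whitelist-based checks are assumptions made for this example; they do not reproduce the prototype's actual code, and accounting of resource consumption would require additional instrumentation outside these callbacks.

import java.util.Set;

// Illustrative only; AgentPolicy and the whitelist checks are assumptions.
final class AgentPolicy {
    private final Set<String> accessibleFiles;
    private final Set<String> reachableHosts;
    AgentPolicy(Set<String> accessibleFiles, Set<String> reachableHosts) {
        this.accessibleFiles = accessibleFiles;
        this.reachableHosts = reachableHosts;
    }
    boolean mayAccess(String file)  { return accessibleFiles.contains(file); }
    boolean mayConnect(String host) { return reachableHosts.contains(host); }
}

class AgentSecurityManager extends SecurityManager {
    private final AgentPolicy policy;   // derived from the agent's accepted specification

    AgentSecurityManager(AgentPolicy policy) { this.policy = policy; }

    @Override public void checkRead(String file) {
        if (!policy.mayAccess(file))
            throw new SecurityException("unauthorized read: " + file);
    }
    @Override public void checkWrite(String file) {
        if (!policy.mayAccess(file))
            throw new SecurityException("unauthorized write: " + file);
    }
    @Override public void checkConnect(String host, int port) {
        if (!policy.mayConnect(host))
            throw new SecurityException("unauthorized connection to " + host + ":" + port);
    }
    // Resource consumption (memory, disk quotas) is not visible through these
    // callbacks and would need separate accounting in the execution environment.
}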
The prototype is sufficient for evaluating the practicality of our
approach, but it needs to be enhanced from the standpoint of performance
and scalability of the host instantiation process. The software
repository of architectural elements available on the host is actually
composed of a small set of elements, enabling sequential search. Further
work is needed so as to experiment with a large software repository. In
addition, our prototype implementation would obviously benefit from
enhanced products based on the Java technology. Products like SUN's
may provide for substantial improvements to our prototype.
5. Related Work
The implicit host-agent interaction model that underlies our approach
is similar to the conventional Web agent interaction model, where the
agent describes a process and the host provides an execution environment
(e.g. HTTP-based Mobile Agents [11]). Yet, in our case the
execution environment is prepared by the host according to the agent's
execution properties, which implies that the execution environment
is explicitly defined by the agent, and that an agent and its execution
environment are strongly coupled. From this standpoint our Web
agents resemble Mobile Ambients [4], which dene the mobile code and
its bounded execution environment as a single entity that can move
across Web's administrative domains. However, a mobile ambient is
a modeling entity used as the structural element of a system model
described in the ambient calculus, whereas our Web agent is a set of
execution properties describing the behavior of a piece of software. The
conceptual similarity of mobile ambients and our Web agents suggests
that the ambient calculus can be employed to provide a formal model
of the host-agent interactions described in this paper.
At the practical level, our prototype deals with the safety problems
stemming from inappropriate resource use by mapping an agent to an
individual process. Although this approach assures that problems like
memory address violation will affect only the execution of the agent
that caused them, it restricts severely the number of agents that can
execute concurrently in a given host. A different approach is suggested
by Proof-Carrying Code, or PCC for short, which has been used to
support safe execution of mobile agents [15]. In PCC agents carry a
proof that they conform to the host's policies and the host is capable of
verifying the validity of this proof. Hence, agents can be safely mapped
to threads which share their address space with other agents or even
host threads. In this sense, PCC provides an elegant alternative to
the heavy hosting scheme used by our prototype, at the extra cost
of building a proof at the agent's originator and verifying its validity
at the host. However, PCC does not provide any support for non-functional
execution properties, nor support of any type for the agent's
requirements from the host. Hence, it is not directly comparable to our
approach. Rather, it should be considered as an alternative approach to
resource management in our prototype.
In general, our approach for hosting Web agents does not aim at
replacing existing environments supporting mobile agents. Instead, it
aims at providing a unifying way for describing agents' behavior in
terms of their functional and non-functional execution properties, and
considering their hosting based on that behavior. In our framework,
issues related to code mobility and its remote execution in foreign hosts
can be uniformly expressed as execution properties. Execution properties
integrate agent characteristics regarding both mobility support
primitives (e.g. attach, move, and clone [11]), and non-functional requirements
(e.g. authentication, availability, secrecy, and integrity [22]).
Hence, approaches like Mobile Assistant Programming [16], which provides
the underlying framework for creating agents and moving them
to different hosts, can gain significant benefits from the presented
approach in terms of flexible agent hosting.
6. Conclusion
In this paper, we have proposed a framework for reasoning on the
acceptance of agents and for constructing execution environments customized
to agents' requirements. The proposed framework conveniently
adapts existing technology from the fields of distributed systems and
formal specifications to the needs of Web agents. Our approach is based
on the description of open software architectures characterizing the
execution properties provided by agents and expected from the host
environment. The proposed framework suggests that hosting an agent
consists of verifying the compatibility of the agent's requirements with
the host's policies, and then customizing the host environment to meet
the approved requirements. Our proposal is easy to use in the simple
case (i.e. the interface declaration for agents that do not have any non-functional
requirements is as easy as the declaration of a Corba interface),
while it supports the declaration of complex execution properties
without sacrificing the functionality, performance, or safety of
the host (i.e. the customized execution environment guarantees both
agent's requirements and host's constraints).
Besides the benets for the agent, the presented approach provides
support for modular constructions of host environments, which results
in flexible and scalable Web servers. The customization of the agent's
execution environment permits the use of resource management algorithms
which allow the host to concurrently serve more than one agent
without significant impact on its performance. Moreover, modular
constructions allow the Web servers to apply different hosting poli-
cies, according to various criteria based on agent characteristics. As a
consequence, a number of other research issues can be addressed in the
Web agent framework, including how a host can use its current state
for deciding to accept an agent, how to advertise, dispose of, and cost
its resources, what combinations of non-functional properties should
the host accept, how to assure fair treatment of agents with equivalent
requirements, etc.
--R
Intelligent Agents.
A Formal Basis for Architectural Connection.
Dealing with Multi-Policy Security in Large Open Distributed Systems
Mobile Ambients.
Failures Divergence Refinement
Common Object Services Specification
Achieving Middleware Customization in a Con
Characterizing Coordination Architectures According to Their Non-Functional Execution Properties
Exposing the Skeleton in the Coordination Closet.
A Framework for Classifying and Comparing Architecture Description Languages.
Correct Architecture Refinement
Untrusted Agents using Proof-Carrying Code
Mobile Assistant Programming for Efficient Information Access on the WWW
The Inscape Environment.
Foundations for the Study of Software Architecture.
Customized Remote Execution of Web Agents.
Abstraction for Software Architectures and Tools to Support Them.
Software Architecture: Perspectives on an Emerging Discipline.
A Framework for Systematic Synthesis of Transactional Middleware.
--TR | customized execution;remote execution;mobility;execution properties;agent;software architecture;specification matching |
608635 | Verifying Compliance with Commitment Protocols. | Interaction protocols are specific, often standard, constraints on the behaviors of autonomous agents in a multiagent system. Protocols are essential to the functioning of open systems, such as those that arise in most interesting web applications. A variety of common protocols in negotiation and electronic commerce are best treated as commitment protocols, which are defined, or at least analyzed, in terms of the creation, satisfaction, or manipulation of the commitments among the participating agents.When protocols are employed in open environments, such as the Internet, they must be executed by agents that behave more or less autonomously and whose internal designs are not known. In such settings, therefore, there is a risk that the participating agents may fail to comply with the given protocol. Without a rigorous means to verify compliance, the very idea of protocols for interoperation is subverted. We develop an approach for testing whether the behavior of an agent complies with a commitment protocol. Our approach requires the specification of commitment protocols in temporal logic, and involves a novel way of synthesizing and applying ideas from distributed computing and logics of program. | Introduction
Interaction among agents is the distinguishing property of multiagent sys-
tems. However, ensuring that only the desirable interactions occur is one of
the most challenging aspects of multiagent system analysis and design. This
is especially so when the given multiagent system is meant to be used as an
open system, for example, in web-based applications.
Because of its ubiquity and ease of use, the web is rapidly becoming the
platform of choice for a number of important applications, such as trading,
supply-chain management, and in general electronic commerce. However, the
web can enforce few constraints on the agents' behavior. Current approaches
to security on the web emphasize how the different parties to a transaction
may be authenticated or how their data may be encrypted to prevent unauthorized
access. Even with authentication and controlled access, the parties
would have no support beyond conventional protocol techniques (such as finite
state machine models) either to specify the desired interactions or to detect
any violation. However, authentication and access control are conceptually
orthogonal to ensuring that the parties behave and interact correctly. Even
when the parties are authenticated, they may act undesirably through error or
malice. Conversely, the parties involved may resist going through authentica-
tion, but may be willing to be governed by the applicable constraints.
The web provides an excellent infrastructure through which agents can
communicate with one another. But the above problems are exacerbated when
agents are employed in the web. In contrast with traditional programs and in-
terfaces, neither their behaviors and interactions nor their construction is fixed
or under the control of a single authority. In general, in an open system, the
member agents are contributed by several sources and serve different inter-
ests. Thus, these agents must be treated as
autonomous-with few constraints on behavior, reflecting the independence
of their users, and
heterogeneous-with few constraints on construction, reflecting the independence
of their designers.
Effectively, the multiagent system is specified as a kind of standard that its
member agents must respect. In other words, the multiagent system can be
thought of as specifying a protocol that governs how its member agents must
act. For our purposes, the standard may be de jure as created by a standards
body, or de facto as may emerge from practice or even because of the arbitrary
decisions of a major vendor or user organization. All that matters for us is that
a standard imposes some restrictions on the agents. Consider the fish-market
protocol as an example of such a standard protocol [14].
Example 1. In the fish-market protocol, we are given agents of two roles: a
single auctioneer and one or more potential bidders. The fish-market protocol
is designed to sell fish. The seller or auctioneer announces the availability of
a bucket of fish at a certain price. The bidders gathered around the auctioneer
can scream back Yes if they are interested and No if they are not; they may
also stay quiet, which is interpreted as a lack of interest or No. If exactly one
bidder says Yes, the auctioneer will sell him the fish; if no one says Yes, the
auctioneer lowers the price; if more than one bidder says Yes, the auctioneer
raises the price. In either case, if the price changes, the auctioneer announces
the revised price and the process iterates.
Because of its relationship to protocols in electronic commerce and because it
is more general than the popular English and Dutch auctions, the fish-market
protocol has become an important one in the recent multiagent systems liter-
ature. Accordingly, we use it as our main example in this paper.
Because of the autonomy and heterogeneity requirements of open sys-
tems, compliance testing can be based neither on the internal designs of the
agents nor on concepts such as beliefs, desires, and intentions that map to internal
representations [16]. The only way in which compliance can be tested
is based on the behavior of the participating agents. The testing may be performed
by a central authority or by any of the participating agents. However,
the requirements for behavior in multiagent systems can be quite subtle. Thus,
along with languages for specifying such requirements, we need corresponding
techniques to test compliance.
1.1. COMMITMENTS IN AN OPEN ARCHITECTURE
There are three levels of architectural concern in a multiagent system. One
deals with individual agents; another deals with the systemic aspects of how
different services and brokers are arranged. Both of these have received much
attention in the literature. In the middle is the multiagent execution architec-
ture, which has not been as intensively studied within the community. An execution
architecture must ultimately be based on distributed computing ideas
albeit with an open flavor, e.g., [1, 5, 11]. A well-defined execution functionality
can be given a principled design, and thus facilitate the construction of
robust and reusable systems. Some recent work within multiagent systems,
e.g., Ciancarini et al. [8, 9] and Singh [18], has begun to address this level.
Much of the work on this broad theme, however, focuses primarily on co-
ordination, which we think of as the lowest level of interaction. Coordination
deals with how autonomous agents may align their activities in terms of what
they do and when they do it. However, there is more to interaction in gen-
eral, and compliance in particular. Specifically, interaction must include some
consideration of the commitments that the agents enter into with each other.
The commitments of the agents are not only base-level commitments dealing
with what actions they must or must not perform, but also metacommitments
dealing with how they will adjust their base-level commitments [20]. Commitments
provide a layer of coherence to the agents' interactions with each
other. They are especially important in environments where we need to model
any kind of contractual relationships among the agents.
Such environments are crucial wherever open multiagent systems must be
composed on the fly, e.g., in electronic commerce of various kinds on the
Internet. The addition of commitments as an explicit first-class object results
in considerable flexibility of how the protocols can be realized in changing
situations. We term such augmented protocols commitment protocols.
Example 2. We informally describe the protocol of Example 1 in terms of
commitments. When a bidder says Yes, he commits to buying the bucket of
fish at the advertised price. When the auctioneer advertises a price, he commits
that he will sell fish at that price if he gets a unique Yes. Neither commitment
is irrevocable. For example, if the fish are spoiled, the auctioneer
releases the bidder from paying for them. Specifying all possibilities in terms
of irrevocable commitments would complicate each commitment, but would
still fail to capture the practical meanings of such a protocol. For instance,
the auctioneer may not honor his offering price if a sudden change in weather
indicates that fishing will be harder for the next few days.
1.2. COMPLIANCE IN OPEN SYSTEMS
The existence of standardized protocols is necessary but not sufficient for the
correct functioning of open multiagent systems. We must also ensure that the
agents behave according to the protocols. This is the challenge of compliance.
However, unlike in traditional closed systems, verifying compliance in open
systems is practically and even conceptually nontrivial.
Preserving the autonomy and heterogeneity of agents is crucial in an open
environment. Otherwise, many applications would become infeasible. Con-
sequently, protocols must be specified as flexibly as possible without making
untoward requirements on the participating agents. Similarly, an approach for
testing compliance must not require that the agents are homogeneous or impose
stringent demands on how they are constructed.
Consequently, in open systems, compliance can be meaningfully expressed
only in terms of observable behavior. This leads to two subtle consid-
erations. One, although we talk in terms of behavior, we must still consider
the high-level abstractions that differentiate agents from other active objects.
The focus on behavior renders approaches based on mental concepts ineffective
[16]. However, well-framed social constructs can be used. Two, we must
clearly delineate the role of the observer who assesses compliance.
1.3. CONTRIBUTIONS
The approach developed here treats multiagent systems as distributed sys-
tems. There is an underlying messaging layer, which delivers messages asynchronously
and, for now, reliably. However, the approach assumes for simplicity
that the agents are not malicious and do not forge the timestamps on
the messages that they send or receive.
The compliance testing is performed by any observer of the system-
typically, a participating agent. Our approach is to evaluate temporal logic
specifications with respect to locally constructed models for the given ob-
server. The model construction proposed here employs a combination of the
notion of potential causality and operations on social commitments (both described
below). Our contributions are in
incorporating potential causality in the construction of local models
identifying patterns of messages corresponding to different operations on
commitments
showing how to verify compliance based on local information.
Our approach also has important ramifications on agent communication in
general, which we discuss in Section 4.
Organization. The rest of this paper is organized as follows. Section 2
presents our technical framework, which combines commitments, potential
causality, and temporal logic. Section 3 presents our approach for testing
(non-)compliance of agents with respect to a commitment protocol. Section 4
concludes with a discussion of our major themes, the literature, and the important
issues that remain outstanding.
2. Technical Framework
Commitment protocols as defined here are a multiagent concept. They are
far more flexible and general than commitment protocols in distributed computing
and databases, such as two-phase commit [12, pp. 562-573]. This is
because our underlying notion of commitment is flexible, whereas traditional
commitments are rigid and irrevocable. However, because multiagent systems
are distributed systems and commitment protocols are protocols, it is natural
that techniques developed in classical computer science will apply here.
Accordingly, our technical framework integrates approaches from distributed
computing, logics of program, and distributed artificial intelligence.
2.1. POTENTIAL CAUSALITY
The key idea behind potential causality is that the ordering of events in a
distributed system can be determined only with respect to an observer [13]. If
event e precedes event f with respect to an observer, then e may potentially
cause f . The observed precedence suggests the possibility of an information
flow from e to f , but without additional knowledge of the internals of the
agents, we cannot be sure that true causation was involved. It is customary
to define the local time of an agent as the number of steps it has executed. A
vector clock is a vector, each of whose elements corresponds to the local time
of each communicating agent. A vector v is considered later than a vector u
if v is later on some, and not sooner on any, element.
Definition 1. A clock over n agents is an n-ary vector of
natural numbers. The starting clock is ~0 = (0, . . . , 0).
Notice that the vector representation is just a convenience. We could just as
well use pairs of the form ⟨agent-id, local-time⟩, which would allow
us to model systems of varying membership more easily.
Definition 2. Given n-ary vectors u and v, u ≺ v if and only if
u_i ≤ v_i for every i and u_j < v_j for some j.
Each agent starts at ~ 0. It increments its entry in that vector whenever it performs
a local event [15]. It attaches the entire vector as a timestamp to any
message it sends out. When an agent receives a message, it updates its vector
clock to be the element-wise maximum of its previous vector and the vector
timestamp of the message it received. Intuitively, the message brings news of
how far the system has progressed; for some agents, the recipient may have
better news already. However, any message it sends after this receive event
will have a later timestamp than the message just received.
Figure 1. Vector clocks in the fish-market protocol (auctioneer A and bidders B1 and B2 exchange vector-timestamped messages, ending with the transfer of the fish and the money).
Example 3. Figure 1 illustrates the evolution of vector timestamps for one
possible run of the fish-market protocol. In the run described here, the auctioneer
(A) announces a price of 50 for a certain bucket of fish. Bidders B1
and B2 both decline. A lowers the price to 40 and announces it. This time
only B1 accepts, leading A to transfer the fish to B1 and B1 to send money to A.
For uniformity, the last two steps are also modeled as communications. The
messages are labeled m i to facilitate reference from the text.
2.2. TEMPORAL LOGIC
The progression of events, which is inherent in the execution of any protocol,
suggests the need for representing and reasoning about time. Temporal logics
provide a well-understood means of doing so, and have been applied in various
subareas of computer science. Because of their naturalness in expressing
properties of systems that may evolve in more than one possible way and for
the efficiency of reasoning that they support, the branching-time logics have
been especially popular in this regard [10]. Of these, the best known is Computation
Tree Logic (CTL), which we adapt here in our formal language L.
Conventionally, a model of CTL is expressed as a tree. Each node in the tree
is associated with a state of the system being considered; the branches of the
tree or paths thus indicate the possible courses of events or ways in which the
system's state may evolve. CTL provides a natural means by which to specify
acceptable behaviors of the system.
The following Backus-Naur Form (BNF) grammar with a distinguished
start symbol L gives the syntax of L. L is based on a set Φ of atomic propositions.
Below, slant typeface indicates nonterminals; ⟶ and | are metasymbols
of BNF specification; ⟪ and ⟫ delimit comments; the remaining
symbols are terminals. As is customary in formal semantics, we are only concerned
with abstract syntax.
L1. L ⟶ Prop ⟪atomic propositions: members of Φ⟫
L2. L ⟶ ¬L ⟪negation⟫
L3. L ⟶ L ∧ L ⟪conjunction⟫
L4. L ⟶ A P ⟪universal quantification over paths⟫
L5. L ⟶ E P ⟪existential quantification over paths⟫
L6. P ⟶ L U L ⟪until: operator over a single path⟫
The meanings of formulas generated from L are given relative to a model and
a state in the model. The meanings of formulas generated from P are given
relative to a path and a state on the path. The boolean operators are standard.
Useful abbreviations include false ≡ (p ∧ ¬p), for any p ∈ Φ, true ≡ ¬false,
and p ∨ q ≡ ¬(¬p ∧ ¬q). The temporal operators A and E
are quantifiers over paths. Informally, pUq means that on a given path from
the given state, q will eventually hold and p will hold until q holds. Fq means
"eventually q" and abbreviates trueUq. Gq means "always q" and abbreviates
¬F¬q. Therefore, EpUq means that on some future path from the given state,
q will eventually hold and p will hold until q holds.
Definition 3. M = ⟨S, <, I⟩ is a formal model for L. S is a set of states;
< ⊆ S × S is a partial order indicating branching time, and I : S ↦ P(Φ) is
an interpretation, which tells us which atomic propositions are true in a given
state. For t ∈ S, P_t is the set of paths emanating from t.
M ⊨_t p means "M satisfies p at t" and M ⊨_{P,t} p means "M satisfies
p at t along path P."
M1. M ⊨_t ψ iff ψ ∈ I(t), for ψ ∈ Φ
M2. M ⊨_t p ∧ q iff M ⊨_t p and M ⊨_t q
M3. M ⊨_t ¬p iff M ⊭_t p
M4. M ⊨_t Aq iff for every path P ∈ P_t: M ⊨_{P,t} q; M ⊨_t Eq iff for some path P ∈ P_t: M ⊨_{P,t} q
M5. M ⊨_{P,t} pUq iff there is a t' on P with t ≤ t' such that M ⊨_{t'} q, and for every t'' with t ≤ t'' < t': M ⊨_{t''} p
The above is an abstract semantics. In Section 3.3, we specify the concrete
form of Φ, S, <, and I, so the semantics can be exercised in our computations.
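To connect the abstract semantics with computation, the following Java sketch evaluates E(p U q), and hence EF, over a finite, acyclic model such as the locally constructed models used later. The State representation and method names are assumptions made for this illustration, not a prescribed implementation.

import java.util.*;
import java.util.function.Predicate;

// Illustrative only: evaluating E(p U q) over a finite, acyclic model.
final class State {
    final Set<String> labels = new HashSet<>();        // I(t): atomic propositions true at t
    final List<State> successors = new ArrayList<>();  // immediate successors under <
}

final class CtlChecker {
    /** M |=_t E(p U q): on some path from t, q eventually holds and p holds until then. */
    static boolean existsUntil(State t, Predicate<State> p, Predicate<State> q) {
        if (q.test(t)) return true;     // q already holds at t
        if (!p.test(t)) return false;   // p must hold until q does
        for (State next : t.successors)
            if (existsUntil(next, p, q)) return true;
        return false;
    }

    /** EF q abbreviates E(true U q). */
    static boolean existsFinally(State t, Predicate<State> q) {
        return existsUntil(t, s -> true, q);
    }
}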
3. Approach
In their generic forms, both causality and temporal logic are well-known.
However, applying them in combination and in the particular manner suggested
here is novel to this paper.
Temporal logic model checking is usually applied for design-time reasoning
[10, pp. 1042-1046]. We are given a specification and an implementation,
i.e., a program, that is supposed to meet it. A model is generated from the pro-
gram. A model checking algorithm determines whether the specification is
true in the generated model. However, in an open, heterogeneous environ-
ment, a design may not be available at all. For example, the vendors who
supply the agents may consider their designs to be trade secrets.
By contrast, ours is a run-time approach, and can meaningfully apply
model checking even in open settings. This is because it uses a model generated
from the joint executions of the agents involved. Model checking in this
setting simply determines whether the present execution satisfies the specifi-
cation. If an execution respects the given protocol, that does not entail that all
executions will, because an agent may act inappropriately in other circumstances.
However, if an execution is inappropriate, that does entail that the system
does not satisfy the protocol. Consequently, although we are verifying specific
executions of the multiagent system, we can only falsify (but not verify)
the correctness of the construction of the agents in the system.
Model checking of the form introduced above may be applied by any observer
in the multiagent system. A useful case is when the observer is one
of the participating agents. Another useful case is when the observer is some
agent dedicated to the task of managing or auditing the interactions of some
of the agents in the multiagent system.
Potential causality is most often applied in distributed systems to ensure
that the messages being sent in a system satisfy causal ordering [3]. Causality
motivates vector clocks and vector timestamps on messages, which help
ensure correct ordering by having the messaging subsystem reorder and re-transmit
messages as needed. This application of causality can be important,
but is controversial [4, 6], because its overhead may not always be justifiable.
In our approach, the delivery of messages may be noncausal. However,
causality serves the important purpose of yielding accurate models of the observations
of each agent. These are needed, because in a distributed system,
the global model is not appropriate. Creating a monolithic model of the execution
of the entire system requires imposing a central authority through
which all messages are routed. Adding such an authority would take away
many of the advantages that make distributed systems attractive in the first
place. Consequently, our method of constructing and reasoning with models
should
− not require a centralized message router, and
− work from a single vantage of observation, but be able to handle situations
where some agents pool their evidence.
Such a method turns out to naturally employ the notion of potential causality.
3.1. MODELS FROM OBSERVATIONS
The observations made by each agent are essentially a record of the messages
it has sent or received. Since each message is given a vector timestamp, the
observations can be partially ordered. In general, this order is not total, because
messages received from different agents may be mutually unordered.
Example 4. Figure 2 shows the models constructed locally from the observations
of the auctioneer and a bidder in the run of Example 3.
Although a straightforward application of causality, the above example shows
how local models may be constructed. Some subtleties are discussed next.
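Before turning to those subtleties, the following sketch illustrates how an observer's record of vector-timestamped messages can be partially ordered by potential causality. The Observation and LocalModelBuilder names are assumptions introduced for this example; they are not the implementation used in this work.

import java.util.*;

// Illustrative sketch: ordering an agent's observations by potential causality.
final class Observation {
    final String message;     // e.g. the token carried by the message
    final int[] timestamp;    // vector timestamp attached to the message
    Observation(String message, int[] timestamp) {
        this.message = message;
        this.timestamp = timestamp;
    }
}

final class LocalModelBuilder {
    /** u precedes v: no later on any entry, strictly earlier on some entry. */
    static boolean precedes(int[] u, int[] v) {
        boolean strictlySmaller = false;
        for (int i = 0; i < u.length; i++) {
            if (u[i] > v[i]) return false;
            if (u[i] < v[i]) strictlySmaller = true;
        }
        return strictlySmaller;
    }

    /** Maps each observation to the set of observations it potentially causes. */
    static Map<Observation, Set<Observation>> partialOrder(List<Observation> observed) {
        Map<Observation, Set<Observation>> later = new HashMap<>();
        for (Observation o : observed) {
            Set<Observation> successors = new HashSet<>();
            for (Observation other : observed)
                if (o != other && precedes(o.timestamp, other.timestamp))
                    successors.add(other);
            later.put(o, successors);
        }
        return later;   // observations from different agents may remain unordered
    }
}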
As remarked above, commitments give the core meaning of a protocol.
Our approach builds on a flexible and powerful variety of social commit-
ments, which are the commitments of one agent to another [20]. These commitments
are defined relative to a context, which is typically the multiagent
system itself. The debtor refers to the agent that makes a commitment, and
the creditor to the agent who receives the commitment. Thus we have the
following logical form.
Figure 2. Observations for auctioneer and a bidder in the fish-market protocol (the locally observed, vector-timestamped messages of A and of B1, each forming a partial order from its start state).
Definition 4. A commitment is an expression c = C(x; y; G; p), where x is the
debtor, y the creditor, G the context, and p the condition committed to.
The expression c is considered true in states where the corresponding commitment
exists.
Definition 5. A commitment c = C(x; y; G; p) is base-level if p does not refer
to any other commitments; c is a metacommitment if p refers to a base-level
commitment (we do not consider higher-order commitments here).
Intuitively, a protocol definition is a set of metacommitments for the different
roles (along with a mapping of the message tokens to operations on commit-
ments). In combination with what the agents communicate, these lead to base-level
commitments being created or manipulated, which is primarily how a
commitment may be referred to within a protocol. The violation of a base-level
commitment can give us proof or the "smoking gun" that an agent is
noncompliant.
The following operations on commitments define how they may be created
or manipulated. When we view commitments as an abstract data type,
the operations are methods of that data type.
Each operation is realized through a simple message pattern, which states
what messages must be communicated among which of the participants and
in what order. For the operations on commitments we consider, the patterns
are simple. As described below, most patterns require only a single message,
but some require three messages. Obeying the specified patterns ensures that
the local models have the information necessary for testing compliance. That
the given operation can be performed at all depends on whether the proto-
col, through its metacommitments, allows that operation. However, when an
operation is allowed, it affects the agents' commitments. For simplicity, we
assume that the operations on commitments are given a deterministic inter-
pretation. Here z is an agent and c = C(x, y, G, p) is a commitment.
O1. Create(x; c) instantiates a commitment c. Create is typically performed
as a consequence of the commitment's debtor promising something contractually
or by the creditor exercising a metacommitment previously
made by the debtor. Create usually requires a message from the debtor
to the creditor.
O2. Discharge(x; c) satisfies the commitment c. It is performed by the debtor
concurrently with the actions that lead to the given condition being sat-
isfied, e.g., the delivery of promised goods or funds. For simplicity, we
treat the discharge actions as performed only when the proposition p is
true. Thus the discharge actions are detached, meaning that p can be
treated as true in the given moment. We model the discharge as a single
message from the debtor to the creditor.
O3. Cancel(x; c) revokes the commitment c. It can be performed by the
debtor as a single message. At the end of this action, ¬c usually holds.
However, depending on the existing metacommitments, the cancel of one
commitment may lead to the create of other commitments.
O4. Release(G; c) or release(y; c) essentially eliminates the commitment c.
This is distinguished from both discharge and cancel, because release
does not mean success or failure, although it lets the debtor off the hook.
At the end of this action, ¬c usually holds. The release action may be
performed by the context or the creditor of the given commitment, also
as a single message. Because release is not performed by the debtor,
different metacommitments apply than for cancel.
O5. Delegate(x; z; c) shifts the role of debtor to another agent within the
same context, and can be performed by the (old) debtor (or the context).
Let c′ = C(z, y, G, p). At the end of the delegate action, c′ ∧ ¬c holds.
To prevent the risk of miscommunication, we require the creditor to also
be involved in the message pattern. Figure 3(l) shows the associated pat-
tern. The first message sets up the commitment c from x to y and is not
part of the pattern. When x delegates the commitment c to z, x tells both
y and z that the commitment is delegated. z is now committed to y. Later
Figure 3. Message patterns for delegate (l) and assign (r). (Each panel begins with create(x, c); the delegate(x, z, c) or assign(y, z, c) messages go to both of the other parties, followed eventually by the corresponding discharge.)
z may discharge the commitment. The two delegate messages constitute
the pattern.
O6. Assign(y; z; c) transfers a commitment to another creditor within the
same context, and can be performed by the present creditor or the con-
text. Let c′ = C(x, z, G, p). At the end of the assign action, c′ ∧ ¬c holds.
Here we require that the new creditor and the debtor are also involved as
shown in Figure 3(r). The figure shows only the general pattern. Here x
is committed to y. When y assigns the commitment to z, y tells both x
and z (so z knows it is the new creditor). Eventually, x should discharge
the commitment to z. A potentially tricky situation is if x discharges the
commitment c even as y is assigning c to z (i.e., the messages cross).
In this case, we require y to discharge the commitment to z, essentially
by forwarding the contents of the message from x. Thus the worst case
requires three messages.
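Viewing commitments as an abstract data type, as suggested above, the six operations can be sketched as follows (Python). The class and function names are our own illustrative choices, the message patterns themselves are omitted, and the assertions only record who may perform each operation; this is a sketch of the data type, not the authors' implementation.

from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    debtor: str     # x
    creditor: str   # y
    context: str    # G
    condition: str  # p, the condition committed to

def create(x: str, c: Commitment) -> Commitment:
    assert x == c.debtor
    return c                                   # O1: instantiate c

def discharge(x: str, c: Commitment) -> None:
    assert x == c.debtor
    return None                                # O2: c is satisfied and ceases to exist

def cancel(x: str, c: Commitment) -> None:
    assert x == c.debtor
    return None                                # O3: c revoked by its debtor

def release(who: str, c: Commitment) -> None:
    assert who in (c.creditor, c.context)
    return None                                # O4: context or creditor lets the debtor off

def delegate(x: str, z: str, c: Commitment) -> Commitment:
    assert x in (c.debtor, c.context)
    return Commitment(z, c.creditor, c.context, c.condition)   # O5: new debtor z

def assign(y: str, z: str, c: Commitment) -> Commitment:
    assert y in (c.creditor, c.context)
    return Commitment(c.debtor, z, c.context, c.condition)     # O6: new creditor z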
We write the operations as propositions indicating successful execution.
Based on the applicable metacommitments, each operation may entail additional
operations that take place implicitly.
Definition 6. A commitment c is resolved through a release, discharge, can-
cel, delegate, or assign performed on c. c ceases to exist when resolved. How-
ever, a new commitment is created for delegate or assign.
(New commitments created because of some existing metacommitment are
not included in the definition of resolution.) Theorem 1 states that the creditor
knows the disposition of any commitments due to it. This result helps establish
that the creditor can always determine compliance of others relative to
what was committed to it.
Theorem 1. If message m i creates commitment c and message m j resolves
c, then the creditor of c sees both m i and m j .
Proof. By inspection of the message patterns constructed for the various operations
on commitments.
Definition 7. A commitment c is ultimately resolved through a release, dis-
charge, or cancel performed on c, or through the ultimate resolution of any
commitments created by the delegate or assign of c.
Theorem 2 essentially states that the creation and ultimate resolution of a
commitment occur along the same causal path. This is important, because
it legitimizes a significant optimization below. Indeed, we defined the above
message patterns so we would obtain Theorem 2.
Theorem 2. If message m_i creates commitment c and message m_j ultimately resolves c, then m_i and m_j lie on the same causal path.
Proof. By inspection of the message patterns constructed for the various operations
on commitments.
3.2. SPECIFYING PROTOCOLS
We first consider the coordination and then the commitment aspects of com-
pliance. A skeleton is a coarse description of how an agent may behave [18].
A skeleton is associated with each role in the given multiagent system to
specify how an agent playing that role may behave in order to coordinate
with others. Coordination includes the simpler aspects of interaction, e.g.,
turn-taking. Coordination is required so that the agents' commitments make
sense. For instance, a bidder should not make a bid prior to the advertise-
ment; otherwise, the commitment content of the bid would not even be fully
defined.
The skeletons may be constructed by introspection or through the use of
a suitable methodology [19]. No matter how they are created, the skeletons
are the first line of compliance testing, because an agent that does not comply
with the skeleton for its role is automatically in violation. So as to concentrate
on commitments in this paper, we postulate that a "proxy" object is interposed
between an agent and the rest of the system and ensures that the agent follows
the dictates of the skeleton of its role.
We now define the syntax of the specification language through the following grammar, whose start symbol is Protocol. The braces { and } indicate that the enclosed item is repeated 0 or more times.
L7. Protocol → {Meta} {Message}
L8. Message → Token : Commitment ⟨messages correspond to operations on commitments⟩
L9. Meta → C(Debtor, Creditor, Context, MetaProp)
L11. Bool → ⟨Boolean combinations of⟩ Act | Commitment | Dom
L12. Act → Operation(Agent, Commitment)
L13. Operation → ⟨the six operations of Section 3.1⟩
L14. Commitment → Meta | C(Debtor, Creditor, Context, AF Dom)
L15. Dom → ⟨domain-specific concepts⟩
The above language embeds a subset of L. Our approach is to detach the outer
actions and commitments, so we can process the inner L part as a temporal
logic. By using commitments and actions on them, instead of simple domain
propositions, we can capture a variety of subtle situations, e.g., to distinguish
between release and cancel both of which result in the given commitment
being removed.
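A protocol in this language can be represented directly as data: a set of metacommitments together with a mapping from message tokens to operations on commitments. The Python sketch below is an illustrative rendering of that idea; the token names and the string encoding of temporal conditions are our assumptions, not the paper's concrete syntax.

from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass(frozen=True)
class Commitment:
    debtor: str
    creditor: str
    context: str
    condition: str          # an L-style temporal condition, written as a string here

@dataclass
class Protocol:
    metacommitments: List[Commitment]            # the {Meta} part of the grammar
    messages: Dict[str, Callable[..., object]]   # Token -> operation on commitments

# Hypothetical fragment of the fish-market protocol:
def on_yes(bidder: str, price: int) -> Commitment:
    # a Yes at price i creates the bidder's commitment to pay if given the fish
    return Commitment(bidder, "A", "FM", f"AG[fish -> AF(money_{price})]")

fish_market = Protocol(
    metacommitments=[],                          # e.g. the context's release conditions
    messages={"yes": on_yes, "no": lambda *args: None},
)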
Example 5 applies the above language on the fish-market protocol.
Example 5. The messages in Figure 1 can be given a content based on the following definitions. Here FM is the fish-market context.
- fish: a proposition meaning the fish is delivered
- money_i: a proposition meaning that the appropriate money is paid (subscripted to allow different prices)
- Bid_i(B_j): an abbreviation for C(B_j, A, FM, AG[fish → AF(money_i)]), meaning the bidder promises to pay money_i if given the fish
- an abbreviation (for the auctioneer's advertisement) for C(A, B_j, FM, AG[Bid_i(B_j) → AF(fish)]), meaning the auctioneer offers to deliver the fish if he gets a bid for money_i
- an abbreviation for the conjunction of two distinct bids at price i, meaning that at least two bidders have bid for the fish at price i
- a proposition meaning the fish is spoiled
Armed with the above, we can now state the commitments associated with
the different messages in the fish market protocol.
- Payment of money_i from B_j: discharges the bidder's commitment Bid_i(B_j)
- Delivering fish to B_j: discharges the auctioneer's offer to B_j
- Yes from B_j (for price i): create(B_j, Bid_i(B_j))
- No from B_j (for price i): true
Further, the protocol includes metacommitments that are not associated with
any single message. In the present protocol, these metacommitments are of
the context itself to release a committing party under certain circumstances, e.g., if the fish turns out to be spoiled or more than one bid is received. For practical purposes, we could treat these as metacommitments of the creditor.
In addition, in a monotonic framework, we would also need to state the completion
requirements to ensure that only the above actions are performed.
The auctioneer does not commit to a price if no bid is received. If more
than one bid is received, the auctioneer is released from the commitment. Notice
that the auctioneer can exit the market or adjust the price in any direction
if a unique Yes is not received for the current price money i . It would neither
be rational for the auctioneer to raise the price if there are no takers at
the present price, nor to lower the price if takers are available. However, the
protocol per se does not legislate against either behavior.
The No messages have no significance for commitments. They serve only to
assist in the coordination so the context can determine if enough bids are
received. The lower-level aspects of coordination are not being studied in this
paper. Now we can see how the reasoning takes place in a successful run of
the protocol.
Example 6. The auctioneer sends out an advertisement, which commits
the auctioneer to supplying the fish if he receives a suitable
bid. This commitment will be discharged if AG[Bid_i(B_j) → AF(fish)] holds. When Bid_i(B_j) is sent by B_j, the bidder is committed to the bid, which is discharged if AG[fish → AF(money_i)] holds. To discharge the advertisement, the auctioneer must eventually create a commitment to eventually
supply the fish. If he does not create this commitment, he is in violation. If he
creates it, but does not supply the fish, he is still in violation. If he supplies
the fish, the bidder is then committed to eventually forming a commitment to
supply the money. If the bidder does so, the protocol is executed successfully.
3.3. REASONING WITH THE CONCRETE MODEL
Now we explain the main reasoning steps in our approach and show that they
are sound. The main reasoning with models applies the CTL model-checking
algorithm on a model and a formula denoting the conjunction of the specifi-
cations. The algorithm evaluates whether the formula holds in the initial state
of the model. Thus a concrete version of the model M (see Section 2.2) is es-
sential. For the purposes of the semantics, we must define a global model with
respect to which commitment protocols may be specified. Intuitively, a protocol
specification tells us which behaviors of the entire system are correct.
Thus, it corresponds naturally to a global model in which those behaviors can
be defined.
Our specific concrete model identifies states with messages. Recall that
the timestamp of a message is the clock vector attached to it. The states are
ordered according to the timestamps of the messages. The proposition true
in a state is the one corresponding to the operation that is performed by the
message.
Definition 8. Q = {s : s is a message sent in the system}.
Definition 9. For s, t ∈ Q, s → t iff the timestamp of s is less than the timestamp of t.
Definition 10. For s ∈ Q, I(s) = {operations executed by message s}.
The structure M_Q = ⟨Q, →, I⟩ is a quasimodel. (Here and below, we assume that → and I are appropriately projected to the available states.) M_Q is structurally
a model, because it matches the requirements of Definition 3. How-
ever, MQ is not a model of the computations that may take place, because the
branches in MQ are concurrent events and do not individually correspond to a
single path. A quasimodel can be mapped to a model, M_S, with an initial state ~0, by including all possible interleavings of the transitions. That
is, S would include a distinct state for every message in each possible ordering
of the messages in Q that is consistent with the temporal order → of M_Q. The relation → can be suitably defined for M_S. However, there is potentially
an exponential blowup in that the size of S may be exponentially greater than
the size of Q.
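The construction of the quasimodel from recorded messages can be sketched as follows (Python). States are the messages themselves, the order is induced by the vector timestamps, and the labelling I records the operations each message performs; the field names and string encoding of operations are our own illustrative choices, following Definitions 8-10 informally rather than reproducing the authors' notation.

from dataclasses import dataclass
from typing import Dict, FrozenSet, List, Set, Tuple

@dataclass(frozen=True)
class Message:
    timestamp: Tuple[int, ...]
    operations: FrozenSet[str]   # e.g. frozenset({"create(B1, Bid_i(B1))"})

def build_quasimodel(messages: List[Message]):
    """Return (Q, order, I): states, timestamp-induced order, and labelling."""
    Q: Set[Message] = set(messages)
    order: Set[Tuple[Message, Message]] = {
        (s, t) for s in Q for t in Q
        if s != t
        and all(x <= y for x, y in zip(s.timestamp, t.timestamp))
        and s.timestamp != t.timestamp
    }
    I: Dict[Message, FrozenSet[str]] = {s: s.operations for s in Q}
    return Q, order, I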
Theorem 3 shows that naively treating a quasimodel as if it were a model is
correct. Thus, the above blowup can be eliminated entirely. Our construction
ensures that all the events relevant to another event are totally ordered with
respect to each other. Notice that, as shown in Figure 3, the construction
may appear to require one more message than necessary for the assign and
delegate operations. This linear amount of extra work (for the entire set of
messages), however, pays off in reducing the complexity of our reasoning
algorithm. In the following, p refers to the proposition (of the form AG[q → AF r]) of a metacommitment, which becomes true when the metacommitment
is discharged.
Definition 11. For a proposition p, p^T is the proposition obtained by substituting EF for AF in p.
Theorem 3. M_Q |= p^T iff M_S |= p.
Proof. From Theorem 2 and the restricted structure of MQ .
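Definition 11's transformation is a simple rewrite over the syntax tree of the formula, as the sketch below illustrates (Python, over a toy CTL representation of our own devising; it is not the authors' model-checking machinery).

from dataclasses import dataclass
from typing import Union

@dataclass(frozen=True)
class Atom:
    name: str

@dataclass(frozen=True)
class Unary:
    op: str                      # "AG", "AF", "EF", ...
    arg: "Formula"

@dataclass(frozen=True)
class Binary:
    op: str                      # "->", "and", ...
    left: "Formula"
    right: "Formula"

Formula = Union[Atom, Unary, Binary]

def transform(p: Formula) -> Formula:
    """p^T: substitute EF for AF throughout p (Definition 11)."""
    if isinstance(p, Atom):
        return p
    if isinstance(p, Unary):
        return Unary("EF" if p.op == "AF" else p.op, transform(p.arg))
    return Binary(p.op, transform(p.left), transform(p.right))

# AG[q -> AF r] becomes AG[q -> EF r]
meta = Unary("AG", Binary("->", Atom("q"), Unary("AF", Atom("r"))))
print(transform(meta))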
The above results show that compliance can be tested without blowing up the model unnecessarily. However, we would like to test for compliance
based on local information, so that any agent can decide for itself whether
it has been wronged by another. For this reason, we would like to be able to
project the global model onto local models for each agent, while ensuring that
the local models carry enough information that they are indeed usable in isolation
from other local models. Accordingly, we can define the construction
of local models corresponding to an agent's observations. This is done simply by defining a subset of S for a given agent a.
Definition 12. S_a = {s : s is a message from or to a}; M_a = ⟨S_a, →, I⟩.
Theorem 4 shows that if we restrict attention to commitments that the given
agent can observe, then the projected quasimodel yields all and only the correct
conclusions relative to the global quasimodel. Thus, if the interested party
is vigilant, it can check if anyone else violated the protocol.
Theorem 4. M_a |= p^T if and only if M_Q |= p^T, provided that a sees all the commitments mentioned in p.
Proof. From Theorem 2 and the construction of M a .
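Definition 12's projection is just a filter over the quasimodel's states, as the short sketch below shows (Python; the sender and receiver fields are our assumption about what a recorded message carries).

from dataclasses import dataclass
from typing import FrozenSet, Set, Tuple

@dataclass(frozen=True)
class Message:
    sender: str
    receiver: str
    timestamp: Tuple[int, ...]
    operations: FrozenSet[str]

def project(Q: Set[Message], a: str) -> Set[Message]:
    """S_a: the messages from or to agent a (Definition 12)."""
    return {m for m in Q if a in (m.sender, m.receiver)}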
Example 7. If one of the bidders backs down from a successful bid, the auctioneer
immediately can establish that he is cheating, because the auctioneer
is the creditor for the bidder's commitment. However, a bidder cannot ordinarily
decide whether the auctioneer is noncompliant, because the bidder
does not see all relevant commitments based on which the auctioneer may be
released from a commitment to the bidder.
Theorem 5 lifts the above results to sets of agents. Thus, a set of agents may
pool their evidence in order to establish whether a third party is noncompliant.
Thus, in a setting with two bidders, a model that includes all their evidence
can be used to determine whether the auctioneer is noncompliant. Ordinarily,
the bidders would have to explicitly pool their information to do so. However,
in a broadcast-based or outcry protocol (like a traditional fish market in which
everyone is screaming), the larger model can be built by anyone who hears
all the messages. Let A be a set of agents.
Definition 13. S_A = ⋃_{a∈A} S_a; M_A = ⟨S_A, →, I⟩.
Theorem 5. Let the commitments observed by agents in A include all the commitments in p. Then M_A |= p^T iff M_Q |= p^T.
Proof. From Theorem 2 and the construction of M_A.
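Pooling, as in Definition 13, is then simply the union of the members' local states; a minimal self-contained sketch (Python) follows, with the per-agent state sets passed in as a plain dictionary purely for illustration.

from typing import Dict, FrozenSet, Set

def pool(local_states: Dict[str, Set[object]], members: FrozenSet[str]) -> Set[object]:
    """S_A: the union of S_a over the agents a in A (Definition 13)."""
    pooled: Set[object] = set()
    for a in members:
        pooled |= local_states.get(a, set())
    return pooled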
Information about commitments that have been resolved, i.e., are not
pending, is not needed in the algorithm, and can be safely deleted from each
observer's model. This is accomplished by searching backward in time whenever
something is added to the model. Pruning extraneous messages from
each observer's model reduces the size of the model and facilitates reasoning
about it. This simplification is sound, because the CTL specifications do not
include nested commitments.
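A minimal sketch of this pruning step is given below (Python); representing each state's operations as (verb, commitment-id) pairs is our own simplification of the bookkeeping described above.

from typing import List, Set, Tuple

RESOLVING = {"discharge", "cancel", "release", "delegate", "assign"}

# Each recorded state is the list of operations its message performed,
# simplified here to (verb, commitment_id) pairs.
State = List[Tuple[str, str]]

def add_and_prune(model: List[State], new_state: State) -> List[State]:
    """Append new_state, then drop earlier states that only concern commitments it resolves."""
    resolved: Set[str] = {cid for verb, cid in new_state if verb in RESOLVING}
    kept = [s for s in model if any(cid not in resolved for _, cid in s)]
    return kept + [new_state]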
Mapping from an event-based to a state-based representation, we should
consider every event as potentially corresponding to a state change. This approach
would lead to a large model, which accommodates not only the occurrence
of public events such as message transmissions, but also local events.
Such an approach would thus capture the evolution of the agent's knowledge
about the progress of the system, which would help in accommodating unreliable
messaging. Our approach, as described above, loses some of the agents'
knowledge by not separating events and states, but has all the details we need
to assess compliance assuming reliable messaging.
4. Discussion
Given the autonomy and heterogeneity of agents, the most natural way to treat
interactions is as communications. A communication protocol involves the
exchange of messages with a streamlined set of tokens. Traditionally, these
tokens are not given any meaning except through reference to the beliefs or
intentions of the communicating agents. By contrast, our approach assigns
public, i.e., observable, meanings in terms of social commitments. Viewed in
this light, every communication protocol is a commitment protocol.
Formulating and testing compliance of autonomous and heterogeneous
agents is a key prerequisite for the effective application of multiagent systems
in open environments. As asserted by Chiariglione, minimal specifications
based on external behavior will maximize interoperability [7]. The research
community has not paid sufficient attention to this important requirement. A
glaring shortcoming of most existing semantics for agent communication languages
is their fundamental inability to allow testing for the compliance of an
agent [16, 22]. The present approach shows how that might be carried out.
While the purpose of the protocols is to specify legal behavior, they should
not specify rational behavior. Rational behavior may result as an indirect consequence
of obeying the protocols. However, not adding rationality requirements
leads to more succinct specifications and also allows agents to participate
even if their rationality cannot be established by their designers.
The compliance checking procedure can be used by any agent who participates
in, or observes, a commitment protocol. There are two obvious uses.
One, the agent can track which of the commitments made by others are pending
or have been violated. Two, it might track which of its own commitments
are pending or whose satisfaction has not been acknowledged by others. The
agent can thus use the compliance checking procedure as an input to its normal
processes of deliberation to guide its interactions with other agents.
We have so far discussed how to detect violations. Once an agent detects
a violation, as far as the above method is concerned, it may proceed in any
way. However, some likely candidates are the following. The wronged agent may
- inform the agents who appeared to have violated their commitments and ask them to respect the applicable metacommitments;
- inform the context, who might penalize the guilty parties, if any; the context may require additional information, e.g., certified logs of the messages sent by the different agents, to establish that some agents are in violation;
- inform other agents in an attempt to spoil the reputation of the guilty parties.
4.1. LITERATURE
Some of the important strands of research of relevance to commitment protocols
have been carried out before. However, the synthesis, enhancement,
and application of these techniques to multiagent commitment protocols is a
novel contribution of this paper. Interaction (rightly) continues to draw much
attention from researchers. Still, most current approaches do not consider an
explicit execution architecture (however, there are some notable exceptions,
e.g., [8, 9, 18]). Other approaches lack a formal underpinning; still others
focus primarily on monolithic finite-state machine representations for proto-
cols. Such representations can capture only the lowest levels of a multiagent
interaction, and their monolithicity does not accord well with distributed execution
and compliance testing. Model checking has recently drawn much attention
in the multiagent community, e.g., [2, 17]. However, these approaches
consider knowledge and related concepts and are thus not directly applicable
for behavior-based compliance.
4.2. FUTURE DIRECTIONS
The present approach highlights the synergies between distributed computing
and multiagent systems. Since both fields have advanced in different direc-
tions, a number of important technical problems can be addressed by their
proper synthesis. One aspect relates to situations where the agents may suffer
a Byzantine failure or act maliciously. Such agents may fake messages or
deny receiving them. How can they be detected by the other agents? Another
aspect is to capture additional structural properties of the interactions so that
noncompliant agents can be more readily detected. Alternatively, we might
offer assistance to designers by synthesizing skeletons of agents who participate
properly in commitment protocols. Lastly, it is well-known that there
can be far more potential causes than real causes [15]. Can we analyze conversations
or place additional, but reasonable, restrictions on the agents that
would help focus their interactions on the true relationships between their
respective computations? We defer these topics to future research.
Acknowledgements
This work is supported by the National Science Foundation under grants IIS-
9529179 and IIS-9624425, and IBM corporation. We are indebted to Feng
Wan and Sudhir Rustogi for useful discussions and to the anonymous reviewers
for helpful comments.
--R
Concurrent programming for distributed artificial intelligence.
Model checking multi-agent systems
The process group approach to reliable distributed computing.
A response to Cheriton and Skeen's criticism of causal and totally ordered communication.
Coordination languages and their significance.
Understanding the limitations of causally and totally ordered communication.
Foundation for intelligent physical agents
PageSpace: An architecture to coordinate distributed applications on the web.
Coordinating multiagent applications on the WWW: A reference architecture.
Temporal and modal logic.
Interacting Processes: A Multiparty Approach to Coordinated Distributed Programming.
Transaction Processing: Concepts and Techniques.
Detecting causal relationships in distributed computations: In search of the holy grail.
Agent communication languages: Rethinking the principles.
Applying the mu-calculus in planning and reasoning about action
A customizable coordination service for autonomous agents.
Developing formal specifications to coordinate heterogeneous autonomous agents.
An ontology for commitments in multiagent systems: Toward a unification of normative concepts.
Gerhard Wei-
Verifiable semantics for agent communication languages.
--TR
--CTR
Manfred A. Jeusfeld , Paul W. P. J. Grefen, Detection tests for identifying violators of multi-party contracts, ACM SIGecom Exchanges, v.5 n.3, p.19-28, April 2005
Ashok U. Mallya , Michael N. Huhns, Commitments Among Agents, IEEE Internet Computing, v.7 n.4, p.90-93, July
Amit K. Chopra , Munindar P. Singh, Contextualizing commitment protocol, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Jeremy Pitt , Frank Guerin , Chris Stergiou, Protocols and intentional specifications of multi-party agent conversions for brokerage and auctions, Proceedings of the fourth international conference on Autonomous agents, p.269-276, June 03-07, 2000, Barcelona, Spain
Feng Wan , Munindar P. Singh, Formalizing and achieving multiparty agreements via commitments, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Nicoletta Fornara , Marco Colombetti, Operational specification of a commitment-based agent communication language, Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2, July 15-19, 2002, Bologna, Italy
Feng Wan , Munindar P. Singh, Enabling Persistent Web Services via Commitments, Information Technology and Management, v.6 n.1, p.41-60, January 2005
Pinar Yolum , Munindar P. Singh, Flexible protocol specification and execution: applying event calculus planning using commitments, Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2, July 15-19, 2002, Bologna, Italy
Ashok U. Mallya , Munindar P. Singh, Modeling exceptions via commitment protocols, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Lai Xu, A multi-party contract model, ACM SIGecom Exchanges, v.5 n.1, p.13-23, July, 2004
Pinar Yolum, Towards design tools for protocol development, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Jeremy Pitt , Lloyd Kamara , Marek Sergot , Alexander Artikis, Formalization of a voting protocol for virtual organizations, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Pnar Yolum, Design time analysis of multiagent protocols, Data & Knowledge Engineering, v.63 n.1, p.137-154, October, 2007
Chrysanthos Dellarocas , Mark Klein , Juan Antonio Rodriguez-Aguilar, An exception-handling architecture for open electronic marketplaces of contract net software agents, Proceedings of the 2nd ACM conference on Electronic commerce, p.225-232, October 17-20, 2000, Minneapolis, Minnesota, United States
Frank Guerin, Applying game theory mechanisms in open agent systems with complete information, Autonomous Agents and Multi-Agent Systems, v.15 n.2, p.109-146, October 2007
Peter McBurney , Simon Parsons, Posit spaces: a performative model of e-commerce, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Lalana Kagal , Tim Finin, Modeling conversation policies using permissions and obligations, Autonomous Agents and Multi-Agent Systems, v.14 n.2, p.187-206, April 2007
Pnar Yolum , Munindar P. Singh, Reasoning about Commitments in the Event Calculus: An Approach for Specifying and Executing Protocols, Annals of Mathematics and Artificial Intelligence, v.42 n.1-3, p.227-253, September 2004
Munindar P. Singh, Synthesizing Coordination Requirements for Heterogeneous Autonomous Agents, Autonomous Agents and Multi-Agent Systems, v.3 n.2, p.107-132, June 2000
Phillipa Oaks , Arthur Hofstede, Guided interaction: A mechanism to enable ad hoc service interaction, Information Systems Frontiers, v.9 n.1, p.29-51, March 2007
Chihab Hanachi , Christophe Sibertin-Blanc, Protocol Moderators as Active Middle-Agents in Multi-Agent Systems, Autonomous Agents and Multi-Agent Systems, v.8 n.2, p.131-164, March 2004 | protocols;formal methods;commitments;causality;temporal logic |
608643 | Semantic Issues in the Verification of Agent Communication Languages. | This article examines the issue of developing semantics for agent communication languages. In particular, it considers the problem of giving a verifiable semantics for such languages: a semantics where conformance (or otherwise) to the semantics could be determined by an independent observer. These problems are precisely defined in an abstract formal framework. Using this framework, a number of example agent communication frameworks are defined. A discussion is then presented of the various options open to designers of agent communication languages, with respect to the problem of verifying conformance. | Introduction
One of the main reasons why multi-agent systems are currently a major
area of research and development activity is that they are seen as a key
enabling technology for the Internet-wide electronic commerce systems
that are widely predicted to emerge in the near future [20]. If this
vision of large-scale, open multi-agent systems is to be realised, then the
fundamental problem of inter-operability must be addressed. It must be
possible for agents built by dierent organisations using dierent hardware
and software platforms to safely communicate with one-another
via a common language with a universally agreed semantics.
The inter-operability requirement has led to the development of
several standardised agent communication languages (acls) [30, 19].
However, to gain acceptance, particularly for sensitive applications such
as electronic commerce, it must be possible to determine whether or not
any system that claims to conform to an acl standard actually does
so. We say that an acl standard is veriable if it enjoys this property.
Unfortunately, veriability has to date received little attention by the
standards community (although it has been recognised as an issue [19,
p46]). In this article, we establish a simple formal framework that allows
us to precisely dene what it means for an acl to be veriable. This
framework is dened in section 3, following a brief discussion of the
background to this work. We then formally dene what it means for an
acl to be veriable in section 4. The basic idea is to show how demonstrating
conformance to an acl semantics can be seen as a verication
problem in the standard software engineering sense [7]. Demonstrating
that a program semantically complies to a standard involves showing
that the program satises the specication given by the semantics. If
the semantics are logical, then demonstrating compliance thus reduces
to a proof problem. We discuss the practical implications of these definitions
in section 4.1. In section 5, we give examples of some acls,
and show that some of these are veriable, while others are not. In
section 6, we discuss an alternative approach to verication, in which
verication is done via model checking rather than proof. Finally, in
section 7, we discuss the implications of our results, with emphasis on
future directions for work on veriable acls.
2. Background
Current techniques for developing the semantics of acls trace their
origins to speech act theory. In this section, we give a brief overview of
this work.
2.1. Speech Acts
The theory of speech acts is generally recognised as having begun in the
work of the philosopher John Austin [4]. Austin noted that a certain
class of natural language utterances | hereafter referred to as speech
acts | had the characteristics of actions, in the sense that they change
the state of the world in a way analogous to physical actions. It may
seem strange to think of utterances changing the world in the way
that physical actions do. If we pick up a block from a table (to use
an overworked but traditional example), then the world has changed
in an obvious way. But how does speech change the world? Austin
gave as paradigm examples declaring war and saying "I now pronounce man and wife". Stated in the appropriate circumstances, these
utterances clearly change the state of the world in a very tangible way 1 .
Austin identied a number of performative verbs, which correspond
to various dierent types of speech acts. Examples of such performative
verbs are request, inform, and promise. In addition, Austin distinguished
three dierent aspects of speech acts: the locutionary act, or
act of making an utterance (e.g., saying \Please make some tea"), the
illocutionary act, or action performed in saying something (e.g., \He
requested me to make some tea"), and perlocution, or eect of the act
(e.g., \He got me to make tea").
1 Notice that when referring to the eects of communication, we are ignoring
\pathological" cases, such as shouting while on a ski run and causing an avalanche.
Similarly, we will ignore \microscopic" eects (such as the minute changes in pressure
or temperature in a room caused by speaking).
Austin referred to the conditions required for the successful completion
of performatives as felicity conditions. He recognized three important
felicity conditions:
1. a) There must be an accepted conventional procedure for the performative
b) The circumstances and persons must be as specied in the
procedure.
2. The procedure must be executed correctly and completely.
3. The act must be sincere, and any uptake required must be com-
pleted, insofar as is possible.
Austin's work was rened and considerably extended by Searle, in
his 1969 book Speech Acts [38]. Searle identied several properties that
must hold for a speech act performed between a hearer and a speaker
to succeed, including normal I/O conditions, preparatory conditions,
and sincerity conditions. For example, consider a request by speaker
to hearer to perform action:
1. Normal I/O conditions. Normal I/O conditions state that hearer
is able to hear the request (thus must not be deaf, ...), the act was performed in normal circumstances (not in a film or play, ...),
etc.
2. Preparatory conditions. The preparatory conditions state what must
be true of the world in order that speaker correctly choose the
speech act. In this case, hearer must be able to perform action,
and speaker must believe that hearer is able to perform action.
Also, it must not be obvious that hearer will do action anyway.
3. Sincerity conditions. These conditions distinguish sincere performances
of the request; an insincere performance of the act might
occur if speaker did not really want action to be performed.
Searle also gave a five-point typology of speech acts:
1. Representatives. A representative act commits the speaker to the
truth of an expressed proposition. The paradigm case is informing.
2. Directives. A directive is an attempt on the part of the speaker to
get the hearer to do something. Paradigm case: requesting.
3. Commissives. Commit the speaker to a course of action. Paradigm
case: promising.
4. Expressives. Express some psychological state (e.g., gratitude). Paradigm
case: thanking.
5. Declarations. Eect some changes in an institutional state of aairs.
Paradigm case: declaring war.
2.2. Speech Acts in Artificial Intelligence
In the late 1960s and early 1970s, a number of researchers in arti-
cial intelligence (ai) began to build systems that could plan how to
autonomously achieve goals [2]. Clearly, if such a system is required
to interact with humans or other autonomous agents, then such plans
must include speech actions. This introduced the question of how
the properties of speech acts could be represented such that planning
systems could reason about them. Cohen and Perrault [15] gave an
account of the semantics of speech acts by using techniques developed
in ai planning research [18]. The aim of their work was to develop a
theory of speech acts:
"[B]y modelling them in a planning system as operators defined ... in terms of speakers and hearers beliefs and goals. Thus speech acts are treated in the same way as physical actions". [15]
The formalism chosen by Cohen and Perrault was the strips nota-
tion, in which the properties of an action are characterised via pre-and
post-conditions [18]. The idea is very similar to Hoare logic [24].
Cohen and Perrault demonstrated how the pre- and post-conditions of
speech acts such as request could be represented in a multi-modal logic
containing operators for describing the beliefs, abilities, and wants of
the participants in the speech act.
Consider the Request act. The aim of the Request act will be for a
speaker to get a hearer to perform some action. Figure 1 denes the
Request act. Two preconditions are stated: the \cando.pr" (can-do pre-
conditions), and \want.pr" (want pre-conditions). The cando.pr states
that for the successful completion of the Request , two conditions must
hold. First, the speaker must believe that the hearer of the Request
is able to perform the action. Second, the speaker must believe that
the hearer also believes it has the ability to perform the action. The
want.pr states that in order for the Request to be successful, the speaker
must also believe it actually wants the Request to be performed. If
the pre-conditions of the Request are fullled, then the Request will be
successful: the result (dened by the \eect" part of the denition) will
be that the hearer believes the speaker believes it wants some action
to be performed.
Request(S, H, α)
Preconditions  Cando.pr: (S BELIEVE (H CANDO α)) ∧ (S BELIEVE (H BELIEVE (H CANDO α)))
               Want.pr:  (S BELIEVE (S WANT requestInstance))
Effect         (H BELIEVE (S BELIEVE (S WANT α)))

CauseToWant(A1, A2, α)
Preconditions  Cando.pr: (A1 BELIEVE (A2 BELIEVE (A2 WANT α)))
               Want.pr:  (none)
Effect         (A1 BELIEVE (A1 WANT α))

Figure 1. Definitions from the Plan-Based Theory of Speech Acts
While the successful completion of the Request ensures that the
hearer is aware of the speaker's desires, it is not enough in itself to
guarantee that the desired action is actually performed. This is because
the denition of Request only models the illocutionary force of the
act. It says nothing of the perlocutionary force. What is required is a
mediating act. Table 1 gives a denition of CauseToWant , which is an
example of such an act. By this denition, an agent will come to believe
it wants to do something if it believes that another agent believes it
wants to do it. This denition could clearly be extended by adding more
pre-conditions, perhaps to do with beliefs about social relationships or
power structures.
Using these ideas, and borrowing a formalism for representing the
mental state of agents that was developed by Robert Moore [31], Douglas
Appelt was able to implement a system that was capable of planning
to perform speech acts [3].
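The plan-operator reading of Request and CauseToWant can be rendered as STRIPS-style records, as in the sketch below (Python). The belief formulas are kept as strings purely for illustration, and the argument structure follows the informal description above rather than Cohen and Perrault's original notation.

from dataclasses import dataclass
from typing import List

@dataclass
class Operator:
    name: str
    preconditions: List[str]
    effects: List[str]

def request(S: str, H: str, act: str) -> Operator:
    return Operator(
        name=f"Request({S},{H},{act})",
        preconditions=[
            f"({S} BELIEVE ({H} CANDO {act}))",               # cando.pr
            f"({S} BELIEVE ({H} BELIEVE ({H} CANDO {act})))",
            f"({S} BELIEVE ({S} WANT request))",              # want.pr
        ],
        effects=[f"({H} BELIEVE ({S} BELIEVE ({S} WANT {act})))"],
    )

def cause_to_want(A1: str, A2: str, act: str) -> Operator:
    # the mediating act: A1 comes to want act after believing A2 believes A2 wants it
    return Operator(
        name=f"CauseToWant({A1},{A2},{act})",
        preconditions=[f"({A1} BELIEVE ({A2} BELIEVE ({A2} WANT {act})))"],
        effects=[f"({A1} BELIEVE ({A1} WANT {act}))"],
    )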
2.3. Speech Acts as Rational Action
While the plan-based theory of speech acts was a major step forward,
it was recognised that a theory of speech acts should be rooted in
a more general theory of rational action. This observation led Cohen
and Levesque to develop a theory in which speech acts were modelled
as actions performed by rational agents in the furtherance of their
intentions [13]. The foundation upon which they built this model of
rational action was their theory of intention, described in [12]. The for-
mal theory is too complex to describe here, but as a flavour, here is the
Cohen-Levesque denition of requesting, paraphrased in English [13,
p241]:
A request is an attempt on the part of spkr , by doing e, to bring
about a state where, ideally, (i) addr intends α (relative to the spkr still having that goal, and addr still being helpfully inclined to spkr), and (ii) addr actually eventually does α, or at least brings
about a state where addr believes it is mutually believed that it
wants the ideal situation.
Actions in the Cohen-Levesque framework were modelled using techniques
adapted from dynamic logic [23].
2.4. Agent Communication Languages: KQML and FIPA
Throughout the 1980s and 1990s, interest in multi-agent systems developed
rapidly [6, 41]. An obvious problem in multi-agent systems
is how to get agents to communicate with one-another | the inter-operability
issue referred to in the introduction. To this end, in the early
1990s, the darpa Knowledge Sharing Eort (kse) began to develop
the Knowledge Query and Manipulation Language (kqml) and the
associated Knowledge Interchange Format (kif) as a common frame-work
via which multiple expert systems (cf. agents) could exchange
knowledge [33, 30].
kqml is essentially an \outer" language for messages: it denes a
simple lisp-like format for messages, and 41 performatives, or message
types, that dene the intended meaning of a message. Example kqml
performatives include ask-if and tell. The content of messages was
not considered part of the kqml standard, but kif was also dened,
to express such content. kif is essentially classical rst-order predicate
logic, recast in a lisp-like syntax.
To better understand the kqml language, consider the following
example [30, p354]:
(ask-one
:content (PRICE IBM ?price)
:receiver stock-server
:language LPROLOG
:ontology NYSE-TICKS)
The intuitive interpretation of this message is that the sender is asking
about the price of ibm stock. The performative is ask-one, which an
agent will use to ask a question of another agent where exactly one reply
is needed. The various other components of this message represent its
attributes. The most important of these is the :content eld, which
species the message content. In this case, the content simply asks for
the price of ibm shares. The :receiver attribute species the intended
recipient of the message, the :language attribute species that the
language in which the content is expressed is called LPROLOG (the recipient
is assumed to \understand" LPROLOG), and the nal :ontology
attribute denes the terminology used in the message.
Formal denitions of the syntax of kqml and kif were developed
by the kse, but kqml lacked any formal semantics until Labrou and
Finin's [26]. These semantics were presented using a pre- and post-condition
closely related to Cohen and Perrault's plan-based
theory of speech acts [15]. These pre- and post-conditions were specied
by Labrou and Finin using a logical language containing modalities
for belief, knowledge, wanting, and intending. However, Labrou and
Finin recognised that any commitment to a particular semantics for
this logic itself would be contentious, and so they refrained from giving
it a semantics. However, this rather begs the question of whether their
semantics are actually well-founded. We return to this issue later.
The take-up of kqml by the multi-agent systems community was
signicant. However, Cohen and Levesque (among others) criticized
kqml on a number of grounds [14], the most important of which being
that, the language was missing an entire class of performatives |
commissives, by which one agent makes a commitment to another. As
Cohen and Levesque point out, it is di-cult to see how many multi-agent
scenarios could be implemented without commissives, which appear
to be important if agents are to coordinate their actions with
one-another [25].
In 1995, the Foundation for Intelligent Physical Agents (fipa) began
its work on developing standards for agent systems. The centrepiece of
this initiative is the development of an acl [19] 2 . This acl is supercially
similar to kqml: it denes an \outer" language for messages,
it denes 20 performatives (such as inform) for dening the intended
interpretation of messages, and it does not mandate any specic language
for message content. In addition, the concrete syntax for fipa
acl messages closely resembles that of kqml. Here is an example of a
fipa acl message (from [19, p10]):
(inform
:sender agent1
:receiver agent2
:content (price good2 150)
:language sl
:ontology hpl-auction)

2 fipa simply refer to their acl as "acl", which can result in confusion when discussing acls in general. To avoid ambiguity, we will always refer to "the fipa acl".
Even a superficial glance confirms that the fipa acl is similar to kqml; the relationship is discussed in [19, pp. 68-69].
The fipa acl has been given a formal semantics, in terms of a Semantic
Language (sl). The approach adopted for dening these semantics
draws heavily on [13], but in particular on Sadek's enhancements to
this work [9]. sl is a quantied multi-modal logic, which contains modal
operators for referring to the beliefs, desires, and uncertain beliefs of a-
gents, as well as a simple dynamic logic-style apparatus for representing
agent's actions. The semantics of the fipa acl map each acl message
to a formula of sl, which denes a constraint that the sender of the
message must satisfy if it is to be considered as conforming to the fipa
acl standard. fipa refer to this constraint as the feasibility condition.
The semantics also map each message to an sl-formula which denes
the rational eect of the action. The rational eect of a messages is its
purpose: what an agent will be attempting to achieve in sending the
message (cf. perlocutionary act). However, in a society of autonomous
agents, the rational eect of a message cannot (and should not) be
guaranteed. Hence conformance does not require the recipient of a
message to respect the rational eect part of the acl semantics |
only the feasibility condition.
To illustrate the fipa approach, we give an example of the semantics
of the fipa inform performative [19, p25]:
B_i φ ∧ ¬B_i(Bif_j φ ∨ U_j φ)    (1)
The B_i is a modal connective for referring to the beliefs of agents (see e.g., [21]); Bif is a modal connective that allows us to express whether an agent has a definite opinion one way or the other about the truth or falsity of its parameter; and U is a modal connective that allows us to represent the fact that an agent is "uncertain" about its parameter. Thus an agent i sending an inform message with content φ to agent j will be respecting the semantics of the fipa acl if it believes φ, and it is not the case that it believes of j either that j believes whether φ is true or false, or that j is uncertain of the truth or falsity of φ.
fipa recognise that \demonstrating in an unambiguous way that a
given agent implementation is correct with respect to [the semantics]
is not a problem which has been solved" [19, p46], and identify it as
an area of future work. (Checking that an implementation respects the
syntax of an acl like kqml or fipa is, of course, trivial.) If an agent
communication language such as fipa's acl is ever to be widely used
| particularly for such sensitive applications as electronic commerce
| then such conformance testing is obviously crucial. However, the
problem of conformance testing (verication) is not actually given a
concrete denition in [19], and no indication is given of how it might
be done. In short, the aim of the remainder of this article is to unambiguously
dene what it means for an agent communication language
such as that dened by fipa to be veriable, and then to investigate
the issues surrounding such verication.
3. Agent Communication Frameworks
In this section, we present an abstract framework that allows us to
precisely define the verifiable acl semantics problem. First, we will assume that we have a set Ag = {1, ..., n} of agent names: these are the unique identifiers of agents that will be sending messages to one
another in a system.
We shall assume that agents communicate using a communication
language L C . This acl may be kqml together with kif [26], it may be
the fipa-97 communication language [19], or some other proprietary
language. The exact nature of L C is not important for our purposes.
The only requirements that we place on L C are that it has a well-
defined syntax and a well-defined semantics. The syntax identifies a set wff(L_C) of well-formed formulae of L_C, i.e., syntactically acceptable constructions of L_C. Since we usually think of formulae of L_C as being messages, we use μ (with annotations: μ, μ′, μ_1, ...) to stand for members of wff(L_C).
The semantics of L C are assumed to be dened in terms of a second
language L S , which we shall call the semantic language. The idea is that
if an agent sends a message, then the meaning of sending this message
is dened by a formula of L S . This formula denes what fipa [19,
p48] refer to as the feasibility pre-condition | essentially, a constraint
that the sender of the message must satisfy in order to be regarded
as being \sincere" in sending the message. For example, the feasibility
pre-condition for an inform act would typically state that the sender
of an inform must believe the content of the message, otherwise the
sender is not being sincere.
The idea of dening the semantics of one language in terms of another
might seem strange, but the technique is common in computer
science:
- when Hoare-logic style semantics are given for programming languages, the semantics of a program written in, for example, pascal or c are defined in terms of a second language, namely classical first-order logic [24];
- an increasingly common approach to defining the semantics of many programming languages is to give them a temporal semantics, whereby the semantics of a program in a language such as c or pascal are defined as a formula of temporal logic [28].
Note that in this article we are not concerned with the effects that messages have on recipients. This is because although the "rational effect" of a message on its recipient is the reason that the sender will send a message (e.g., agent i informs agent j of φ because i wants j to believe φ), the sender can have no guarantee that the recipient will even receive the message, still less that it will have the intended effect.
The key to our notion of semantics is therefore what properties must
hold of the sender of a message, in order that it can be considered to
be sincere in sending it.
Formally, the semantics of the acl L_C are given by a function that maps a single message μ of L_C to a single formula of L_S, which represents the semantics of μ. Note that the "sincerity condition" acts in effect like a specification (in the software engineering sense), which must be satisfied by any agent that claims to conform to the semantics. Verifying that an agent program conforms to the semantics is thus a process of checking that the program satisfies this specification.
To make the idea concrete, recall the fipa semantics of inform messages, given in (1), above. In our framework, the semantics map an inform message with content φ, sent by i to j, to the L_S-formula B_i φ ∧ ¬B_i(Bif_j φ ∨ U_j φ). It should be obvious how this corresponds to the fipa definition.
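The mapping from messages to semantic-language formulas can be pictured as an ordinary function, as in the sketch below (Python), which renders the inform case with the formula built as a plain string. The constructor and field names are our own illustrative assumptions, not part of the fipa or kqml specifications.

from dataclasses import dataclass

@dataclass(frozen=True)
class Message:
    performative: str   # e.g. "inform"
    sender: str         # i
    receiver: str       # j
    content: str        # phi, a formula of the content language

def sincerity_condition(m: Message) -> str:
    """Map a message of L_C to the L_S formula its sender must satisfy."""
    i, j, phi = m.sender, m.receiver, m.content
    if m.performative == "inform":
        # B_i phi  and  not B_i (Bif_j phi  or  U_j phi)
        return f"(B {i} {phi}) & ~(B {i} ((Bif {j} {phi}) | (U {j} {phi})))"
    raise NotImplementedError(m.performative)

print(sincerity_condition(Message("inform", "agent1", "agent2", "(price good2 150)")))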
In order that the semantics of L_C be well-defined, we must also have a semantics for our semantic language L_S itself. While there is no reason in principle why we should not define the semantics of L_S in terms of a further language L_S′ (and so on), we assume without loss
of generality that the semantics of L_S are given with respect to a class mod(L_S) of logical models for L_S. More precisely, the semantics of L_S will be defined via a satisfaction relation "|=_S", where |=_S ⊆ wff(L_S) × mod(L_S). By convention, if M ∈ mod(L_S) and φ ∈ wff(L_S) then we write M |=_S φ to indicate that (φ, M) ∈ |=_S; we read this as "φ is satisfied (or equivalently, is true) in M". The meaning of a formula φ of L_S is then the set of models in which φ is satisfied. We define a function that maps each formula φ ∈ wff(L_S) to the set of models in which φ is satisfied.
Agents are assumed to be implemented by programs, and we let Π stand for the set of all such agent programs. For each agent i ∈ Ag, we assume that π_i ∈ Π is the program that implements it. For our purposes, the contents of Π are not important: they may be java, c, or c++ programs, for example. At any given moment, we assume that a program π_i may be in any of a set L_i of local states. The local state of a program is essentially just a snapshot of the agent's memory at some instant in time. As an agent program π_i executes, it will perform operations (such as assignment statements) that modify its state. Let L = ⋃_{i∈Ag} L_i be the set of all local states. We use l (with annotations) to stand for members of L.
One of the key activities of agent programs is communication: they send and receive messages, which are formulae of the communication language L_C. We assume that we can identify when an agent emits such a message, and write send(π_i, μ, l) to indicate the fact that agent i, implemented by program π_i ∈ Π, sends a message μ ∈ L_C when in state l ∈ L_i.
We now dene what we mean by the semantics of an agent program.
Intuitively, the idea is that when an agent program i is in state l , we
must be able to characterise the properties of the program as a formula
of the semantic language L S . This formula is the theory of the program.
In theoretical computer science, the derivation of a program's theory is
the rst step to reasoning about its behaviour. In particular, a program
theory is the basis upon which we can verify that the program satises
its specication. Formally, a program semantics is a function that maps
a pair consisting of an agent program and a local state to a formula
of the semantic language L_S.

Figure 2. The components of an agent communication framework: agent programs and their states, the communication language L_C, the semantic language L_S, and the model structures mod(L_S).

Note that the program semantics must be
defined in terms of the same semantic language that was used to define the semantics of L_C; otherwise there is no point of reference between the two. Formally then, a semantics for agent program/state pairs is a function from such pairs to wff(L_S).
The relationships between the various formal components introduced
above are summarised in Figure 2. We now collect these various components
together and dene what we mean by an agent communication
framework.
DEFINITION 1. An agent communication framework is a tuple collecting the following components: a non-empty set Ag = {1, ..., n} of agents; for each i ∈ Ag, an agent program π_i ∈ Π together with its set L_i of local states; a communication language L_C; a semantic language L_S; and a semantics for agent program/state pairs.
We let F be the set of all such agent communication frameworks, and use f (with annotations: f, f′, f_1, ...) to stand for members of F.
4. Verifiability Defined
We are now in a position to dene what it means for an agent program,
in sending a message while in some particular state, to be respecting
the semantics of a communication framework. Recall that a communication
language semantics denes, for each message, a constraint, or
specication, which must be satised by the sender of the message if
it is to be considered as satisfying the semantics of the communication
language. The properties of a program when in some particular state
are given by the program semantics. This leads to the following definition.
DEFINITION 2. Suppose f is an agent communication framework, and that send(π_i, μ, l). Then agent i is said to respect the semantics of framework f iff every model of the program theory of (π_i, l) is also a model of the sincerity condition of μ.
Note that the problem could equivalently have been phrased in terms of logical consequence: the sincerity condition of μ is an L_S-logical consequence of the program theory of (π_i, l). If we had a sound and complete proof system ⊢_S for L_S, then we could similarly have phrased it as a proof problem. The first approach, however, is probably the most general.
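For a toy propositional semantic language, the check in Definition 2 amounts to an inclusion between two sets of models, as the brute-force sketch below illustrates (Python). This is only a schematic rendering of the idea, with formulas given as predicates over valuations; it is not a treatment of the modal language sl.

from itertools import product
from typing import Callable, Dict, Set, Tuple

Valuation = Dict[str, bool]

def models(atoms: Tuple[str, ...], holds: Callable[[Valuation], bool]) -> Set[Tuple[bool, ...]]:
    """All valuations over `atoms` satisfying the given formula (as a predicate)."""
    return {vals for vals in product([False, True], repeat=len(atoms))
            if holds(dict(zip(atoms, vals)))}

def respects(atoms, program_theory, sincerity) -> bool:
    """Definition 2: every model of the program theory satisfies the sincerity condition."""
    return models(atoms, program_theory) <= models(atoms, sincerity)

# Toy example: the program theory asserts the agent believes phi (atom "b_phi"),
# and the sincerity condition for inform(phi) requires exactly that belief.
atoms = ("b_phi",)
print(respects(atoms, lambda v: v["b_phi"], lambda v: v["b_phi"]))   # True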
Using this definition, we can define what it means for a communication framework to have a verifiable semantics.
DEFINITION 3. An agent communication framework f is verifiable iff it is a decidable question whether agent i respects the semantics of f when sending μ in state l, for arbitrary i, l, and μ.
The intuition behind veriability is as follows: if an agent communication
framework enjoys this property, then we can determine whether
or not an agent is respecting the framework's communication language
semantics whenever it sends a message.
If a framework is veriable, then we know that it is possible in principle
to determine whether or not an agent is respecting the semantics
of the framework. But a framework that is veriable in principle is
not necessarily veriable in practice. This is the motivation behind the
following denition.
DEFINITION 4. An agent communication framework f ∈ F is said to be practically verifiable iff it is decidable whether agent i respects the semantics of f when sending μ in state l, in time polynomial in the size of the inputs (the framework f, the message μ, and the state l).
If we have a practically veriable framework, then we can do the
verication in polynomial time, which implies that we have at least
some hope of doing automatic verication using computers that we can
envisage today. Our ideal, when setting out an agent communication framework f, should clearly be to construct f such that it is practically verifiable. However, practical verifiability is quite a demanding proper-
ty, as we shall see in section 5. In the following subsection, we examine
the implications of these denitions.
4.1. What does it mean to be Verifiable?
If we had a veriable agent communication framework, what would it
look like? Let us take each of the components of such a framework in
turn. First, our set Ag of agents, implemented by programs i , (where
these programs are written in an arbitrary programming language).
This is straightforward: we obviously have such components today.
Next, we need a communication language L C , with a well-defined syntax
and semantics, where the semantics are given in terms of L S , a semantic
language. Again, this is not problematic: we have such a language L C
in both kqml and the fipa-97 language. Taking the fipa case, the
semantic language is sl, a quantified multi-modal logic with equality.
This language in turn has a well-defined syntax and semantics, and
so next, we must look for a program semantics. At this point, we
encounter problems.
Put simply, the fipa semantics are given in terms of mental states,
and since we do not understand how such states can be systematically
attributed to programs, we cannot verify that such programs respect
the semantics. More precisely, the semantics of sl are given in the
normal modal logic tradition of Kripke (possible worlds) semantics,
where each agent's "attitudes" (belief, desire, . . . ) are characterised as
relations holding between different states of affairs. Although Kripke
semantics are attractive from a mathematical perspective, it is important
to note that they are not connected in any principled way with
computational systems. That is, for any given program (say,
a java program), there is no known way of attributing to that program
an sl formula (or, equivalently, a set of sl models), which characterises
it in terms of beliefs, desires, and so on. Because of this, we say that sl
(and most similar logics with Kripke semantics) are ungrounded: they
have no concrete computational interpretation. In other words, if the
semantics of L S are ungrounded (as they are in the fipa-97 sl case),
then we have no semantics for programs, and hence an unverifiable
communication framework. Although work is going on to investigate
how arbitrary programs can be ascribed attitudes such as beliefs and
desires, the state of the art ([8]) is considerably behind what would be
required for acl verification. Other researchers have also recognised
this difficulty [39, 34].
Note that it is possible to choose a semantic language L S such
that a principled program semantics can be derived. For example,
temporal logic has long been used to define the semantics of programming
languages [29]. A temporal semantics for a programming language
defines for every program a temporal logic formula characterising the
meaning of that program. Temporal logic, although ultimately based
on Kripke semantics, is firmly grounded in the histories traced out by
programs as they execute, though of course, standard temporal logic
makes no reference to attitudes such as belief and desire. Also note that
work in knowledge theory has shown how knowledge can be attributed
to computational processes in a systematic way [17]. However, this
work gives no indication of how attitudes such as desiring or intending
might be attributed to arbitrary programs. (We use techniques from
knowledge theory to show how a grounded semantics can be given to a
communication language in Example 2 of section 5.)
Another issue is the high computational complexity of the verification
process itself [32]. Ultimately, determining whether an agent
implementation is respecting the semantics of a communication framework
reduces to a logical proof problem, and the complexity of such
problems is well-known. If the semantic language L S of a framework f
is equal in expressive power to first-order logic, then f is of course not
verifiable. For quantified multi-modal logics (such as that used by fipa
to define the semantics of their acl), the proof problem is often much
harder than this: proof methods for quantified multi-modal logics
are very much at the frontiers of theorem-proving research (cf. [1]). In
the short term, at least, this complexity issue is likely to be another
significant obstacle in the way of acl verification.
To sum up, it is entirely possible to define a communication language
L C with semantics in terms of a language L S . However, giving
a program semantics for a semantic language (such as that of fipa-97)
with ungrounded semantics is a serious unsolved problem.
5. Example Frameworks
To illustrate the idea of verification, as introduced above, in this section
we will consider a number of progressively richer agent communication
frameworks. For each of these frameworks, we discuss the issue
of verifiability, and where possible, characterise the complexity of the
verification problem.
5.1. Example 1: Classical Propositional Logic.
For our first example, we define a simple agent communication framework
f 1 in which agents communicate by exchanging formulae of classical
propositional logic. The intuitive semantics of sending a message
φ is that the sender is informing other agents of the truth of φ. An
agent sending out a message φ will be respecting the semantics of the
language if it "believes" (in a sense that we precisely define below)
that φ is true. An agent will not be respecting the semantics if it
sends a message that it "believes" to be false. We also assume that
agent programs exhibit a simple behaviour of sending out all messages
that they believe to be true. We show that framework f 1 is verifiable,
and that in fact every agent program in this framework respects the
semantics of f 1 .
Formally, we must define the components of framework f 1 . These
components are as follows. First, Ag is some arbitrary non-empty
set; the contents are not significant. Second, since agents communicate
by simply exchanging messages that are simply formulae of
classical propositional logic, L 0 , we have L C = L 0 . Thus the set wff(L 0 )
contains formulae made up of the proposition symbols p, q, . . . ,
combined into formulae using the classical connectives "¬" (not), "∧"
(and), "∨" (or), and so on.
We let the semantic language L S also be classical propositional logic,
and define the L C semantic function simply as the identity function.
The L S semantic function is the usual propositional denotation function;
the definition is entirely standard, and so we omit it in the interests of brevity.
An agent i's state l i is defined to be a set of formulae of propositional
logic, hence L i is the set of all such sets of formulae. Each agent program is assumed to
simply implement the following rule: an agent program i sends a message φ
when in state l iff φ is present in l. The semantics of agent programs are then defined
as follows: equation (2) assigns to program i in state l the conjunction of the formulae in l.
In other words, the meaning of a program in state l is just the conjunction
of formulae in l . The following theorem sums up the key properties
of this simple agent communication framework.
THEOREM 1.
1. Framework f 1 is verifiable.
2. Every agent in f 1 does indeed respect the semantics of f 1 .
Proof. For (1), suppose that agent i sends message φ while in state l.
Then i is respecting the semantics for f 1 iff φ follows from the semantics
of i in l, which by the f 1 definitions reduces to showing
that φ is an L 0 -logical consequence of the conjunction of the formulae in l.
Since L 0 logical consequence is obviously a decidable
problem, we are done. For (2), we know from equation (2) that the semantics
of i in state l is the conjunction of the formulae in l. Since φ is clearly
a logical consequence of the formulae in l if φ is present in l, we are done.
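As an illustration of framework f 1 (ours, not the article's formal machinery), the following Python sketch represents messages as nested tuples over proposition names, takes an agent's state to be a set of such formulae, and decides the verification question by brute-force truth-table checking of L 0 logical consequence.

from itertools import product

# Formulae are proposition names (strings) or nested tuples such as
# ('and', 'p', 'q'), ('or', 'p', 'q'), ('not', 'p').

def atoms(phi):
    if isinstance(phi, str):
        return {phi}
    return set().union(*[atoms(arg) for arg in phi[1:]])

def holds(phi, valuation):
    if isinstance(phi, str):
        return valuation[phi]
    op, *args = phi
    if op == 'not':
        return not holds(args[0], valuation)
    if op == 'and':
        return all(holds(a, valuation) for a in args)
    if op == 'or':
        return any(holds(a, valuation) for a in args)
    raise ValueError(op)

def entails(premises, phi):
    # L0 logical consequence decided by exhaustive valuation checking.
    props = set(atoms(phi))
    for p in premises:
        props |= atoms(p)
    names = sorted(props)
    for bits in product([False, True], repeat=len(names)):
        valuation = dict(zip(names, bits))
        if all(holds(p, valuation) for p in premises) and not holds(phi, valuation):
            return False
    return True

def respects_f1(state, message):
    # Sending `message` in `state` respects f1 iff the state entails it.
    return entails(state, message)

# An f1-style agent sends exactly the formulae in its state, so it trivially
# respects the semantics; other logical consequences also pass the check.
state = {('and', 'p', 'q')}
assert respects_f1(state, ('and', 'p', 'q'))
assert respects_f1(state, 'p')
assert not respects_f1(state, ('not', 'q'))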
An obvious next question is whether f 1 is practically verifiable, i.e.,
whether verification can be done in polynomial time. Here, observe that
verification reduces to a problem of determining logical consequence in
L 0 , which reduces to a test for L 0 -validity, and hence in turn to L 0 -
unsatisfiability. Since the L 0 -satisfiability problem is well-known to be
np-complete, we can immediately conclude the following.
THEOREM 2. The f 1 verification problem is co-np-complete.
Note that co-np-complete problems are ostensibly harder than merely
np-complete problems, from which we can conclude that practical
verification of f 1 is highly unlikely to be possible 3 .
5.2. Example 2: Grounded Semantics for Propositional
Logic.
One could argue that Example 1 worked because we made the assumption
that agents explicitly maintain databases of L 0 formulae: checking
whether an agent was respecting the semantics in sending a message φ
3 In fact, f 2 will be practically verifiable if and only if P = NP, which is regarded
as extremely unlikely [32].
amounted to determining whether φ was a logical consequence of this
database. This was a convenient, but, as the following example illustrates,
unnecessary assumption. For this example, we will again assume
that agents communicate by exchanging formulae of classical propositional
logic L 0 , but we make no assumptions about their programs or
internal state. We show that despite this, we can still obtain a verifiable
semantics, because we can ground the semantics of the communication
language in the states of the program. There is an impartial, objective
procedure we can apply to obtain a declarative representation of the
"knowledge" implicit within an arbitrary program, in the form of Fagin-
Halpern-Moses-Vardi knowledge theory [17]. To check whether an agent
is respecting the semantics of the communication language, we simply
check whether the information in the message sent by the agent is a
logical consequence of the knowledge implicit within the agent's state,
which we obtain using the tools of knowledge theory.
In what follows, we assume all sets are finite. As in Example 1, we
set both the communication language L C and the semantic language
L S to be classical propositional logic L 0 . We require some additional
definitions (see [17, pp. 103-114] for more details). Let the set G of global
states of a system be defined by G = L 1 × · · · × L n . We use g (with
annotations: g', g'', . . . ) to stand for members of G . We assume that we
have a vocabulary Φ of primitive propositions to express
the properties of a system. In addition, we assume it is possible to
determine whether or not any primitive proposition p ∈ Φ is true of
a particular global state or not. We write g ⊨ p to indicate that p is
true in state g . Next, we define a relation ∼i ⊆ G × L i for each agent
i ∈ Ag to capture the idea of indistinguishability. The idea is that if
an agent i is in state l ∈ L i , then a global state g ∈ G is
indistinguishable from the state l that i is currently in (written g ∼i l)
whenever the information available to i in l does not rule g out.
Now, for any given agent program i in local state l , we define
the positive knowledge set of i in l (written ks+(i, l)) to be the set of
propositions that are true in all global states that are indistinguishable
from l , and the negative knowledge set of i in l (written ks-(i, l))
to be the set of propositions that are false in all global states that are
indistinguishable from l . Formally,
ks+(i, l) = {p ∈ Φ | g ⊨ p for every g ∈ G such that g ∼i l}
ks-(i, l) = {p ∈ Φ | g ⊭ p for every g ∈ G such that g ∼i l}
Readers familiar with epistemic logic [17] will immediately recognise
that this construction is based on the definition of knowledge in distributed
systems. The idea is that if p ∈ ks+(i, l) (respectively,
p ∈ ks-(i, l)), then given the information that i has available in
state l , p must necessarily be true (respectively, false). Thus ks+(i, l)
represents the set of propositions that the agent i knows are true when
it is in state l ; and ks-(i, l) represents the set of propositions that i
knows are false when it is in state l .
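The knowledge-set construction can be illustrated with the following Python sketch (a simplification of ours: agents are indexed from 0, and a global state is taken to be indistinguishable from local state l for agent i exactly when its i-th component equals l, which is one natural reading of the relation above). The entails helper from the Example 1 sketch is reused for the final check.

from itertools import product

def global_states(local_state_sets):
    # G as the product of the agents' local state sets.
    return list(product(*local_state_sets))

def indistinguishable(G, i, l):
    # Assumed reading of g ~i l: agent i's component of g is exactly l.
    return [g for g in G if g[i] == l]

def knowledge_sets(G, vocabulary, truth, i, l):
    # ks+(i, l) and ks-(i, l) as described in the text; truth(p, g) says
    # whether primitive proposition p holds in global state g.
    cell = indistinguishable(G, i, l)
    ks_plus = {p for p in vocabulary if all(truth(p, g) for g in cell)}
    ks_minus = {p for p in vocabulary if all(not truth(p, g) for g in cell)}
    return ks_plus, ks_minus

def respects_f2(G, vocabulary, truth, i, l, message):
    # An agent respects f2 iff its message follows from what it knows.
    ks_plus, ks_minus = knowledge_sets(G, vocabulary, truth, i, l)
    premises = list(ks_plus) + [('not', p) for p in ks_minus]
    return entails(premises, message)   # entails from the Example 1 sketch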
The L C semantic function is defined to be the identity function
again. For the program semantics, we define the semantics of program i in state l
to be the conjunction of the propositions in ks+(i, l) together with the negations
of the propositions in ks-(i, l). This formula thus encodes the knowledge that the program i
has about the truth or falsity of propositions when in state l . The L S
semantic function is assumed to be the standard L 0 semantic func-
tion, as in Example 1. An agent will thus be respecting the semantics
of the communication framework if it sends a message such that this
message is guaranteed to be true in all states indistinguishable from
the one the agent is currently in. This framework has the following
property.
THEOREM 3. Framework f 2 is verifiable.
Proof. Suppose that agent i sends message φ while in state l, for
arbitrary i, φ, l. Then i is respecting the semantics for f 2 iff φ follows
from the program semantics of i in l, which by the f 2 definitions reduces
to an L 0 logical consequence check against the knowledge sets
ks+(i, l) and ks-(i, l). Computing G can be done in time O(|L 1 × · · · × L n |);
computing ∼i can be done in time O(|L i | · |G |); and given G and ∼i, computing
ks+(i, l) and ks-(i, l) can be done in time O(|Φ| · |G |). Once given
ks+(i, l) and ks-(i, l), determining whether i respects the semantics
reduces to the L 0 logical consequence problem, which is obviously decidable.
Since verification reduces to L 0 logical consequence checking, we can
use a similar argument to that used for Theorem 2 to show the problem
is in general no more complex than f 1 verification:
THEOREM 4. The f 2 verification problem is co-np-complete.
Note that the main point about this example is the way that the
semantics for programs were grounded in the states of programs. In
this example, the communication language was simple enough to make
the grounding easy. More complex communication languages with a
similarly grounded semantics are possible. We note in closing that
it is straightforward to extend framework f 2 to allow a much richer
agent communication language (including requesting, informing, and
commissives) [40].
5.3. Example 3: The fipa-97 acl.
For the final example, consider a framework f 3 in which we use the
fipa-97 acl, and the semantics for this language defined in [19]. Following
the discussion in section 4.1, it should come as no surprise that
such a framework is not verifiable. It is worth spelling out the reasons
for this. First, since the semantic language sl is a quantified multi-modal
logic, with greater expressive power than classical first order
logic, it is clearly undecidable. (As we noted above, the complexity of
the decision problem for quantified modal logics is often much harder
than for classical predicate logic [1].) So the f 3 verification problem is
obviously undecidable. But of course the problem is worse than this,
since as the discussion in section 4.1 showed, we do not have any idea
of how to assign a program semantics for semantic languages like sl,
because these languages have an ungrounded, mentalistic semantics.
6. Verification via Model Checking
The problem of verifying whether an agent implements the semantics
of a communication language has thus far been presented as one of
determining logical consequence, or, equivalently, as a proof problem.
Readers familiar with verification from theoretical computer science
will recognise that this corresponds to the "traditional" approach to
verifying that a program satisfies a specification. Other considerations
aside, a significant drawback to proof theoretic verification is the problem
of computational complexity. As we saw above, even if the semantic
language is as impoverished as classical propositional logic, verification
will be co-np-complete. In reality, logics for verification must be
considerably more expressive than this.
Problems with the computational complexity of verification logics
led researchers in theoretical computer science to investigate other approaches
to formal verification. The most successful of these is model
checking [27, 22, 10]. The idea behind model checking is as follows.
Recall that in proof theoretic verification, to verify that a program i
has some property φ when in state l , we derive the theory of that
program and attempt to establish that φ is a theorem of that theory.
In temporal semantics, for example [28, 29], the theory of a program is
a temporal logic formula such that the models of this formula correspond
to all possible runs of the program.
In contrast, model checking approaches work as follows. To determine
whether or not i has property φ when in state l , we proceed as
follows:
- Take the program i and the state l , and from them generate a model M i,l
that encodes all the possible computations of i.
- Determine whether or not φ is valid in M i,l ; the program i has
property φ in state l just in case the answer is "yes".
In order to encode all computations of the program, the model generated
in the first stage will be a branching time temporal model [16].
Intuitively, each branch (or path) through this model will correspond
to one possible execution of the program. Such a model can be generated
automatically from the text of a program in a typical imperative
programming language.
The main advantage of model checking over proof theoretic verification
is in complexity: model checking using the branching time
temporal logic ctl [11] can be done in time O(|φ| · |M|), where |φ| is
the size of the formula to be checked, and |M| is the size of the model
(i.e., the number of states it contains) [16]. Model-checking approaches
have recently been used to verify finite-state systems with up to 10^120
states [10].
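For illustration, the following self-contained Python sketch implements the standard explicit-state labelling approach to CTL model checking via naive fixed-point iteration (so it does not attain the linear-time bound quoted above); the model and formula encodings are our own and are not tied to any ACL.

def sat(model, phi):
    # Return the set of states of `model` satisfying CTL formula `phi`.
    # model = (states, succ, label): succ(s) yields successor states (assumed
    # non-empty for every s), label(s) is the set of atomic propositions at s.
    # Formulae: 'p', ('not', f), ('and', f, g), ('EX', f), ('EU', f, g), ('EG', f).
    states, succ, label = model
    if isinstance(phi, str):
        return {s for s in states if phi in label(s)}
    op = phi[0]
    if op == 'not':
        return set(states) - sat(model, phi[1])
    if op == 'and':
        return sat(model, phi[1]) & sat(model, phi[2])
    if op == 'EX':
        target = sat(model, phi[1])
        return {s for s in states if any(t in target for t in succ(s))}
    if op == 'EU':                        # least fixed point for E[f U g]
        f_states, result = sat(model, phi[1]), sat(model, phi[2])
        changed = True
        while changed:
            new = {s for s in f_states
                   if s not in result and any(t in result for t in succ(s))}
            changed = bool(new)
            result |= new
        return result
    if op == 'EG':                        # greatest fixed point for EG f
        result = sat(model, phi[1])
        changed = True
        while changed:
            drop = {s for s in result if not any(t in result for t in succ(s))}
            changed = bool(drop)
            result -= drop
        return result
    raise ValueError(op)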
Using a model checking approach to conformance testing for acls,
we would define the program semantics as a function
which assigns to every program/state pair an L S -model, which encodes
the properties of that program/state pair. Verifying that an agent respects
the semantics when sending a message would then involve checking whether the
sincerity condition associated with the message was valid in the model of the corresponding program/state pair.
The comparative efficiency of model checking is a powerful argument
in favour of the approach. Algorithms have been developed for
(propositional) belief-desire-intention logics that will take a model and
a formula and will efficiently determine whether or not the formula is
satisfied in that model [35, 5]. These belief-desire-intention logics are
closely related to those used to give a semantics to the fipa-97 acl.
However, there are two unsolved problems with such an approach.
The first problem is that of developing the program semantics.
We have procedures that, given a program, will generate a branching
temporal model that encodes all computations of that program. However,
these are not the same as models for belief-desire-intention logics.
Put simply, the problem is that we do not yet have any techniques for
systematically assigning beliefs, desires, intentions, and uncertainties
(as in the fipa-97 sl case [19]) to arbitrary programs. This is again
the problem of grounding that we referred to above. As a consequence,
we cannot do the first stage of the model checking process for acls
that have (ungrounded) fipa-like semantics.
The second problem is that model checking approaches have been
shown to be useful for systems that can be represented as finite state
models using propositional temporal logics. If the verification logic allows
arbitrary quantification (or the system to be verified is not finite
state), then a model checking approach is unlikely to be practicable.
To summarise, model checking approaches appear to have considerable
advantages over proof-theoretic approaches to verification with
respect to their much reduced computational complexity. However,
as with proof-theoretic approaches, the problem of ungrounded acl
semantics remains a major problem, with no apparent route of attack.
Also, the problem of model checking with quantified logics is an as-yet
untested area. Nevertheless, model checking seems a promising
direction for acl conformance testing.
7. Discussion
If agents are to be as widely deployed as some observers predict, then
the issue of inter-operation, in the form of standards for communication
languages, must be addressed. Moreover, the problem of
determining conformance to these standards must also be seriously
considered, for if there is no way of determining whether or not a system
that claims to conform to a standard does indeed conform to it, then the
value of the standard itself must be questioned. This article has given
the first precise definition of what it means for an agent communication
framework to be verifiable, and has identified some problematic issues
for verifiable communication language semantics, the most important
of which are the following:
- We must be able to characterise the properties of an agent program
as a formula of the language L S used to give a semantics
to the communication language. L S is often a multi-modal logic,
referring to (in the fipa-97 case, for example) the beliefs, desires,
and uncertainties of agents. We currently have very little idea
about systematic ways of attributing such mentalistic descriptions
to programs; the state of the art is considerably behind what
would be needed for anything like practical verification, and this
situation is not likely to change in the near future.
- The computational complexity of logical verification (particularly
using quantified multi-modal languages) is likely to prove a major
obstacle in the path of practical agent communication language
verification. Model checking approaches appear to be a promising
alternative.
In addition, the article has given examples of agent communication
frameworks, some of which are verifiable by this definition, others of
which (including the fipa-97 acl [19]) are not.
The results of this article could be interpreted as negative, in that
they imply that verification of conformance to acls using current techniques
is not likely to be possible. However, the article should emphatically
not be interpreted as suggesting that standards, particularly
standardised acls, are unnecessary or a waste of time. If agent technology
is to achieve its much vaunted potential as a new paradigm for
software construction, then such standards are important. However, it
may well be that we need new ways of thinking about the semantics
and verification of such standards. A number of promising approaches
have recently appeared in the literature [39, 34, 40]. One approach
that can work effectively in certain cases is mechanism design [36].
The basic idea is that in certain multi-agent scenarios (auctions are a
well-known example), it is possible to design an interaction protocol
so that the dominant strategy for any participating agent is to tell
the truth. Vickrey's mechanism is probably the best-known example of
such a technique [37]. In application domains where such techniques are
feasible, they can be used to great effect. However, most current multi-agent
applications do not lend themselves to such techniques. While
there is therefore great potential for the application of mechanism design
in the long term, in the short term it is unlikely to play a major
role in agent communication standards.
--R
Readings in Planning.
Planning English Sentences.
How to Do Things With Words.
Readings in Distributed Artificial Intelligence.
The Correctness Problem in Computer Science.
Reasoning About Knowledge.
The Temporal Logic of Reactive and Concurrent Systems.
Temporal Verification of Reactive Systems.
Computational Complexity.
Rules of Encounter: Designing Conventions for Automated Negotiation among Computers.
Speech Acts: An Essay in the Philosophy of Language.
--TR
--CTR
Peter McBurney , Simon Parsons, Locutions for Argumentation in Agent Interaction Protocols, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.1240-1241, July 19-23, 2004, New York, New York
Guido Boella , Rossana Damiano , Joris Hulstijn , Leendert van der Torre, Role-based semantics for agent communication: embedding of the 'mental attitudes' and 'social commitments' semantics, Proceedings of the fifth international joint conference on Autonomous agents and multiagent systems, May 08-12, 2006, Hakodate, Japan
Paurobally , Jim Cunningham , Nicholas R. Jennings, A formal framework for agent interaction semantics, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Stefan Poslad , Patricia Charlton, Standardizing agent interoperability: the FIPA approach, Mutli-agents systems and applications, Springer-Verlag New York, Inc., New York, NY, 2001
Ulle Endriss , Nicolas Maudet, On the Communication Complexity of Multilateral Trading: Extended Report, Autonomous Agents and Multi-Agent Systems, v.11 n.1, p.91-107, July 2005
Peter McBurney , Simon Parsons, Posit spaces: a performative model of e-commerce, Proceedings of the second international joint conference on Autonomous agents and multiagent systems, July 14-18, 2003, Melbourne, Australia
Peter McBurney , Simon Parsons, A Denotational Semantics for Deliberation Dialogues, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.86-93, July 19-23, 2004, New York, New York
Yuh-Jong Hu, Some thoughts on agent trust and delegation, Proceedings of the fifth international conference on Autonomous agents, p.489-496, May 2001, Montreal, Quebec, Canada
Peter McBurney , Simon Parsons , Michael Wooldridge, Desiderata for agent argumentation protocols, Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 1, July 15-19, 2002, Bologna, Italy
Peter McBurney , Simon Parsons, Games That Agents Play: A Formal Framework for Dialoguesbetween Autonomous Agents, Journal of Logic, Language and Information, v.11 n.3, p.315-334, Summer 2002
Rogier M. Van Eijk , Frank S. De Boer , Wiebe Van Der Hoek , John-Jules Ch. Meyer, A Verification Framework for Agent Communication, Autonomous Agents and Multi-Agent Systems, v.6 n.2, p.185-219, March
Peter Mcburney , Rogier M. Van Eijk , Simon Parsons , Leila Amgoud, A Dialogue Game Protocol for Agent Purchase Negotiations, Autonomous Agents and Multi-Agent Systems, v.7 n.3, p.235-273, November
Brahim Chaib-Draa , Marc-Andr Labrie , Mathieu Bergeron , Philippe Pasquier, DIAGAL: An Agent Communication Language Based on Dialogue Games and Sustained by Social Commitments, Autonomous Agents and Multi-Agent Systems, v.13 n.1, p.61-95, July 2006
N. Maudet , B. Chaib-Draa, Commitment-based and dialogue-game-based protocols: new trends in agent communication languages, The Knowledge Engineering Review, v.17 n.2, p.157-179, June 2002 | semantics;conformance testing;verification;agent communication languages;standards |
608645 | Emergent Properties of a Market-based Digital Library with Strategic Agents. | The University of Michigan Digital Library (UMDL) is designed as an open system that allows third parties to build and integrate their own profit-seeking agents into the marketplace of information goods and services. The profit-seeking behavior of agents, however, risks inefficient allocation of goods and services, as agents take strategic stances that might backfire. While it would be good if we could impose mechanisms to remove incentives for strategic reasoning, this is not possible in the UMDL. Therefore, our approach has instead been to study whether encouraging the other extreme, making strategic reasoning ubiquitous, provides an answer. Toward this end, we have designed a strategy (called the p-strategy) that uses a stochastic model of the market to find the best offer price. We have then examined the collective behavior of p-strategy agents in the UMDL auction. Our experiments show that strategic thinking is not always beneficial and that the advantage of being strategic decreases with the arrival of equally strategic agents. Furthermore, a simpler strategy can be as effective when enough other agents use the p-strategy. Consequently, we expect the UMDL is likely to evolve to a point where some agents use simpler strategies and some use the p-strategy. | Introduction
When building a multiagent system, a designer (or a group
of designers) has to worry about two issues: mechanism
design (which dictates the way that agents interact), and
individual-agent design. Of course, these two design issues
are interdependent; a well-designed mechanism can
simplify the design of individual agents (and vice versa).
For instance, Vickery's auction mechanism (Vickery, 1961)
makes rational agents bid their true reservation prices such
that even self-interested agents, if they are rational, will
behave honestly (and not try in vain to outsmart other agents).
However, designing a good mechanism that exhibits
certain properties-which is called incentive engineering
or mechanism design (Rosenschein & Zlotkin, 1994)-is
difficult, especially for dynamic systems where the
participants and their interactions evolve over time. The
example of such dynamic systems that we use throughout
this paper is the University of Michigan Digital Library
(UMDL). In the UMDL project, we aim to provide an
infrastructure for rendering library services in a networked
information environment (Durfee et al., 1998). We have
designed the UMDL as a multiagent system, where agents
(representing users, collections, and services of the digital
library) sell and buy information goods and services
through auctions. While supporting flexibility and
scalability, the open multiagent architecture and market
infrastructure create dynamics (agents participating in an
auction change, matches between buyers and sellers vary,
and auctions themselves evolve), which adds additional
complexity to mechanism design.
As system architects, we strive for an efficient system.
Although the UMDL market allows the self-interested
agents to seek profits, we do not want strategic agents to
undermine the overall system performance (efficiency in
market), nor such agents to reap profits against other agents
(efficiency in allocation). That is, we want an incentive-compatible
mechanism that makes strategic reasoning
unnecessary (Zlotkin & Rosenschein, 1996; Wellman,
1993). Unfortunately, we do not have such a mechanism
yet for the UMDL system; we fully expect strategic agents
who try to take advantage of other agents to emerge in the
system.
So, does it mean the UMDL will become inefficient?
Can (and should) a UMDL agent spend much of its
computational power trying to outsmart other agents? What
happens if all the agents behave strategically? In this paper,
we answer the above questions, by studying the properties
of the UMDL with strategic agents. Instead of developing a
1 The Vickery auction mechanism may be inappropriate for certain
settings. For limitations of Vickery auctions, see the work by
Sandholm (Sandholm, 1996).
mechanism that prevents strategic thinking (which is hard),
we use a bottom-up approach: we design a strategy that the
UMDL agents may use and experiment with such strategic
agents to learn about the system properties. In particular,
we are interested in whether making strategic reasoning
ubiquitous (instead of preventing it) reduces its negative
effects.
In the following, we review some of the previous work
on multiagent system design issues and briefly describe the
target system, the UMDL service market society. We
explain a strategy called p-strategy and demonstrate its
advantages over other simpler strategies. Then, by
experimenting with multiple p-strategy agents, we
investigate some emergent properties of the UMDL
system.
Related Work
A multiagent system can be designed to exhibit certain
desirable properties. A single designer (or a group of
designers sharing common goals) can calibrate the system
to follow its goals (Briggs & Cook, 1995; Shoham &
Tennenholtz, 1995). For instance, agents and mechanism
can be built in certain ways (e.g., share information, be
honest, and so on) conducive to cooperation. Social laws
and conventions, however, are unsuitable for the UMDL
where agent designers do not share common goals and
system architects cannot impose limitations on the
individual, self-interested agents.
When designing multiagent systems with self-interested
agents, many researchers have turned to game theory to
lead systems to desired behaviors (e.g., discourage agents
from spending time and computation trying to take
advantage of others) (Sandholm & Lesser, 1995; Brafman &
Tennenholtz, 1996). For instance, Rosenschein and
Zlotkin have identified two building blocks of a multiagent
system, protocol and strategy (which are mechanism and
individual agent in our terminology, respectively), and
focused on designing a protocol which "...motivates agents
towards telling the truth" (Rosenschein & Zlotkin, 1994).
Unfortunately, game theory tends to be applied to highly
abstract and simplified settings. We do not use game theory
because the UMDL is too complex to model and because
we cannot assume rationality in agents.
Another approach to designing multiagent systems with
self-interested agents is to let manipulation from agents
happen and live with it. This may sound like bad
engineering, but preventing strategic behavior of individual
agents is unrealistic for many complex systems. Instead, by
studying how the individual, strategic agents impact the
overall system behavior, we gain insights on properties of
agent societies, such as characterizing the types of
environments and agent populations that foster social and
anti-social behavior (Vidal & Durfee, 1996; Hu &
Wellman, 1996). Our approach falls into this category, and
our goal is to explain how strategic agents affect the
UMDL in terms of market and allocation efficiency.
The UMDL Service Market Society
The UMDL Service Market Society (SMS) is a market-based
multiagent system where agents buy and sell goods
and services from each other. Instead of relying solely on
internally-designed agents, UMDL can attract outside
agents to provide new services, who are motivated by the
long-term profit they might accrue by participating in the
system. Since the UMDL is open, we treat all agents as
selfish.
Selling and buying of services are done through auction
markets, operated by auction agents. Figure 1 shows an
example of the UMDL auction, where User Interface
Agents (UIAs) want to find sources of information for
some topic (say, science) on behalf of certain kinds of
users (say, high school), and some Query Planning Agents
(QPAs) sell the services for finding such collections. Due
to space limitations, we ignore the issues of how to
describe what agents buy or sell, how to locate the right
auction to participate in, when and how to create an
auction, and so on. Interested readers may refer to (Durfee
et al., 1998).
At present, the UMDL uses the AuctionBot software
(Mullen & Wellman, 1996) parameterized for double
Figure 1: The UMDL auction for "QPA-service_high-school_science" (UIAs submit buy-offers and QPAs submit sell-offers to the auction).
auctions where agents post both sell prices and buy prices.
Compared to double auctions used in some economic
models (Friedman and Rust, 1993), the UMDL auction is a
continuously-clearing double auction, well-suited for
frequent, timely transactions needed in information
economies. In the UMDL auction, a transaction is
completed as soon as a buy offer and a sell offer cross
(without waiting for the remaining agents to submit their
bids), and the clearing price is determined for each
transaction (rather than being set at some medium price
among bids). In the UMDL, the auction agent continuously
matches the highest buyer to the lowest seller, given that
the buy price is greater than the sell price. The clearing
price is based on the seller's offer price (i.e., consumers
receive all the surplus).
Since buyers (sellers) with bid prices higher (lower)
than any standing sell (buy) offer get matched, the buyers
in the auction (if any) always have lower offer prices than
the sellers (if any). That is, standing offers ordered by
lowest to highest bid are always in a (bbb...bsss...s)
sequence, as shown in Figure 2. To manage the size of the
auction, we may limit the standing offers in the auction.
The auction used in our experiments limits the number of
buy and sell offers not to exceed five each. When an
additional buyer (seller) arrives, the seller (buyer) with the
highest (lowest) offer will be kicked out first.
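The clearing behaviour described above can be sketched as follows (an illustration only: tie-breaking and the eviction rule for over-full books are simplifying assumptions of ours, not UMDL internals).

import heapq

class ContinuousDoubleAuction:
    # Buy offers are kept highest-first, sell offers lowest-first; whenever the
    # best buy price exceeds the best sell price, a trade clears at the
    # seller's offer price. Standing offers are capped at `depth` per side.
    def __init__(self, depth=5):
        self.depth = depth
        self.buys = []    # max-heap via negated prices: (-price, buyer)
        self.sells = []   # min-heap: (price, seller)

    def offer_buy(self, buyer, price):
        heapq.heappush(self.buys, (-price, buyer))
        self._trim()
        return self._clear()

    def offer_sell(self, seller, price):
        heapq.heappush(self.sells, (price, seller))
        self._trim()
        return self._clear()

    def _trim(self):
        # Simplification: drop the least competitive offer on an over-full side.
        if len(self.buys) > self.depth:
            self.buys.remove(max(self.buys))      # lowest buy price
            heapq.heapify(self.buys)
        if len(self.sells) > self.depth:
            self.sells.remove(max(self.sells))    # highest sell price
            heapq.heapify(self.sells)

    def _clear(self):
        trades = []
        while self.buys and self.sells and -self.buys[0][0] > self.sells[0][0]:
            _, buyer = heapq.heappop(self.buys)
            sell_price, seller = heapq.heappop(self.sells)
            trades.append((buyer, seller, sell_price))   # clears at seller's price
        return trades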
A Strategy based on Stochastic Modeling
Agents placed in the UMDL SMS want to maximize their
profits by increasing the possible matches and the profit
per match. In this section, we present a bidding strategy
that agents may use to maximize their profits, and examine
the performance of the seller 2 with such a strategy against
other types of sellers.
The p-strategy
We have developed an agent strategy (called p-strategy)
that finds the best offer price for the multiagent auction
(Park, Durfee & Birmingham, 1996). The four-step p-
strategy is as follows. First, the agent models the auction
process using a Markov chain (MC) with two absorbing
states (success and failure). Secondly, it computes the
2 Since sellers have a somewhat stronger incentive to be strategic
in the UMDL, as they set the clearing price and this affects their
tradeoff between the probability of trading and the profit
earned, we use the p-strategy seller (instead of buyer).
transition probabilities between the MC states. Thirdly, it
computes the probabilities and the payoffs of success and
failure. Finally, it finds the best offer price to maximize its
expected utility.
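The following Python sketch shows, under simplifying assumptions of ours (a failed sale pays nothing, and expected utility is P(success) x (price - cost)), how steps two to four might be realised: absorption probabilities of the chain are obtained by iterative evaluation, and candidate prices are scanned for the one with the highest expected utility. build_chain is a hypothetical hook that would encode the auction-state model and bid distributions for a given offer price.

def absorption_probability(transitions, start, absorbing_state, sweeps=500):
    # P(reaching `absorbing_state`) from `start`. `transitions[s]` is a list of
    # (next_state, probability) pairs; absorbing states map to an empty list.
    # The two absorbing states are assumed to be labelled 'success'/'failure'.
    prob = {s: 0.0 for s in transitions}
    prob[absorbing_state] = 1.0
    for _ in range(sweeps):                       # iterative evaluation sweep
        for s, outs in transitions.items():
            if outs:
                prob[s] = sum(p * prob[t] for t, p in outs)
    return prob[start]

def best_offer_price(cost, candidate_prices, build_chain):
    # Pick the candidate price maximising expected profit under the model.
    best_price, best_eu = None, float('-inf')
    for price in candidate_prices:
        transitions, start = build_chain(price)   # hypothetical model builder
        p_success = absorption_probability(transitions, start, 'success')
        expected_utility = p_success * (price - cost)
        if expected_utility > best_eu:
            best_price, best_eu = price, expected_utility
    return best_price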
The main idea behind the p-strategy is to capture the
factors which influence the expected utility in the MC
model of the auction process. For instance, the seller is
likely to raise its offer price when there are many buyers or
when it expects more buyers to come. The MC model takes
those factors into account in the MC states and the
transition probabilities. The number of buyers and sellers at
the auction, the arrival rates of future buyers and sellers,
and the distribution of buy and sell prices are among the
identified factors.
Each state in the MC model represents the status of the
auction. The (bbss*) state, for example, represents the case
where there are 2 standing buy offers and 2 standing sell
offers and the sell offer of the p-strategy agent (represented
as s*) is higher than the other seller's offer. If we assume
that offers arrive at most one at a time, the auction can go
to any of the following states from the (bbss*) state (see
Figure
3).
. (bbss*): No offer arrives during the clearing interval.
. (bbs*): A buy offer arrives, and it is matched with the
lowest seller.
. (bbbss*): Because of no match, a new buy offer
becomes a standing offer.
. (bss*): A sell offer arrives, and it is matched with the
highest buyer.
. (bbss*s): Because of no match, a new sell offer
becomes the highest standing offer.
. (bbsss*): A new sell offer becomes a standing offer,
but the p-strategy agent's offer is still the highest.
Figure 3: Transitions from the (bbss*) state (s* denotes the p-strategy agent's own offer).
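The qualitative transitions listed above can be enumerated mechanically; the following toy Python function (our own illustration, which ignores prices entirely and so expands each arrival into every qualitatively possible outcome) reproduces the six successors of (bbss*).

def successors(state):
    # State strings list standing buys ('b') then standing sells ('s'),
    # lowest to highest, with '*' marking the p-strategy seller's own offer.
    out = {state}                                  # nothing arrives this interval
    if 's' in state:
        out.add(state.replace('s', '', 1))         # buy arrives, matches lowest sell
    out.add('b' + state)                           # buy arrives, becomes standing
    if 'b' in state:
        out.add(state[1:])                         # sell arrives, matches a buyer
    out.add(state + 's')                           # sell arrives above the agent's offer
    i = state.index('s') if 's' in state else len(state)
    out.add(state[:i] + 's' + state[i:])           # sell arrives below the agent's offer
    return out

# successors('bbss*') == {'bbss*', 'bbs*', 'bbbss*', 'bss*', 'bbss*s', 'bbsss*'}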
Figure 2: Standing offers in the auction (for example, when a new buy offer b2 crosses the standing sell offer s1, the two are matched at s1's price).
Figure
4 depicts the MC model for the UMDL auction
with a maximum size of five buyers and sellers each 3 . In
this paper, we skip how to define the exact transition
probabilities between the MC states (step 2), how to
compute the probabilities and the payoffs of success and
failure (step 3), and how to find the best payment (step 4).
Readers may refer to (Park, Durfee & Birmingham, 1996).
Using the MC model and its transition probabilities, the
p-strategy agent is able to capture various factors that
influence the utility value and tradeoffs associated with the
factors. Figure 5 shows an example of tradeoffs between
the number of buyers and sellers. In general, the seller
raises its offer price when there are more buyers (to
increase the profit of a possible match). When the number
of sellers is five (at the right end of the graph), however,
the p-strategy seller bids a lower price when there is one
buyer than when there is no buyer. That is, the p-strategy
seller lowers its offer to increase the probability of a match
(instead of increasing the profit of a match). Offering a
higher price in this case would have served to price it out
of the auction when it might otherwise have been able to
trade profitably.
Intuitively, agents with complete models of other agents
will always do better, but without repeated encounters
complete models are unattainable. In the UMDL, an agent
in its lifetime meets many different agents, and as a result
its model of other agents is incorrect, imprecise, and
incomplete. Instead of modeling individual agents, the p-
3 The number of MC states increases with the size of the auction.
When the maximum number of standing offers is limited to m
buyers and n sellers, the number of MC states is
(m+1)((n+1)n)/2+2. Of course, one may shrink the size of the
MC model, while sacrificing the accuracy of the model.
strategy uses the model of the auction process, which is
effective, especially for very dynamic systems.
The p-strategy is an optimal strategy, provided that the
MC model represents the auction process correctly and that
all the agents have the same level of knowledge.
Obviously, the p-strategy is not optimal when its model is
incorrect (i.e., incorrect knowledge), and the p-strategy
agent can be exploited by competing agents who know
about the p-strategy (i.e., higher-level knowledge).
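As a quick check of the state-count formula quoted in footnote 3 above (our own arithmetic, not additional analysis):

def mc_states(m, n):
    # (m+1)*((n+1)*n)/2 + 2 states when offers are capped at m buyers, n sellers.
    return (m + 1) * ((n + 1) * n) // 2 + 2

print(mc_states(5, 5))   # 92 states for the five-buyer, five-seller auction above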
Advantages of p-strategy
In this section, we demonstrate the advantages of the p-
strategy in the UMDL auction, comparing the profit of the
p-strategy seller (p-QPA) with three different types of
sellers. They are:
Figure 5: Tradeoffs between the number of buyers and sellers at the auction (offer price plotted against the number of buyers and the number of sellers).
Figure 4: The MC model for the UMDL auction.
. A seller who bids its cost plus some fixed markup (FM-QPA).
. A seller who bids its cost plus some random markup (ZI-QPA).
. A seller who bids the clearing price of the next transaction (CP-QPA).
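Illustrative sketches of these three baseline bidding rules follow (our own; cost would come from the load-based cost function described below, and quote_clearing_price stands for the auction's price quote and is an assumed interface, not an actual UMDL call).

import random

def fm_offer(cost, markup=7):
    # FM-QPA: cost plus a fixed markup.
    return cost + markup

def zi_offer(cost, max_markup=20):
    # ZI-QPA: cost plus a random markup (subject only to a no-loss constraint);
    # the markup range is an assumption of this sketch.
    return cost + random.uniform(0, max_markup)

def cp_offer(cost, quote_clearing_price):
    # CP-QPA: bid the quoted next clearing price when it covers cost.
    quote = quote_clearing_price()
    return quote if quote > cost else None        # otherwise abstain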
The experimental settings are as follows.
. Auction
The auction clears every 3 seconds. The value of the
clearing interval does not affect the results from our
experiments, provided that on average at most one offer
arrives at the auction each interval.
. Buyers
A single agent simulates multiple buyers by submitting
multiple bids. In our experiments, every 6 seconds the
buyer submits its bid with a probability of 0.8. By adjusting
the offer interval and the offer rate of the single buyer, we
can change the arrival rate of buy offers to the auction 5 .
The buyer offers valuations drawn randomly from a
uniform distribution between 10 and 30.
. Sellers
For each experiment, we compare the profits of two
sellers: p-QPA and the opponent (either FM-QPA, ZI-
QPA, or CP-QPA). Both sellers submit their bids every 24
seconds on average. In addition, similar to the buyer case, a
single agent simulates all the other sellers at the auction. Its
offer interval and offer rate are set at 12 and 0.8,
respectively.
The costs of all the sellers are based on their loads,
which are computed from the message traffic and the
current workload. That is, cost = a × (number of messages
per minute) + b × (number of matches per minute).
The cost function represents economies of scale. Since
the workload from matches should be higher than that from
4 ZI stands for zero-intelligence. The ZI-QPA is a "budget-
constrained zero-intelligence trader" who generates random
bids subject to a no-loss constraint (Gode & Sunder, 1993).
5 Arrival-rate-of-buy-offers ≈ (offer-rate / offer-interval) ×
clearing-interval. The arrival rate varies, however, since the
agent is allowed to submit a new offer immediately after a
match even when it hasn't reached the next offer interval.
communication, we set a to 1, and b to 5 for our
experiments.
In the first set of experiments, we have compared the
profits of the p-QPA and the FM-QPA. When competing
with FM-QPAs with various markups, the p-QPA always
gets a higher profit. This is not surprising, since the p-QPA
is able to use extra information about the auction. Figure
6-(a) shows the accumulated profits of the p-QPA and the
FM-QPA who bids its cost plus 7 as its offer (i.e., a fixed markup of 7).
Figure 6-(b) shows the profits of the p-QPA and the ZI-
QPA. The ZI-QPA works poorly against the p-QPA, which
indicates the randomization strategy does not work. The
ZI-QPA can be thought of as an extremely naive strategy
that fails to take advantage of a given situation (Gode &
Sunder, 1993).
In the final experiment, we have compared the p-QPA
with the CP-QPA. The CP-QPA receives a price quote
from the auction-the clearing price were the auction to
clear at the time of the quote-and submits it as its offer as
long as it is higher than its cost. Note that the clearing price
is a hypothetical one, so it will change when new offer(s)
arrive during the time between the clearing-price quote and
the CP-QPA's offer.
As shown in Figure 6-(c), the p-QPA usually gets a
higher profit when competing with the CP-QPA. Since the
CP-QPA gets more matches than the p-QPA (but its profit
per match is smaller) on average, however, the CP-QPA
works better when getting more matches does not impact
its cost much (e.g., when b is 0). Bidding the next clearing
price may seem like a good heuristic, but the profit of the
CP-QPA decreases rapidly when there is another CP-QPA,
since it no longer gets as many matches as when there is a
single CP-QPA.
The p-strategy works well in the UMDL auction due to
its dynamics. No agent can have a complete, deterministic
view about the current and future status of the auction, and
naturally, an agent strategy should be able to take into
account the dynamics and the resulting uncertainties. In our
experiments, the p-strategy which models the auction
process stochastically receives higher profit than the other strategies.
Figure 6: Comparison of the profits of the sellers: (a) FM-QPA and p-QPA, (b) ZI-QPA and p-QPA, (c) CP-QPA and p-QPA.
In our previous paper (Park, Durfee, and Birmingham,
1996), we have shown the advantages of the p-strategy in
the domain of multiagent contracts with possible retraction.
In this paper, we have demonstrated that the p-strategy
works well in the new domain. Note that we have used the
p-strategy seller (compared to the p-strategy buyer in the
previous paper).
Collective Behavior of p-strategy Agents in the
UMDL Auction
Given that the p-strategy is effective in the UMDL auction
(from the previous section), nothing prohibits any self-interested
agent from adopting the p-strategy. We expect
many p-strategy agents to coexist in the UMDL, and thus
are interested in the collective behavior of such agents. In
this section, therefore, we investigate (1) how the absolute
and relative performance of a p-strategy agent changes
against other p-strategy agents, and (2) how the UMDL is
affected by multiple p-strategy agents.
Experimental Setting
Figure
7 shows six experimental settings with 7 buyers and
7 sellers. Although we fix the number of buyers and sellers
to seven each, by changing the offer rates and the offer
intervals, we can simulate a large number of agents and
different levels of activities in the auction. By increasing
the offer rates or decreasing the offer intervals, for
instance, we can simulate a more dynamic auction.
In our experiments, we deliberately set supply to be
higher than demand to emphasize competition among
sellers; each buyer submits its bid every seconds with a
probability of 0.5, while the offer interval and the offer rate
of each seller are set to 30 seconds and a probability of 0.7,
respectively.
The buyers bid their true valuations, while the sellers
bid their sell prices based on their strategies. In Session 1,
all seven sellers bid their true costs. Since traders honestly
report their reservation prices, Session 1 gets the most
matches and serves as a benchmark for comparing market
efficiency. From Session 2 through Session 6, we introduce
more p-strategy agents into the auction.
Efficiency of the system is measured in two ways. First,
we measure the efficiency of allocation, by comparing the
p-strategy agent's absolute and relative profits. Second, we
measure the efficiency of the market, using the number of
matches made and the total profit generated.
Experimental Results
Although we have shown that the p-strategy agent has an
upper hand over other-strategy agents, this observation
may not hold in the presence of other p-strategy agents. To
test this, we compare the profits of Seller 7 (p-strategy
agent) across Sessions 2 to 6. As shown in Figure 8, the
marginal profit of the p-strategy (smart) agent decreases as
the number of p-strategy (equally smart) agents increases.
Figure 8: Profits of p-QPA.
Now that we have established the fact that the profit of
the p-strategy agent decreases as more agents use p-
strategy, another question arises. How will a simpler
strategy agent perform in the presence of multiple p-
strategy agents? In Figure 9, by replacing Seller 1 with the
fixed-markup QPA (with markup = 5), we measure the
relative performance of the FM-QPA and the p-QPA. The
FM-QPA's profit is generally less than that of the p-QPA,
but the difference decreases with the increase of p-strategy
agents 6 . That is, the disadvantage of being less smart
decreases as the number of smart agents increases.
The result indicates that an agent may want to switch
between using p-strategy and using a simpler strategy
depending on what the other agents are doing. By
dynamically switching to a simpler strategy, an agent can
6 The FM-QPA's profit exceeds that of the p-QPA in Session 5 (in
Figure
9), but at present we cannot conclude whether this is
statistically significant.
Figure 7: Experimental setting. In Session (1) all seven sellers bid competitively (strategy C); Sessions (2)-(6) replace progressively more sellers with p-strategy bidders.
achieve a similar profit (to that of using the p-strategy)
while exerting less effort (time and computation) on
computing its offers.
Figure 9: Profits of FM-QPA and p-QPA.
In terms of market efficiency, we first measure the
number of total transactions made at the auction, as shown
in
Figure
10. When the number of p-strategy agents
increases, the number of matches decreases, since the p-
strategy agents usually get fewer matches but more profit
per match.
Figure 10: Efficiency measured by the number of transactions.
In addition, we measure the market efficiency using the
total profit generated from buyers and sellers (see Figure
11). The total profit eventually decreases with increasing
numbers of p-strategy agents, as the market becomes
inefficient due to strategic misrepresentation of p-strategy
agents (and therefore missed opportunities of matches).
The total profit, however, does not decrease as sharply
as one might expect due to the inherent inefficiency of the
UMDL auction mechanism (it in fact increases slightly up
to Session 4). If an auction waits for the bids from all
agents and decides on the most efficient clearing,
inefficiency due to the auction mechanism may not occur.
However, this kind of periodic, clearing-house style double
auction is unrealistic for the UMDL system where the
participants of the auction change and matches should be
made quickly.
We conjecture that having strategic sellers poses
interesting tradeoffs between strategic inefficiency and
surplus extraction. By misrepresenting their true costs, the
strategic sellers miss out on possible transactions. By
anticipating the future arrival of buyers, on the other hand,
they are able to seize more surplus.
Figure 11: Efficiency measured by the total profit of buyers and sellers.
Lessons Learned
A conventional way of designing a system that exhibits
certain properties is to engineer it. Incentive engineering,
however, is unsuccessful in developing the UMDL system
because of its complexity and dynamics. Instead, by
making the p-strategy available to the agents, we have
studied the effects of strategic agents in the UMDL system.
We summarize the following observations. First,
although a self-interested agent in the UMDL system has
the capability of complex strategic reasoning, our
experiments show that such reasoning is not always
beneficial. As shown in Figure 8, the advantage of being
smart decreases with the arrival of equally smart agents.
Second, if all the other agents use the p-strategy, an
agent with a simple strategy (e.g., fixed markup) can do
just as well, while incurring less overhead to gather
information and compute bids. An agent may want to
switch between a complex strategy and a simple one
depending on the behavior of other agents. As the overhead
of complex reasoning becomes more costly, an adaptive
strategy that dynamically decides on which strategy to use
will be more desirable.
Third, we expect the UMDL is likely to evolve to a
point where some agents use simpler strategies while some
use more complex strategies that use more knowledge
(such as the p-strategy). It follows from the above
observation that if enough other agents use complex
reasoning, an agent can achieve additional profit even
when it continues using a simple strategy.
Finally, the market efficiency of the UMDL
(represented by the total profit) will not decrease as sharply
as one might expect. As shown in Figure 11, having
multiple p-strategy agents increases the market efficiency
slightly up to a certain point. Moreover, the profit-seeking
behavior of self-interested agents will keep the UMDL
agent population mixed with agents of various strategies.
Even though the market efficiency eventually decreases
with the increase in the number of p-strategy agents,
because of its mixed agent population, the UMDL will not
suffer market inefficiency of the worst case.
Conclusion
In this paper, we have used the p-strategy to examine the
collective behavior of strategic agents in the UMDL
system. In particular, we have examined market and
allocation efficiency with varying numbers of p-strategy
agents.
The findings are useful to both system designers and
agent designers. It is reassuring from the system designers'
viewpoints that the market efficiency of the UMDL does
not decrease as sharply as one might expect and that the
worst-case market inefficiency is less likely to be realized
(since even though self-interested agents have the
capability of complex strategic reasoning, not all of them
will behave so). At present, we cannot determine the exact
demographics of the agent population for the best market
efficiency, but we are continuing experiments on many
different types of agent populations to get a better
understanding of the overall system behavior.
From the allocation-efficiency perspective, on the other
hand, agent designers learn that using the p-strategy does
not always pay off and that a simple strategy is sometimes
as effective. We are currently developing an adaptive p-
strategy to dynamically determine when to use the p-
strategy and when not to. An adaptive p-strategy will be
beneficial not only to a self-interested agent but also to the
overall system efficiency.
Acknowledgments
This research has been funded in part by the joint
NSF/DARPA/NASA Digital Libraries Initiative under
CERA IRI-9411287. The first author is partially supported
by the Horace H. Rackham Barbour scholarship.
--R
On Partially Controlled Multi-Agent Systems
Flexible Social Laws.
The Dynamics of the UMDL Service Market Society.
The Double Auction Market.
Lower Bounds for Efficiency of Surplus Extraction in Double Auctions.
Rules of Encounter: Designing Conventions for Automated Negotiation among Computers: The MIT Press.
Advantages of Strategic Thinking in Multiagent Contracts
Limitations of the Vickery Auction in Computational Multiagent Systems.
Advantages of a Leveled Commitment Contracting Protocol.
Social Laws for Artificial Agent Societies: Off-line Design
The Impact of Nested Agent Models in an Information Economy.
A Market-Oriented Programming Environment and its Application to Distributed Multicommodity Flow Problems
Mechanism Design for Automated Negotiation and its Application to Task Oriented Domains.
--TR
Rules of encounter
On social laws for artificial agent societies
Mechanism design for automated negotiation, and its application to task oriented domains
Online learning about other agents in a dynamic multiagent system
Price-war dynamics in a free-market economy of software agents
Conjectural Equilibrium in Multiagent Learning
The Dynamics of the UMDL Service Market Society
Dynamics of an Information-Filtering Economy
Emergent Properties of a Market-based Digital Library with Strategic Agents
--CTR
Samuel P. M. Choi , Jiming Liu, A dynamic mechanism for time-constrained trading, Proceedings of the fifth international conference on Autonomous agents, p.568-575, May 2001, Montreal, Quebec, Canada
Sascha Ossowski , Andrea Omicini, Coordination knowledge engineering, The Knowledge Engineering Review, v.17 n.4, p.309-316, December 2002
Hiranmay Ghosh , Santanu Chaudhury, Distributed and Reactive Query Planning in R-MAGIC: An Agent-Based Multimedia Retrieval System, IEEE Transactions on Knowledge and Data Engineering, v.16 n.9, p.1082-1095, September 2004
Ricardo Buttner, A Classification Structure for Automated Negotiations, Proceedings of the 2006 IEEE/WIC/ACM international conference on Web Intelligence and Intelligent Agent Technology, p.523-530, December 18-22, 2006 | multi-agent systems;digital libraries;emergent behavior;strategic reasoning |
608654 | The Gaia Methodology for Agent-Oriented Analysis and Design. | This article presents Gaia: a methodology for agent-oriented analysis and design. The Gaia methodology is both general, in that it is applicable to a wide range of multi-agent systems, and comprehensive, in that it deals with both the macro-level (societal) and the micro-level (agent) aspects of systems. Gaia is founded on the view of a multi-agent system as a computational organisation consisting of various interacting roles. We illustrate Gaia through a case study (an agent-based business process management system). | Introduction
Progress in software engineering over the past two decades has been made through the
development of increasingly powerful and natural high-level abstractions with which to
model and develop complex systems. Procedural abstraction, abstract data types, and, most
recently, objects and components are all examples of such abstractions. It is our belief that
agents represent a similar advance in abstraction: they may be used by software developers
to more naturally understand, model, and develop an important class of complex distributed
systems.
If agents are to realise their potential as a software engineering paradigm, then it is necessary
to develop software engineering techniques that are specifically tailored to them.
Existing software development techniques (for example, object-oriented analysis and design
[2, 6]) are unsuitable for this task. There is a fundamental mismatch between the
concepts used by object-oriented developers (and indeed, by other mainstream software engineering
paradigms) and the agent-oriented view [32, 34]. In particular, extant approaches
fail to adequately capture an agent's flexible, autonomous problem-solving behaviour, the
richness of an agent's interactions, and the complexity of an agent system's organisational
structures. For these reasons, this article introduces a methodology called Gaia, which has
been specifically tailored to the analysis and design of agent-based systems 1 .
The remainder of this article is structured as follows. We begin, in the following sub-
section, by discussing the characteristics of applications for which we believe Gaia is ap-
propriate. Section 2 gives an overview of the main concepts used in Gaia. Agent-based
analysis is discussed in section 3, and design in section 4. The use of Gaia is illustrated
by means of a case study in section 5, where we show how it was applied to the design of
a real-world agent-based system for business process management [20]. Related work is
discussed in section 6, and some conclusions are presented in section 7.
Domain Characteristics
Before proceeding, it is worth commenting on the scope of our work, and in particular, on
the characteristics of domains for which we believe Gaia is appropriate. It is intended that
Gaia be appropriate for the development of systems such as ADEPT [20] and ARCHON [19].
These are large-scale real-world applications, with the following main characteristics:
• Agents are coarse-grained computational systems, each making use of significant computational resources (think of each agent as having the resources of a UNIX process).
• It is assumed that the goal is to obtain a system that maximises some global quality measure, but which may be sub-optimal from the point of view of the system components. Gaia is not intended for systems that admit the possibility of true conflict 2 .
• Agents are heterogeneous, in that different agents may be implemented using different programming languages, architectures, and techniques. We make no assumptions about the delivery platform.
• The organisation structure of the system is static, in that inter-agent relationships do not change at run-time.
• The abilities of agents and the services they provide are static, in that they do not change at run-time.
• The overall system contains a comparatively small number of different agent types (less than 100).
Gaia deals with both the macro (societal) level and the micro (agent) level aspects of de-
sign. It represents an advance over previous agent-oriented methodologies in that it is
neutral with respect to both the target domain and the agent architecture (see section 6 for
a more detailed comparison).
2. A Conceptual Framework
Gaia is intended to allow an analyst to go systematically from a statement of requirements
to a design that is sufficiently detailed that it can be implemented directly. Note that we
view the requirements capture phase as being independent of the paradigm used for analysis
and design. In applying Gaia, the analyst moves from abstract to increasingly concrete
concepts. Each successive move introduces greater implementation bias, and shrinks the
space of possible systems that could be implemented to satisfy the original requirements
statement. (See [21, pp216-222] for a discussion of implementation bias.) Analysis and
design can be thought of as a process of developing increasingly detailed models of the
system to be constructed. The main models used in Gaia are summarised in Figure 1.

Figure 1. Relationships between Gaia's models: the requirements statement feeds the analysis models (the roles model and the interactions model), which in turn feed the design models (the agent model, the services model, and the acquaintance model).
Gaia borrows some terminology and notation from object-oriented analysis and design
(specifically, FUSION [6]). However, it is not simply a naive attempt to apply such methods
to agent-oriented development. Rather, it provides an agent-specific set of concepts
through which a software engineer can understand and model a complex system. In partic-
ular, Gaia encourages a developer to think of building agent-based systems as a process of
organisational design.
The main Gaian concepts can be divided into two categories: abstract and concrete;
abstract and concrete concepts are summarised in Table 1. Abstract entities are those used
during analysis to conceptualise the system, but which do not necessarily have any direct
realisation within the system. Concrete entities, in contrast, are used within the design
process, and will typically have direct counterparts in the run-time system.
3. Analysis
The objective of the analysis stage is to develop an understanding of the system and its
structure (without reference to any implementation detail). In our case, this understanding
is captured in the system's organisation. We view an organisation as a collection of
roles, that stand in certain relationships to one another, and that take part in systematic,
institutionalised patterns of interactions with other roles - see Figure 2.
The most abstract entity in our concept hierarchy is the system. Although the term "sys-
tem" is used in its standard sense, it also has a related meaning when talking about an
Table 1. Abstract and concrete concepts in Gaia
Abstract concepts: Roles, Permissions, Responsibilities, Protocols, Activities, Liveness properties, Safety properties
Concrete concepts: Agent Types, Services, Acquaintances
agent-based system, to mean "society" or "organisation". That is, we think of an agent-based
system as an artificial society or organisation.
The idea of a system as a society is useful when thinking about the next level in the concept
hierarchy: roles. It may seem strange to think of a computer system as being defined
by a set of roles, but the idea is quite natural when adopting an organisational view of the
world. Consider a human organisation such as a typical company. The company has roles
such as "president", "vice president", and so on. Note that in a concrete realisation of a
company, these roles will be instantiated with actual individuals: there will be an individual
who takes on the role of president, an individual who takes on the role of vice president,
and so on. However, the instantiation is not necessarily static. Throughout the company's
lifetime, many individuals may take on the role of company president, for example. Also,
there is not necessarily a one-to-one mapping between roles and individuals. It is not unusual
(particularly in small or informally defined organisations) for one individual to take
on many roles. For example, a single individual might take on the role of "tea maker",
"mail fetcher", and so on. Conversely, there may be many individuals that take on a single
role, e.g., "salesman" 3 .
A role is defined by four attributes: responsibilities, permissions, activities, and proto-
cols. Responsibilities determine functionality and, as such, are perhaps the key attribute
associated with a role. An example responsibility associated with the role of company
president might be calling the shareholders meeting every year. Responsibilities are divided
into two types: liveness properties and safety properties [27] 4 . Liveness properties
intuitively state that "something good happens". They describe those states of affairs that
an agent must bring about, given certain environmental conditions. In contrast, safety properties
are invariants. Intuitively, a safety property states that "nothing bad happens" (i.e.,
that an acceptable state of affairs is maintained across all states of execution). An example
might be "ensure the reactor temperature always remains in the range 0-100".
In order to realise responsibilities, a role has a set of permissions. Permissions are the
"rights" associated with a role. The permissions of a role thus identify the resources that
are available to that role in order to realise its responsibilities. In the kinds of system that
we have typically modelled, permissions tend to be information resources. For example,
a role might have associated with it the ability to read a particular item of information,
Figure 2. Analysis Concepts: the system comprises roles and their interactions; each role is characterised by permissions and responsibilities, the latter divided into liveness properties and safety properties.
or to modify another piece of information. A role can also have the ability to generate
information.
The activities of a role are computations associated with the role that may be carried out
by the agent without interacting with other agents. Activities are thus "private" actions, in
the sense of [28].
Finally, a role is also identified with a number of protocols, which define the way that it
can interact with other roles. For example, a "seller" role might have the protocols "Dutch
auction" and "English auction" associated with it; the Contract Net Protocol is associated
with the roles "manager" and "contractor" [30].
Thus, the organisation model in Gaia is comprised of two further models: the roles model
(section 3.1) and the interaction model (section 3.2).
3.1. The Roles Model
The roles model identifies the key roles in the system. Here a role can be viewed as an
abstract description of an entity's expected function. In other terms, a role is more or less
identical to the notion of an office in the sense that "prime minister", "attorney general
of the United States", or "secretary of state for Education" are all offices. Such roles (or
offices) are characterised by two types of attribute:
• The permissions/rights associated with the role.
A role will have associated with it certain permissions, relating to the type and the
amount of resources that can be exploited when carrying out the role. In our case,
these aspects are captured in an attribute known as the role's permissions.
• The responsibilities of the role.
A role is created in order to do something. That is, a role has a certain functionality.
This functionality is represented by an attribute known as the role's responsibilities.
Permissions The permissions associated with a role have two aspects:
• they identify the resources that can legitimately be used to carry out the role - intuitively, they say what can be spent while carrying out the role;
• they state the resource limits within which the role executor must operate - intuitively, they say what can't be spent while carrying out the role.
In general, permissions can relate to any kind of resource. In a human organisation, for
example, a role might be given a monetary budget, a certain amount of person effort, and
so on. However, in Gaia, we think of resources as relating only to the information or
knowledge the agent has. That is, in order to carry out a role, an agent will typically be
able to access certain information. Some roles might generate information; others may
need to access a piece of information but not modify it, while yet others may need to
modify the information. We recognise that a richer model of resources is required for the
future, although for the moment, we restrict our attention simply to information.
Gaia makes use of a formal notation for expressing permissions that is based on the
FUSION notation for operation schemata [6, pp26-31]. To introduce our concepts we will
use the example of a COFFEEFILLER role (the purpose of this role is to ensure that a coffee
pot is kept full of coffee for a group of workers). The following is a simple illustration of
the permissions associated with the role COFFEEFILLER:
reads coffeeStatus // full or empty
changes coffeeStock // stock level of coffee
This specification defines two permissions for COFFEEFILLER: it says that the agent carrying
out the role has permission to access the value coffeeStatus, and has permission to both
read and modify the value coffeeStock. There is also a third type of permission, generates,
which indicates that the role is the producer of a resource (not shown in the example).
Note that these permissions relate to knowledge that the agent has. That is, coffeeStatus is
a representation on the part of the agent of some value in the real world.
Some roles are parameterised by certain values. For example, we can generalise the
COFFEEFILLER role by parameterising it with the coffee machine that is to be kept refilled.
This is specified in a permissions definition by the supplied keyword, as follows:
reads supplied coffeeMaker // name of coffee maker
coffeeStatus // full or empty
changes coffeeStock // stock level of coffee
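To make the permissions notation concrete, the following is a minimal Python sketch (an editorial illustration, not part of Gaia itself) of how a role's permissions might be recorded; the Permissions class and its field names are our own assumptions rather than Gaia notation.

from dataclasses import dataclass, field
from typing import Set

@dataclass
class Permissions:
    # Resources a role may use: read-only, read/write, and generated items.
    # Names listed in 'supplied' are parameters bound when the role is instantiated.
    reads: Set[str] = field(default_factory=set)
    changes: Set[str] = field(default_factory=set)
    generates: Set[str] = field(default_factory=set)
    supplied: Set[str] = field(default_factory=set)

# The parameterised COFFEEFILLER permissions shown above.
coffee_filler_permissions = Permissions(
    reads={"coffeeMaker", "coffeeStatus"},
    changes={"coffeeStock"},
    supplied={"coffeeMaker"},   # coffeeMaker is a supplied parameter
)

print(coffee_filler_permissions)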
Table 2. Operators for liveness expressions
Operator   Interpretation
x: y       x followed by y
x | y      x or y occurs
x*         x occurs 0 or more times
x+         x occurs 1 or more times
x^ω        x occurs infinitely often
[x]        x is optional
x || y     x and y interleaved
Responsibilities The functionality of a role is defined by its responsibilities. These responsibilities
can be divided into two categories: liveness and safety responsibilities.
Liveness responsibilities are those that, intuitively, state that "something good happens".
Liveness responsibilities are so called because they tend to say that "something will be
done", and hence that the agent carrying out the role is still alive. Liveness responsibilities
tend to follow certain patterns. For example, the guaranteed response type of achievement
goal has the form "a request is always followed by a response". The infinite repetition
achievement goal has the form "x will happen infinitely often". Note that these types of
requirements have been widely studied in the software engineering literature, where they
have proven to be necessary for capturing properties of reactive systems [27].
In order to illustrate the various concepts associated with roles, we will continue with
our running example of the COFFEEFILLER role. Examples of liveness responsibilities for
the COFFEEFILLER role might be:
• whenever the coffee pot is empty, fill it up;
• whenever fresh coffee is brewed, make sure the workers know about it.
In Gaia, liveness properties are specified via a liveness expression, which defines the "life-
cycle" of the role. Liveness expressions are similar to the life-cycle expression of FUSION
[6], which are in turn essentially regular expressions. Our liveness expressions have
an additional operator, "ω", for infinite repetition (see Table 2 for more details). They
thus resemble ω-regular expressions, which are known to be suitable for representing the
properties of infinite computations [32].
Liveness expressions define the potential execution trajectories through the various activities
and interactions (i.e., over the protocols) associated with the role. The general form
of a liveness expression is:
where ROLENAME is the name of the role whose liveness properties are being defined,
and expression is the liveness expression defining the liveness properties of ROLENAME.
The atomic components of a liveness expression are either activities or protocols. An
activity is somewhat like a method in object-oriented terms, or a procedure in a PASCAL-like
language. It corresponds to a unit of action that the agent may perform, which does
not involve interaction with any other agent. Protocols, on the other hand, are activities
that do require interaction with other agents. To give the reader some visual clues, we
write protocol names in a sans serif font (as in xxx), and use a similar font, underlined, for
activity names (as in yyy).
To illustrate liveness expressions, consider again the above-mentioned responsibilities of
the COFFEEFILLER role:
COFFEEFILLER = (Fill: InformWorkers: CheckStock: AwaitEmpty)^ω
This expression says that COFFEEFILLER consists of executing the protocol Fill, followed
by the protocol InformWorkers, followed by the activity CheckStock and the protocol AwaitEmpty.
The sequential execution of these protocols and activities is then repeated infinitely often.
For the moment, we shall treat the protocols simply as labels for interactions and shall not
worry about how they are actually defined (this matter will be discussed in section 3.2).
Complex liveness expressions can be made easier to read by structuring them. A simple
example illustrates how this is done:
COFFEEFILLER = (Fill: InformAndCheck)^ω
InformAndCheck = InformWorkers: CheckStock: AwaitEmpty
The semantics of such definitions are straightforward textual substitution.
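As an illustration of how liveness expressions can be manipulated mechanically, the following sketch (our own, under stated assumptions) encodes the sequence and infinite-repetition operators of Table 2 as a small Python expression tree and unrolls the COFFEEFILLER expression a fixed number of times. The class names Seq and Repeat and the unroll helper are illustrative choices, not part of Gaia.

from dataclasses import dataclass
from typing import Tuple, Union, List

Expr = Union[str, "Seq", "Repeat"]

@dataclass(frozen=True)
class Seq:            # x: y -- x followed by y
    parts: Tuple[Expr, ...]

@dataclass(frozen=True)
class Repeat:         # x^ω -- x occurs infinitely often (unrolled finitely here)
    body: Expr

def unroll(expr: Expr, copies: int = 2) -> List[str]:
    # Expand an expression into a flat list of protocol/activity names,
    # approximating infinite repetition by a fixed number of copies.
    if isinstance(expr, str):
        return [expr]
    if isinstance(expr, Seq):
        out: List[str] = []
        for part in expr.parts:
            out.extend(unroll(part, copies))
        return out
    if isinstance(expr, Repeat):
        return unroll(expr.body, copies) * copies
    raise TypeError(f"unknown expression: {expr!r}")

# COFFEEFILLER = (Fill: InformWorkers: CheckStock: AwaitEmpty)^ω
coffee_filler = Repeat(Seq(("Fill", "InformWorkers", "CheckStock", "AwaitEmpty")))
print(unroll(coffee_filler, copies=2))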
In many cases, it is insufficient simply to specify the liveness responsibilities of a role.
This is because an agent, carrying out a role, will be required to maintain certain invariants
while executing. For example, we might require that a particular agent taking part in
an electronic commerce application never spends more money than it has been allocated.
These invariants are called safety conditions, because they usually relate to the absence of
some undesirable condition arising.
Safety requirements in Gaia are specified by means of a list of predicates. These predicates
are typically expressed over the variables listed in a role's permissions attribute.
Returning to our COFFEEFILLER role, an agent carrying out this role will generally be required
to ensure that the coffee stock is never empty. We can do this by means of the
following safety expression:

• coffeeStock > 0
By convention, we simply list safety expressions as a bulleted list, each item in the list
expressing an individual safety responsibility. It is implicitly assumed that these responsibilities
apply across all states of the system execution. If the role is of infinitely long
duration (as in the COFFEEFILLER example), then the invariants must always be true.
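A safety responsibility of this kind can also be checked mechanically against a recorded execution trace. The sketch below is purely illustrative (the trace values are made up); it simply evaluates the invariant in every state and reports the first violation.

from typing import Callable, Dict, List, Optional

State = Dict[str, int]

def first_violation(trace: List[State], invariant: Callable[[State], bool]) -> Optional[int]:
    # Return the index of the first state violating the invariant, or None if it always holds.
    for i, state in enumerate(trace):
        if not invariant(state):
            return i
    return None

# A hypothetical trace of an agent playing COFFEEFILLER.
trace = [{"coffeeStock": 3}, {"coffeeStock": 1}, {"coffeeStock": 0}]

# The safety responsibility from the text: coffeeStock > 0 in every state.
print(first_violation(trace, lambda s: s["coffeeStock"] > 0))   # -> 2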
It is now possible to precisely define the Gaia roles model. A roles model is comprised
of a set of role schemata, one for each role in the system. A role schema draws together the
Role Schema: name of role
Description short English description of the role
Protocols and Activities protocols and activities in which the role plays a part
Permissions "rights" associated with the role
Responsibilities
Liveness liveness responsibilities
Safety safety responsibilities
Figure
3. Template for Role Schemata
Role Schema: COFFEEFILLER
Description:
This role involves ensuring that the coffee pot is kept filled, and informing the workers when fresh
coffee has been brewed.
Protocols and Activities:
Fill, InformWorkers, CheckStock, AwaitEmpty
Permissions:
reads supplied coffeeMaker // name of coffee maker
coffeeStatus // full or empty
changes coffeeStock // stock level of coffee
Responsibilities
Liveness:
COFFEEFILLER = (Fill: InformWorkers: CheckStock: AwaitEmpty)^ω
Safety:
• coffeeStock > 0
Figure
4. Schema for role COFFEEFILLER
various attributes discussed above into a single place (Figure 3). An exemplar instantiation
is given for the COFFEEFILLER role in Figure 4. This schema indicates that COFFEEFILLER
has permission to read the coffeeMaker parameter (that indicates which coffee machine the
role is intended to keep filled), and the coffeeStatus (that indicates whether the machine is
full or empty). In addition, the role has permission to change the value coffeeStock.
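Pulling the pieces of Figure 4 together, a role schema can be represented as a simple record. The sketch below is one possible encoding only; the field names, and the decision to keep the liveness expression and safety predicates as plain strings, are our assumptions rather than anything prescribed by Gaia.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RoleSchema:
    name: str
    description: str
    protocols_and_activities: List[str]
    permissions: Dict[str, List[str]]   # e.g. {"reads": [...], "changes": [...]}
    liveness: str                       # liveness expression, kept as text
    safety: List[str]                   # invariant predicates, kept as text

coffee_filler = RoleSchema(
    name="COFFEEFILLER",
    description="Keep the coffee pot filled and tell the workers when fresh coffee is brewed.",
    protocols_and_activities=["Fill", "InformWorkers", "CheckStock", "AwaitEmpty"],
    permissions={"reads": ["supplied coffeeMaker", "coffeeStatus"], "changes": ["coffeeStock"]},
    liveness="(Fill: InformWorkers: CheckStock: AwaitEmpty)^ω",
    safety=["coffeeStock > 0"],
)
print(coffee_filler.name, coffee_filler.liveness)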
3.2. The Interaction Model
There are inevitably dependencies and relationships between the various roles in a multi-agent
organisation. Indeed, such interplay is central to the way in which the system func-
tions. Given this fact, interactions obviously need to be captured and represented in the
Protocol: Fill
Initiator: COFFEEFILLER
Responder: COFFEEMACHINE
Inputs: supplied coffeeMaker
Outputs: coffeeStock
Processing: Fill coffee machine
Figure 5. The Fill Protocol Definition
analysis phase. In Gaia, such links between roles are represented in the interaction model.
This model consists of a set of protocol definitions, one for each type of inter-role interac-
tion. Here a protocol can be viewed as an institutionalised pattern of interaction. That is,
a pattern of interaction that has been formally defined and abstracted away from any particular
sequence of execution steps. Viewing interactions in this way means that attention
is focused on the essential nature and purpose of the interaction, rather than on the precise
ordering of particular message exchanges (cf. the interaction diagrams of OBJECTORY [6,
pp198-203] or the scenarios of FUSION [6]).
This approach means that a single protocol definition will typically give rise to a number
of message interchanges in the run time system. For example, consider an English auction
protocol. This involves multiple roles (sellers and bidders) and many potential patterns
of interchange (specific price announcements and corresponding bids). However, at the
analysis stage, such precise instantiation details are unnecessary and premature.
A protocol definition consists of the following attributes:
• purpose: brief textual description of the nature of the interaction (e.g., "information request", "schedule activity" and "assign task");
• initiator: the role(s) responsible for starting the interaction;
• responder: the role(s) with which the initiator interacts;
• inputs: information used by the role initiator while enacting the protocol;
• outputs: information supplied by/to the protocol responder during the course of the interaction;
• processing: brief textual description of any processing the protocol initiator performs during the course of the interaction.
As an illustration, consider the Fill protocol, which forms part of the COFFEEFILLER role
(Figure 5). This states that the protocol Fill is initiated by the role COFFEEFILLER and
involves the role COFFEEMACHINE. The protocol involves COFFEEFILLER putting coffee
in the machine named coffeeMaker, and results in COFFEEMACHINE being informed about
the value of coffeeStock. We will see further examples of protocols in section 5.
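The attributes of a protocol definition translate directly into a record type. The sketch below encodes the Fill protocol in this way; the ProtocolDefinition class is our own illustrative structure, not Gaia notation, and the processing text is paraphrased from the description above.

from dataclasses import dataclass
from typing import List

@dataclass
class ProtocolDefinition:
    # The attributes of a Gaia protocol definition, as listed above.
    name: str
    purpose: str
    initiator: List[str]
    responder: List[str]
    inputs: List[str]
    outputs: List[str]
    processing: str

fill = ProtocolDefinition(
    name="Fill",
    purpose="Fill coffee machine",
    initiator=["COFFEEFILLER"],
    responder=["COFFEEMACHINE"],
    inputs=["supplied coffeeMaker"],
    outputs=["coffeeStock"],
    processing="Put coffee in the named machine and report the resulting stock level.",
)
print(fill)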
3.3. The Analysis Process
The analysis stage of Gaia can now be summarised:
1. Identify the roles in the system. Roles in a system will typically correspond to:
• individuals, either within an organisation or acting independently;
• departments within an organisation; or
• organisations themselves.
Output: A prototypical roles model - a list of the key roles that occur in the system,
each with an informal, unelaborated description.
2. For each role, identify and document the associated protocols. Protocols are the patterns
of interaction that occur in the system between the various roles. For example, a
protocol may correspond to an agent in the role of BUYER submitting a bid to another
agent in the role of SELLER.
Output: An interaction model, which captures the recurring patterns of inter-role interaction.
3. Using the protocol model as a basis, elaborate the roles model.
Output: A fully elaborated roles model, which documents the key roles occurring in the
system, their permissions and responsibilities, together with the protocols and activities
in which they participate.
4. Iterate stages (1)-(3).
4. Design
The aim of a "classical" design process is to transform the abstract models derived during
the analysis stage into models at a sufficiently low level of abstraction that they can be
easily implemented. This is not the case with agent-oriented design, however. Rather, the
aim in Gaia is to transform the analysis models into a sufficiently low level of abstraction
that traditional design techniques (including object-oriented techniques) may be applied in
order to implement agents. To put it another way, Gaia is concerned with how a society of
agents cooperate to realise the system-level goals, and what is required of each individual
agent in order to do this. Actually how an agent realises its services is beyond the scope of
Gaia, and will depend on the particular application domain.
The Gaia design process involves generating three models (see Figure 1). The agent
model identifies the agent types that will make up the system, and the agent instances that
will be instantiated from these types. The services model identifies the main services that
are required to realise the agent's role. Finally, the acquaintance model documents the
lines of communication between the different agents.
Table 3. Instance Qualifiers
Qualifier   Meaning
n           there will be exactly n instances
m::n        there will be between m and n instances
*           there will be 0 or more instances
+           there will be 1 or more instances
4.1. The Agent Model
The purpose of the Gaia agent model is to document the various agent types that will be
used in the system under development, and the agent instances that will realise these agent
types at run-time.
An agent type is best thought of as a set of agent roles. There may in fact be a one-to-one
correspondence between roles (as identified in the roles model - see section 3.1) and agent
types. However, this need not be the case. A designer can choose to package a number
of closely related roles in the same agent type for the purposes of convenience. Efficiency
will also be a major concern at this stage: a designer will almost certainly want to optimise
the design, and one way of doing this is to aggregate a number of agent roles into a single
type. An example of where such a decision may be necessary is where the "footprint" of
an agent (i.e., its run-time requirements in terms of processor power or memory space)
is so large that it is more efficient to deliver a number of roles in a single agent than to
deliver a number of agents each performing a single role. There is obviously a trade-off
between the coherence of an agent type (how easily its functionality can be understood)
and the efficiency considerations that come into play when designing agent types. The
agent model is defined using a simple agent type tree, in which leaf nodes correspond to
roles, (as defined in the roles model), and other nodes correspond to agent types. If an
agent type t 1 has children t 2 and t 3 , then this means that t 1 is composed of the roles that
make up t 2 and t 3 .
We document the agent instances that will appear in a system by annotating agent types
in the agent model (cf. the qualifiers from FUSION [6]). An annotation n means that there
will be exactly n agents of this type in the run-time system. An annotation m::n means that
there will be no less than m and no more than n instances of this type in a run-time system
(m ≤ n). An annotation * means that there will be zero or more instances at run-time, and
+ means that there will be one or more instances at run-time (see Table 3).
Note that inheritance plays no part in Gaia agent models. Our view is that agents are
coarse grained computational systems, and an agent system will typically contain only a
comparatively small number of roles and types, with often a one-to-one mapping between
them. For this reason, we believe that inheritance has no useful part to play in the design of
agent types. (Of course, when it comes to actually implementing agents, inheritance may
be used to great effect, in the normal object-oriented fashion.)
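One way to record an agent model programmatically is as a list of agent types, each aggregating roles and carrying an instance annotation; a useful sanity check is then that no role is packaged into more than one type. The sketch below is illustrative only: the WORKER role and the agent type names are made up for the example.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class AgentType:
    name: str
    roles: List[str]     # roles aggregated by this agent type (leaves of the type tree)
    instances: str       # instance qualifier: "n", "m::n", "*" or "+"

def multiply_assigned_roles(agent_types: List[AgentType]) -> Dict[str, List[str]]:
    # Return any role that appears in more than one agent type, with the offending types.
    owners: Dict[str, List[str]] = {}
    for t in agent_types:
        for role in t.roles:
            owners.setdefault(role, []).append(t.name)
    return {role: names for role, names in owners.items() if len(names) > 1}

agent_model = [
    AgentType("CoffeeAgent", ["COFFEEFILLER"], "1"),
    AgentType("WorkerAgent", ["WORKER"], "+"),   # WORKER is a hypothetical role, for illustration
]
print(multiply_assigned_roles(agent_model))   # -> {} when no role is assigned twice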
4.2. The Services Model
As its name suggests, the aim of the Gaia services model is to identify the services associated
with each agent role, and to specify the main properties of these services. By a
service, we mean a function of the agent. In OO terms, a service would correspond to a
method; however, we do not mean that services are available for other agents in the same
way that an object's methods are available for another object to invoke. Rather, a service is
simply a single, coherent block of activity in which an agent will engage. It should be clear
that every activity identified at the analysis stage will correspond to a service, though not
every service will correspond to an activity.
For each service that may be performed by an agent, it is necessary to document its
properties. Specifically, we must identify the inputs, outputs, pre-conditions, and post-conditions
of each service. Inputs and outputs to services will be derived in an obvious
way from the protocols model. Pre- and post-conditions represent constraints on services.
These are derived from the safety properties of a role. Note that by definition, each role
will be associated with at least one service.
The services that an agent will perform are derived from the list of protocols, activ-
ities, responsibilities and the liveness properties of a role. For example, returning to
the coffee example, there are four activities and protocols associated with this role: Fill,
InformWorkers, CheckStock, and AwaitEmpty. In general, there will be at least one service
associated with each protocol. In the case of CheckStock, for example, the service (which
may have the same name), will take as input the stock level and some threshold value, and
will simply compare the two. The pre- and post-conditions will both state that the coffee
stock level is greater than 0. This is one of the safety properties of the role COFFEEFILLER.
The Gaia services model does not prescribe an implementation for the services it doc-
uments. The developer is free to realise the services in any implementation framework
deemed appropriate. For example, it may be decided to implement services directly as
methods in an object-oriented language. Alternatively, a service may be decomposed into
a number of methods.
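The pre- and post-conditions of a service can be expressed as predicates over the information the agent holds. The sketch below encodes the CheckStock service discussed above; the Service class and the dictionary "knowledge base" are our own illustrative devices, not part of Gaia.

from dataclasses import dataclass
from typing import Callable, Dict, List

KB = Dict[str, object]

@dataclass
class Service:
    name: str
    inputs: List[str]
    outputs: List[str]
    pre: Callable[[KB], bool]
    post: Callable[[KB], bool]

# CheckStock: compare the stock level against a threshold; the pre- and
# post-condition both require a non-empty coffee stock (a safety property of COFFEEFILLER).
check_stock = Service(
    name="CheckStock",
    inputs=["coffeeStock", "threshold"],
    outputs=["refillNeeded"],
    pre=lambda kb: kb["coffeeStock"] > 0,
    post=lambda kb: kb["coffeeStock"] > 0,
)

kb: KB = {"coffeeStock": 2, "threshold": 5}
assert check_stock.pre(kb)
kb["refillNeeded"] = kb["coffeeStock"] < kb["threshold"]
assert check_stock.post(kb)
print(kb)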
4.3. The Acquaintance Model
The final Gaia design model is probably the simplest: the acquaintance model. Acquaintance
models simply define the communication links that exist between agent types. They
do not define what messages are sent or when messages are sent - they simply indicate
that communication pathways exist. In particular, the purpose of an acquaintance model
is to identify any potential communication bottlenecks, which may cause problems at run-time
(see section 5 for an example). It is good practice to ensure that systems are loosely
coupled, and the acquaintance model can help in doing this. On the basis of the acquaintance
model, it may be found necessary to revisit the analysis stage and rework the system
design to remove such problems.
An agent acquaintance model is simply a graph, with nodes in the graph corresponding
to agent types and arcs in the graph corresponding to communication pathways. Agent
acquaintance models are directed graphs, and so an arc a ! b indicates that a will send
messages to b, but not necessarily that b will send messages to a. An acquaintance model
may be derived in a straightforward way from the roles, protocols, and agent models.
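Because the acquaintance model is just a directed graph over agent types, it can be derived mechanically from the protocol definitions together with the role-to-agent-type mapping. The sketch below does this for the coffee example; the WORKER role, the MachineAgent and WorkerAgent type names, and the fan-in "bottleneck" heuristic are all assumptions made for illustration.

from collections import Counter

# Protocols reduced to (initiator role, responder role) pairs.
protocol_roles = [
    ("COFFEEFILLER", "COFFEEMACHINE"),   # Fill
    ("COFFEEFILLER", "WORKER"),          # InformWorkers (WORKER is hypothetical)
]

# Role -> agent type, taken from an (assumed) agent model.
role_to_agent = {
    "COFFEEFILLER": "CoffeeAgent",
    "COFFEEMACHINE": "MachineAgent",
    "WORKER": "WorkerAgent",
}

# The acquaintance model: directed edges between agent types.
edges = {(role_to_agent[i], role_to_agent[r]) for i, r in protocol_roles}
print(sorted(edges))

# A crude check for potential communication bottlenecks: agent types with high fan-in.
fan_in = Counter(dst for _, dst in edges)
print(fan_in.most_common())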
4.4. The Design Process
The Gaia design stage can now be summarised:
1. Create an agent model:
ffl aggregate roles into agent types, and refine to form an agent type hierarchy;
ffl document the instances of each agent type using instance annotations.
2. Develop a services model, by examining activities, protocols, and safety and liveness
properties of roles.
3. Develop an acquaintance model from the interaction model and agent model.
5. A Case Study: Agent-Based Business Process Management
This section briefly illustrates how Gaia can be applied, through a case study of the analysis
and design of an agent-based system for managing a British Telecom business process
(see [20] for more details). For reasons of brevity, we omit some details, and aim instead
to give a general flavour of the analysis and design.
The particular application is providing customers with a quote for installing a network
to deliver a particular type of telecommunications service. This activity involves the following
departments: the customer service division (CSD), the design division (DD), the
legal division (LD) and the various organisations who provide the out-sourced service of
vetting customers (VCs). The process is initiated by a customer contacting the CSD with a
set of requirements. In parallel to capturing the requirements, the CSD gets the customer
vetted. If the customer fails the vetting procedure, the quote process terminates. Assuming
the customer is satisfactory, their requirements are mapped against the service portfolio. If
they can be met by a standard off-the-shelf item then an immediate quote can be offered.
In the case of bespoke services, however, the process is more complex. DD starts to design
a solution to satisfy the customer's requirements and whilst this is occurring LD checks
the legality of the proposed service. If the desired service is illegal, the quote process ter-
minates. Assuming the requested service is legal, the design will eventually be completed
and costed. DD then informs CSD of the quote. CSD, in turn, informs the customer. The
business process then terminates.
Moving from this process-oriented description of the system's operation to an organisational
view is comparatively straightforward. In many cases there is a one to one mapping
between departments and roles. CSD's behaviour falls into two distinct roles: one
acting as an interface to the customer ( CUSTOMERHANDLER, Figure 6), and one overseeing
the process inside the organisation ( QUOTEMANAGER, Figure 7). Thus, the VC's, the
LD's, and the DD's behaviour are covered by the roles CUSTOMERVETTER (Figure 8),
LEGALADVISOR (Figure 9), and NETWORKDESIGNER (Figure 10), respectively. The final
role is that of the CUSTOMER (Figure 11) who requires the quote.
Role Schema: CUSTOMERHANDLER (CH)
Description:
Receives quote request from the customer and oversees process to ensure appropriate quote is returned.
Protocols and Activities:
AwaitCall, ProduceQuote, InformCustomer
Permissions:
reads supplied customerDetails // customer contact information
supplied customerRequirements // what customer wants
quote // completed quote or nil
Responsibilities
Liveness:
CUSTOMERHANDLER = (AwaitCall: ProduceQuote: InformCustomer)^ω
Safety:
• true
Figure
6. Schema for role CUSTOMERHANDLER
With the respective role definitions in place, the next stage is to define the associated
interaction models for these roles. Here we focus on the interactions associated with the
QUOTEMANAGER role. This role interacts with the CUSTOMER role to obtain the customer's
requirements ( GetCustomerRequirements protocol, Figure 12c) and with the CUSTOMERVETTER
role to determine whether the customer is satisfactory ( VetCustomer protocol, Figure 12a).
If the customer proves unsatisfactory, these are the only two protocols that are enacted.
If the customer is satisfactory then their request is costed. This costing involves enacting
activity CostStandardService for frequently requested services or the CheckServiceLegality
(Figure 12b) and CostBespokeService (Figure 12d) protocols for non-standard requests.
Having completed our analysis of the application, we now turn to the design phase.
The first model to be generated is the agent model (Figure 13). This shows, for most
cases, a one-to-one correspondence between roles and agent types. The exception is for
the CUSTOMERHANDLER and QUOTEMANAGER roles which, because of their high degree
of interdependence are grouped into a single agent type.
The second model is the services model. Again because of space limitations we concentrate
on the QUOTEMANAGER role and the Customer Service Division Agent. Based
on the QUOTEMANAGER role, seven distinct services can be identified (Table 4). From the
GetCustomerRequirements protocol, we derive the service "obtain customer requirements".
This service handles the interaction from the perspective of the quote manager. It takes the
customerDetails as input and returns the customerRequirements as output (Figure 12c).
There are no associated pre- or post-conditions.
The service associated with the VetCustomer protocol is "vet customer". Its inputs, derived
from the protocol definition (Figure 12a), are the customerDetails and its outputs are
creditRating. This service has a pre-condition that an appropriate customer vetter must be
Role Schema: QUOTEMANAGER (QM)
Description:
Responsible for enacting the quote process. Generates a quote or returns no quote (nil) if customer is
inappropriate or service is illegal.
Protocols and Activities:
VetCustomer, GetCustomerRequirements, CostStandardService, CheckServiceLegality,
CostBespokeService
Permissions:
reads supplied customerDetails // customer contact information
supplied customerRequirements // detailed service requirements
creditRating // customer's credit rating
serviceIsLegal // boolean for bespoke requests
generates quote // completed quote or nil
Responsibilities
Liveness:
QUOTEMANAGER = (GetCustomerRequirements || VetCustomer): QuoteResponse
QuoteResponse = [CostService]
CostService = CostStandardService | (CheckServiceLegality || CostBespokeService)
Safety:
Figure
7. Schema for role QUOTEMANAGER
available (derived from the TenderContract interaction on the VetCustomer protocol) and a
post condition that the value of creditRating is non-null (because this forms part of a safety
condition of the QUOTEMANAGER role).
The third service involves checking whether the customer is satisfactory (the creditRating
safety condition of QUOTEMANAGER). If the customer is unsatisfactory then only the first
branch of the QuoteResponse liveness condition (Figure 7) gets executed. If the customer is
satisfactory, the CostService liveness route is executed.
The next service makes the decision of which path of the CostService liveness expression
gets executed. Either the service is of a standard type (execute the service "produce
standard costing") or it is a bespoke service in which case the CheckServiceLegality and
CostBespokeService protocols are enacted. In the latter case, the protocols are associated
with the service "produce bespoke costing". This service produces a non-nil value for
quote as long as the serviceIsLegal safety condition (Figure 7) is not violated.
The final service involves informing the customer of the quote. This, in turn, completes
the CUSTOMERHANDLER role.
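The pre- and post-conditions in the services model below can be read as executable predicates over the values the Customer Service Division agent holds. As an illustration only (the dictionary representation and the sample values are made up, not taken from the ADEPT system), the sketch checks the "vet customer" and "produce bespoke service costing" conditions along one successful path.

def vet_customer_pre(kb):
    return kb.get("customerVetterAvailable", False)

def vet_customer_post(kb):
    return kb.get("creditRating") is not None

def bespoke_costing_pre(kb):
    return kb.get("serviceType") == "bespoke"

def bespoke_costing_post(kb):
    quote, legal = kb.get("quote"), kb.get("serviceIsLegal")
    return (quote is not None and legal) or (quote is None and not legal)

kb = {"customerVetterAvailable": True, "serviceType": "bespoke", "quote": None}
assert vet_customer_pre(kb)
kb["creditRating"] = "good"                  # outcome of the vetting, made up for illustration
assert vet_customer_post(kb)
assert bespoke_costing_pre(kb)
kb.update(quote=1200, serviceIsLegal=True)   # outcome of the costing, made up for illustration
assert bespoke_costing_post(kb)
print("pre/post conditions hold for this trace")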
Table 4. The services model

Service | Inputs | Outputs | Pre-condition | Post-condition
obtain customer requirements | customerDetails | customerRequirements | true | true
vet customer | customerDetails | creditRating | customer vetter available | creditRating ≠ nil
check customer | creditRating | continuationDecision | continuationDecision = nil | continuationDecision ≠ nil
check service type | customerRequirements | serviceType | creditRating ≠ bad | serviceType ∈ {standard, bespoke}
produce standard service costing | serviceType, customerRequirements | quote | serviceType = standard ∧ quote = nil | quote ≠ nil
produce bespoke service costing | serviceType, customerRequirements | quote, serviceIsLegal | serviceType = bespoke | (quote ≠ nil ∧ serviceIsLegal) ∨ (quote = nil ∧ ¬serviceIsLegal)
inform customer | customerDetails, quote | - | true | customer knows quote
Role Schema: CUSTOMERVETTER (CV)
Description:
Checks credit rating of supplied customer.
Protocols and Activities:
VettingRequest, VettingResponse
Permissions:
reads supplied customerDetails // customer contact information
customerRatingInformation // credit rating information
generates creditRating // credit rating of customer
Responsibilities
Liveness:
CUSTOMERVETTER = (VettingRequest: VettingResponse)^ω
Safety:
Figure
8. Schema for role CUSTOMERVETTER
Role Schema: LEGALADVISOR (LA)
Description:
Determines whether given bespoke service request is legal or not.
Protocols and Activities:
LegalCheckRequest, LegalCheckResponse
Permissions:
reads supplied customerRequirements // details of proposed service
generates serviceIsLegal // true or false
Responsibilities
Liveness:
LEGALADVISOR = (LegalCheckRequest: LegalCheckResponse)^ω
Safety:
• true
Figure
9. Schema for role LEGALADVISOR
The final model is the acquaintance model, which shows the communication pathways that exist
between agents (Figure 14).
Role Schema: NETWORKDESIGNER (ND)
Description:
Design and cost network to meet bespoke service request requirements.
Protocols and Activities:
CostingRequest, ProduceDesign, ReturnCosting
Permissions:
reads supplied customerRequirements // details of proposed service
serviceIsLegal // boolean
generates quote // cost of realising service
Responsibilities
Liveness:
NETWORKDESIGNER = (CostingRequest: ProduceDesign: ReturnCosting)^ω
Safety:
Figure
10. Schema for role NETWORKDESIGNER
Role Schema: CUSTOMER (CUST)
Description:
Organisation or individual requiring a service quote.
Protocols and Activities:
MakeCall, GiveRequirements
Permissions:
generates customerDetails // Owner of customer information
customerRequirements // Owner of customer requirements
Responsibilities
Liveness:
CUSTOMER = MakeCall: GiveRequirements
Safety:
• true
Figure
11. Schema for role CUSTOMER
6. Related Work
In recent times there has been a surge of interest in agent-oriented modelling techniques and method-
ologies. The various approaches may be roughly grouped as follows:
Figure 12. Definition of protocols associated with the QUOTEMANAGER role: (a) VetCustomer (TenderContract, VettingRequest, VettingResponse), (b) CheckServiceLegality (LegalCheckRequest, LegalCheckResponse), (c) GetCustomerRequirements (RequirementsRequest, GiveRequirements), and (d) CostBespokeService (CostingRequest, ReturnCosting).
Figure 13. The agent model: CUSTOMER maps to CustomerAgent; CUSTOMERHANDLER and QUOTEMANAGER are grouped into CustomerServiceDivisionAgent; CUSTOMERVETTER maps to VetCustomerAgent; NETWORKDESIGNER to NetworkDesignerAgent; and LEGALADVISOR to LegalAdvisorAgent.
• those such as [4, 24] which take existing OO modelling techniques or methodologies as their
basis, seeking either to extend and adapt the models and define a methodology for their use, or
to directly extend the applicability of OO methodologies and techniques, such as design patterns,
to the design of agent systems,
Figure 14. The acquaintance model: communication pathways between CustomerAgent, CustomerServiceDivisionAgent, VetCustomerAgent, NetworkDesignerAgent, and LegalAdvisorAgent.
• those such as [3, 17] which build upon and extend methodologies and modelling techniques from
knowledge engineering, providing formal, compositional modelling languages suitable for the
verification of system structure and function,
• those which take existing formal methods and languages, for example Z [31], and provide definitions
within such a framework that support the specification of agents or agent systems [26],
and
• those which have essentially been developed de novo for particular kinds of agent systems.
CASSIOPEIA [7], for example, supports the design of Contract Net [29] based systems and has
been applied to Robot Soccer.
These design methodologies may also be divided into those that are essentially top-down approaches
based on progressive decomposition of behaviour, usually building (as in Gaia) on some notion
of role, and those such as CASSIOPEIA that are bottom-up approaches which begin by identifying
elementary agent behaviours. A very useful survey which classifies and reviews these and other
methodologies has also appeared [16].
The definition and use of various notions of role, responsibility, interaction, team and society or
organization in particular methods for agent-oriented analysis and design has inherited or adapted
much from more general uses of these concepts within multi-agent systems, including organization-
focussed approaches such as [14, 9, 18] and sociological approaches such as [5]. However, it is
beyond the scope of this article to compare the Gaia definition and use of these concepts with this
heritage.
Instead, we will focus here on the relationship between Gaia and other approaches that build
upon OO techniques, in particular the KGR approach [24, 23]. But it is perhaps useful to begin by
summarizing why OO modelling techniques and design methodologies themselves are not directly
applicable to multi-agent system design.
6.1. Shortcomings of Object Oriented techniques
The first problem concerns the modelling of individual agents or agent classes. While there are superficial
similarities between agents and objects, representing an agent as an object, i.e., as a set of
attributes and methods, is not very useful because the representation is too fine-grained, operating
at an inappropriate level of abstraction. An agent so represented may appear quite strange, perhaps
exhibiting only one public method whose function is to receive messages from other agents. Thus an
object model does not capture much useful information about an agent, and powerful OO concepts
such as inheritance and aggregation become quite useless as a result of the poverty of the representation.
There are several reasons for this problem. One is that the agent paradigm is based on a significantly
stronger notion of encapsulation than the object paradigm. An agent's internal state is
usually quite opaque and, in some systems, the behaviours that an agent will perform upon request
are not even made known until it advertises them within an active system. Related to this is the key
characteristic of autonomy: agents cannot normally be created and destroyed in the liberal manner
allowed within object systems and they have more freedom to determine how they may respond to
messages, including, for example, by choosing to negotiate some agreement about how a task will be
performed. As the underlying communication model is usually asynchronous there is no predefined
notion of flow of control from one agent to another: an agent may autonomously initiate internal or
external behaviour at any time, not just when it is sent a message. Finally, an agent's internal state,
including its knowledge, may need to be represented in a manner that cannot easily be translated into
a set of attributes; in any case to do so would constitute a premature implementation bias.
The second problem concerns the power of object models to adequately capture the relationships
that hold between agents in a multi-agent system. While the secondary models in common use in OO
methodologies such as use cases and interaction diagrams may usefully be adapted (with somewhat
different semantics), the Object Model, which constitutes the primary specification of an OO system,
captures associations between object classes that model largely static dependencies and paths of
accessibility which are largely irrelevant in a multi-agent system. Only the instantiation relationship
between classes and instances can be directly adopted. Important aspects of relationships between
agents such as their repertoire of interactions and their degree of control or influence upon each
other are not easily captured. The essential problem here is the uniformity and static nature of the
OO object model. An adequate agent model needs to capture these relationships between agents,
their dynamic nature, and perhaps also relationships between agents and non-agent elements of the
system, including passive or abstract ones such as those modelled here as resources.
Both of these are problems concerning the suitability of OO modelling techniques for modelling
a multi-agent system. Another issue is the applicability of OO methodologies to the process of
analyzing and designing a multi-agent system. OO methodologies typically consist of an iterative
refinement cycle of identifying classes, specifying their semantics and relationships, and elaborating
their interfaces and implementation. At this level of abstraction, they appear similar to typical AO
methodologies, which usually proceed by identifying roles and their responsibilities and goals, developing
an organizational structure, and elaborating the knowledge and behaviours associated with
a role or agent.
However, this similarity disappears at the level of detail required by the models, as the key abstractions
involved are quite different. For example, the first step of object class identification typically
considers tangible things, roles, organizations, events and even interactions as candidate objects,
whereas these need to be clearly distinguished and treated differently in an agent-oriented approach.
The uniformity and concreteness of the object model is the basis of the problem; OO methodologies
provide guidance or inspiration rather than a directly useful approach to analysis and design.
6.2. Comparison with the KGR approach
The KGR approach [24, 23] was developed to fulfill the need for a principled approach to the
specification of complex multi-agent systems based on the belief-desire-intention (BDI) technology
of the Procedural Reasoning System (PRS) and the Distributed Multi-Agent Reasoning System
(DMARS) [25, 8]. A key motivation of the work was to provide useful, familiar mechanisms for
structuring and managing the complexity of such systems.
The first and most obvious difference between the approach proposed here and KGR is one of
scope. Our methodology does not attempt to unify the analysis and abstract design of a multi-agent
system with its concrete design and implementation with a particular agent technology, regarding the
output of the analysis and design process as an abstract specification to which traditional lower-level
design methodologies may be applied. KGR, by contrast, makes a strong architectural commitment
to BDI architectures and proposes a design elaboration and refinement process that leads to directly
executable agent specifications. Given the proliferation of available agent technologies, there are
clearly advantages to a more general approach, as proposed here. However, the downside is that it
cannot provide a set of models, abstractions and terminology that may be used uniformly throughout
the system life cycle. Furthermore, there may be a need for iteration of the AO analysis and design
process if the lower-level design process reveals issues that are best resolved at the AO level. A re-search
problem for our approach and others like it is whether and how the adequacy and completeness
of its outputs can be assessed independently of any traditional design process that follows.
A second difference is that in this work a clear distinction is made between the analysis phase, in
which the roles and interaction models are fully elaborated, and the design phase, in which agent,
services and acquaintance models are developed. The KGR approach does not make such a distinc-
tion, proposing instead the progressive elaboration and refinement of agent and interaction models
which capture respectively roles, agents and services, and interactions and acquaintances. While
both methodologies begin with the identification of roles and their properties, here we have chosen
to model separately abstract agents (roles), concrete agents and the services they provide. KGR, on
the other hand, employs a more uniform agent model which admits both abstract agents and concrete
agent classes and instances and allows them to be organized within an inheritance hierarchy, thus
allowing multiple levels of abstraction and the deferment of identification of concrete agent classes
until late in the design process.
While both approaches employ responsibilities as an abstraction used to decompose the structure
of a role, they differ significantly as to how these are represented and developed. Here responsibilities
consist of safety and liveness properties built up from already identified interactions and activities.
By contrast, KGR treats responsibilities as abstract goals, triggered by events or interactions, and
adopts a strictly top-down approach to decomposing these into services and low level goals for which
activity specifications may be elaborated. There are similarities however, for despite the absence of
explicit goals in our approach, safety properties may be viewed as maintenance goals and liveness
properties as goals of achievement. The notion of permissions, however, is absent from the KGR
approach, whereas the notion of protocols may be developed to a much greater degree of detail, for
example as in [22]. There protocols are employed as more generic descriptions of behaviour that
may involve entities not modelled as agents, such as the coffee machine.
To summarize the key differences, the KGR approach, by making a commitment to implementation
with a BDI agent architecture, is able to employ an iterative top-down approach to elaborating a set of
models that describe a multi-agent system at both the macro- and micro-level, to make more extensive
use of OO modelling techniques, and to produce executable specifications as its final output. The
approach we have described here is a mixed top-down and bottom-up approach which employs a
more fine-grained and diverse set of generic models to capture the result of the analysis and design
process, and tries to avoid any premature commitment, either architectural, or as to the detailed
design and implementation process which will follow. We envisage, however, that our approach can
be suitably specialized for specific agent architectures or implementation techniques; this is a subject
for further research.
7. Conclusions and Further Work
In this article, we have described Gaia, a methodology for the analysis and design of agent-based
systems. The key concepts in Gaia are roles, which have associated with them responsibilities,
permissions, activities, and protocols. Roles can interact with one another in certain institutionalised
ways, which are defined in the protocols of the respective roles.
There are several issues remaining for future work.
• Self-Interested Agents.
Gaia does not explicitly attempt to deal with systems in which agents may not share common
goals. This class of systems represents arguably the most important application area for multi-agent
systems, and it is therefore essential that a methodology should be able to deal with it.
• Dynamic and open systems.
Open systems - in which system components may join and leave at run-time, and which may
be composed of entities that a designer had no knowledge of at design-time - have long been
recognised as a difficult class of system to engineer [15, 13].
• Organisation structures.
Another aspect of agent-based analysis and design that requires more work is the notion of an
organisational structure. At the moment, such structures are only implicitly defined within Gaia
- within the role and interaction models. However, direct, explicit representations of such
structures will be of value for many applications. For example, if agents are used to model large
organisations, then these organisations will have an explicitly defined structure. Representing
such structures may be the only way of adequately capturing and understanding the organisa-
tion's communication and control structures. More generally, the development of organisation
design patterns might be useful for reusing successful multi-agent system structures (cf. [12]).
• Cooperation Protocols.
The representation of inter-agent cooperation protocols within Gaia is currently somewhat im-
poverished. In future work, we will need to provide a much richer protocol specification framework.
• International Standards.
Gaia was not designed with any particular standard for agent communication in mind (such as
the FIPA agent communication language [11]). However, in the event of widescale industrial
takeup of such standards, it may prove useful to adapt our methodology to be compatible with
such standards.
- Formal Semantics.
Finally, we believe that a successful methodology is one that is not only of pragmatic value, but
one that also has a well-defined, unambiguous formal semantics. While the typical developer
need never even be aware of the existence of such a semantics, it is nevertheless essential to have
a precise understanding of what the concepts and terms in a methodology mean [33].
Acknowledgments
This article is a much extended version of [35]. We are grateful to the participants of the Agents 99
conference, who gave us much useful feedback.
Notes
1. In Greek mythology, Gaia was the mother Earth figure. More pertinently, Gaia is the name of an influential
hypothesis put forward by the ecologist James Lovelock, to the effect that all the living organisms on the Earth
can be understood as components of a single entity, which regulates the Earth's environment. The theme of
many heterogeneous entities acting together to achieve a single goal is a central theme in multi-agent systems
research [1], and was a key consideration in the development of our methodology.
2. To be more precise, we believe such systems will require additional models over and above those that we
outline in the current version of the methodology.
3. The third case, which we have not yet elaborated in the methodology, is that a single role represents the
collective behaviour of a number of individuals. This view is important for modelling cooperative and team
problem solving and also for bridging the gap between the micro and the macro levels in an agent-based
system.
4. The most widely used formalism for specifying liveness and safety properties is temporal logic, and in previous
work, the use of such formalism has been strongly advocated for use in agent systems [10]. Although
it has undoubted strengths as a mathematical tool for expressing liveness and safety properties, there is some
doubt about its viability as a tool for use by everyday software engineers. We have therefore chosen an alternative
approach to temporal logic, based on regular expressions, as these are likely to be better understood by
our target audience.
5. For the moment, we do not explicitly model the creation and deletion of roles. Thus roles are persistent
throughout the system's lifetime. In the future, we plan to make this a more dynamic process.
--R
Readings in Distributed Artificial Intelligence.
Formal specification of multi-agent systems: a real-world case
Models and methodologies for agent-oriented analysis and design
Commitments: from individual intentions to groups and organizations.
Agent oriented design of a soccer robot team.
A formal specification of dMARS.
On the formal specification and verification of multi-agent systems
The Foundation for Intelligent Physical Agents.
Design Patterns.
Social conceptions of knowledge and action: DAI foundations and open systems semantics.
MACE: A flexible testbed for distributed AI research.
Open information systems semantics for distributed artificial intelligence.
A survey of agent-oriented methodologies
Analysis and design of multiagent systems using MAS-CommonKADS
Organization self design of production systems.
Using ARCHON to develop real-world DAI applications for electricity transportation management and particle acceleration control
Systematic Software Development using VDM (second edition).
The AGENTIS agent interaction model.
Modelling and design of multi-agent systems
A methodology and modelling technique for systems of BDI agents.
The Distributed Multi-Agent Reasoning System Architecture and Language Specification
From agent theory to agent construction: A case study.
Specification and development of reactive systems.
The CONTRACT NET: A formalism for the control of distributed problem solving.
A Framework for Distributed Problem Solving.
The Z Notation (second edition).
Intelligent agents: Theory and practice.
Pitfalls of agent-oriented development
A methodology for agent-oriented analysis and design
| methodologies;analysis and design;agent-oriented;software engineering
608666 | Hierarchical Wrapper Induction for Semistructured Information Sources. | With the tremendous amount of information that becomes available on the Web on a daily basis, the ability to quickly develop information agents has become a crucial problem. A vital component of any Web-based information agent is a set of wrappers that can extract the relevant data from semistructured information sources. Our novel approach to wrapper induction is based on the idea of hierarchical information extraction, which turns the hard problem of extracting data from an arbitrarily complex document into a series of simpler extraction tasks. We introduce an inductive algorithm, STALKER, that generates high accuracy extraction rules based on user-labeled training examples. Labeling the training data represents the major bottleneck in using wrapper induction techniques, and our experimental results show that STALKER requires up to two orders of magnitude fewer examples than other algorithms. Furthermore, STALKER can wrap information sources that could not be wrapped by existing inductive techniques. | Introduction
With the Web, computer users have gained access to a large variety of
comprehensive information repositories. However, the Web is based on
a browsing paradigm that makes it difficult to retrieve and integrate
data from multiple sources. The most recent generation of information
agents (e.g., WHIRL (Cohen, 1998), Ariadne (Knoblock et al., 1998),
and Information Manifold (Kirk et al., 1995) ) address this problem
by enabling information from pre-specified sets of Web sites to be
accessed via database-like queries. For instance, consider the query
"What seafood restaurants in L.A. have prices below $20 and accept
the Visa credit-card?" Assume that we have two information sources
that provide information about LA restaurants: the Zagat Guide and
LA Weekly (see Figure 1). To answer this query, an agent could use
Zagat's to identify seafood restaurants under $20 and then use LA
Weekly to check which of these accept Visa.
Information agents generally rely on wrappers to extract information
from semistructured Web pages (a document is semistructured if the
location of the relevant information can be described based on a concise,
formal grammar). Each wrapper consists of a set of extraction rules
and the code required to apply those rules. Some systems, such as
tsimmis (Chawathe et al., 1994) and araneus (Atzeni et al., 1997)
depend on humans to write the necessary grammar rules. However,
there are several reasons why this is undesirable. Writing extraction
rules is tedious, time consuming and requires a high level of expertise.
These difficulties are multiplied when an application domain involves a
large number of existing sources or the format of the source documents
changes over time.
In this paper, we introduce a new machine learning method for
wrapper construction that enables unsophisticated users to painlessly
turn Web pages into relational information sources. The next section
presents a formalism describing semistructured Web documents, and
then Sections 3 and 4 present a domain-independent information extractor
that we use as a skeleton for all our wrappers. Section 5 describes
stalker, a supervised learning algorithm for inducing extraction
rules, and Section 6 presents a detailed example. The final sections
describe our experimental results, related work and conclusions.
2. Describing the Content of a Page
Because Web pages are intended to be human readable, there are some
common conventions for structuring HTML documents. For instance,
the information on a page often exhibits some hierarchical structure;
furthermore, semistructured information is often presented in the form
of lists of tuples, with explicit separators used to distinguish the different
elements. With these observations in mind, we developed the
embedded catalog (EC) formalism, which can describe the structure of
a wide-range of semistructured documents.
The EC description of a page is a tree-like structure in which the
leaves are the items of interest for the user (i.e., they represent the
relevant data). The internal nodes of the EC tree represent lists of k-tuples
(e.g., lists of restaurant descriptions), where each item in the
k-tuple can be either a leaf l or another list L (in which case L is called
an embedded list). For instance, Figure 2 displays the EC descriptions
of the LA-Weekly and Zagat pages. At the top level, an LA-Weekly
page is a list of 5-tuples that contain the name, address, phone,
review, and an embedded list of credit cards. Similarly, a Zagat
document can be seen as a 7-tuple that includes a list of addresses,
Figure 1. LA-Weekly and Zagat's Restaurant Descriptions.
Figure 2. EC description of LA-Weekly and ZAGAT pages: the LA-Weekly tree is a LIST of (name, address, phone, review, LIST(credit_card)) tuples; the ZAGAT tree is a (name, food, decor, service, cost, LIST(Addresses), review) tuple, where each address is a (street, city, area-code, phone-number) tuple.
where each individual address is a 4-tuple street, city, area-code,
and phone-number.
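To make the EC formalism concrete, the following Python sketch shows one possible in-memory representation of an EC tree; the class and field names are illustrative choices of ours, not part of the formalism.

# A minimal sketch of an embedded catalog (EC) tree; class and field names are illustrative.
class Leaf:
    def __init__(self, name):
        self.name = name          # e.g., "name", "area-code"

class Tuple:
    def __init__(self, *children):
        self.children = children  # leaves or embedded lists

class List:
    def __init__(self, name, tuple_schema):
        self.name = name
        self.tuple_schema = tuple_schema

# The ZAGAT page of Figure 2: a 7-tuple with an embedded list of addresses.
zagat = Tuple(Leaf("name"), Leaf("food"), Leaf("decor"), Leaf("service"), Leaf("cost"),
              List("addresses", Tuple(Leaf("street"), Leaf("city"),
                                      Leaf("area-code"), Leaf("phone-number"))),
              Leaf("review"))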
3. Extracting Data from a Document
In order to extract the items of interest, a wrapper uses the EC description
of the document and a set of extraction rules. For each node in the
tree, the wrapper needs a rule that extracts that particular node from
its parent. Additionally, for each list node, the wrapper requires a list
iteration rule that decomposes the list into individual tuples. Given the
EC tree and the rules, any item can be extracted by simply determining
the path P from the root to the corresponding node and by successively
extracting each node in P from its parent. If the parent of a node x is a
list, the wrapper applies first the list iteration rule and then it applies
x's extraction rule to each extracted tuple.
In our framework a document is a sequence of tokens S (e.g., words,
tags, etc). It follows that the content of the root node
in the EC tree is the whole sequence S, while the content of each of
1: <p> Name:<b> Yala </b><p> Cuisine: Thai <p><i>
2: 4000 Colfax, Phoenix, AZ 85258 (602) 508-1570
3: </i> <br> <i>
4: 523 Vernon, Las Vegas, NV 89104 (702) 578-2293
5: </i> <br> <i>
7: </i>
Figure 3. A simplified version of a Zagat document.
its children is a subsequence of S. More generally, the content of an
arbitrary node x represents a subsequence of the content of its parent
p. A key idea underlying our work is that the extraction rules can be
based on "landmarks" (i.e., groups of consecutive tokens) that enable
a wrapper to locate the content of x within the content of p.
For instance, let us consider the restaurant descriptions presented
in Figure 3. In order to identify the beginning of the restaurant name,
we can use the rule
R1 ::= SkipTo(<b>)
which has the following meaning: start from the beginning of the document
and skip everything until you find the <b> landmark. More
formally, the rule R1 is applied to the content of the node's parent,
which in this particular case is the whole document; the effect of applying
R1 consists of consuming the prefix of the parent, which ends
at the beginning of the restaurant name. Similarly, one can identify
the end of a node's content by applying a rule that consumes the
corresponding suffix of the parent. For instance, in order to find the
end of the restaurant name, one can apply the rule
R2 ::= SkipTo(</b>)
from the end of the document towards its beginning.
The rules R1 and R2 are called start and end rules, and, in most of
the cases, they are not unique. For instance, instead of R1 we can use
R3 ::= SkipTo(Name) SkipTo(<b>)
or
R4 ::= SkipTo(Name Punctuation HtmlTag)
R3 has the meaning "ignore everything until you find a Name landmark,
and then, again, ignore everything until you find <b>", while R4 is
interpreted as "ignore all tokens until you find a 3-token landmark
that consists of the token Name, immediately followed by a punctuation
symbol and an HTML tag." As the rules above successfully identify
the start of the restaurant name, we say that they match correctly.
By contrast, the start rules SkipTo(:) and SkipTo(<i>) are said to
match incorrectly because they consume too few or too many tokens,
respectively (in stalker terminology, the former is an early match,
while the latter is a late match). Finally, a rule like SkipTo(<table>)
fails because the landmark <table> does not exist in the document.
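As an illustration of the rule semantics just described, the sketch below applies a SkipTo()-style start rule to a token sequence; the crude tokenizer and the convention of returning the index of the first unconsumed token (or None on failure) are simplifying assumptions of ours.

import re

def tokenize(text):
    # Crude tokenizer: HTML tags, alphanumeric words and single punctuation marks become tokens.
    return re.findall(r"</?\w+>|\w+|[^\w\s]", text)

def skip_to(tokens, start, landmark):
    # Consume tokens until the landmark (a short list of tokens) is found; return the index
    # just after the landmark, or None if it does not occur (the rule fails).
    for i in range(start, len(tokens) - len(landmark) + 1):
        if tokens[i:i + len(landmark)] == landmark:
            return i + len(landmark)
    return None

doc = "<p> Name:<b> Yala </b><p> Cuisine: Thai <p><i> 4000 Colfax, Phoenix, AZ 85258 (602) 508-1570 </i>"
tokens = tokenize(doc)
print(skip_to(tokens, 0, ["<b>"]))      # 4: the name "Yala" starts at tokens[4] (rule R1)
print(skip_to(tokens, 0, ["<table>"]))  # None: SkipTo(<table>) fails on this document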
To deal with variations in the format of the documents, our extraction
rules allow the use of disjunctions. For example, if the names
of the recommended restaurants appear in bold, while the other ones
are displayed as italic, one can extract all the names based on the
disjunctive start and end rules
either SkipTo(<b>)
or SkipTo(<i>)
and
either SkipTo(</b>)
or SkipTo(Cuisine) SkipTo(</i>)
Disjunctive rules, which represent a special type of decision lists (Rivest,
1987), are ordered lists of individual disjuncts. Applying a disjunctive
rule is a straightforward process: the wrapper successively applies each
disjunct in the list until it finds the first one that matches (see more
details in the next section's footnote).
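Building on the skip_to() helper sketched above, the following sketch shows one way a wrapper could apply an ordered list of disjuncts, returning the result of the first disjunct that matches; the special handling of iteration rules described in the footnote is omitted.

def apply_disjunctive_rule(tokens, start, disjuncts):
    # Each disjunct is a linear LA given as an ordered list of landmarks; the first
    # disjunct whose landmarks all match (in order) determines the result.
    for disjunct in disjuncts:
        pos = start
        for landmark in disjunct:
            pos = skip_to(tokens, pos, landmark)
            if pos is None:
                break
        if pos is not None:
            return pos
    return None

# Start rule for the restaurant name: either SkipTo(<b>) or SkipTo(<i>).
name_start = apply_disjunctive_rule(tokens, 0, [[["<b>"]], [["<i>"]]])
print(name_start)   # 4, i.e. the rule stops right before "Yala"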
To illustrate how the extraction process works for list members,
consider the case where the wrapper has to extract all the area codes
from the sample document in Figure 3. In this case, the wrapper starts
by extracting the entire list of addresses, which can be done based on
the start rule SkipTo(<p><i>) and the end rule SkipTo(</i>). Then
the wrapper has to iterate through the content of the list of addresses (lines
2-6 in Figure 3) and to break it into individual tuples. In order to find
the start of each individual address, the wrapper starts from the first
token in the parent and repeatedly applies SkipTo(<i>) to the content
of the list (each successive rule-matching starts at the point where
the previous one ended). Similarly, the wrapper determines the end of
each Address tuple by starting from the last token in the parent and
repeatedly applying the end rule SkipTo(</i>). In our example, the
list iteration process leads to the creation of three individual addresses
that have the contents shown on the lines 2, 4, and 6, respectively.
Then the wrapper applies to each address the area-code start and end
rule (e.g., SkipTo( '(' ) and SkipTo( ')' ), respectively).
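The sketch below illustrates this list extraction and iteration process on a flattened version of the Figure 3 document; it reuses tokenize() and skip_to() from the earlier sketch, and the extract_between() helper is our own simplification of paired start/end rules.

# A flattened version of the Figure 3 document (two of the addresses).
doc3 = ("<p> Name:<b> Yala </b><p> Cuisine: Thai <p><i> 4000 Colfax, Phoenix, AZ 85258 (602) 508-1570 </i>"
        " <br> <i> 523 Vernon, Las Vegas, NV 89104 (702) 578-2293 </i>")
toks = tokenize(doc3)

def extract_between(tokens, pos, start_landmark, end_landmark):
    # Apply a SkipTo()-style start rule, then read up to (not including) the end landmark.
    begin = skip_to(tokens, pos, start_landmark)
    if begin is None:
        return None, None
    end = begin
    while end < len(tokens) and tokens[end:end + len(end_landmark)] != end_landmark:
        end += 1
    return tokens[begin:end], end

area_codes, pos = [], 0
while True:
    address, pos = extract_between(toks, pos, ["<i>"], ["</i>"])
    if address is None:
        break
    code, _ = extract_between(address, 0, ["("], [")"])
    if code is not None:
        area_codes.append("".join(code))
print(area_codes)   # ['602', '702']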
Now let us assume that instead of the area codes, the wrapper has
to extract the ZIP Codes. The list extraction and the list iteration
remain unchanged, but the ZIP Code extraction is more difficult because
there is no landmark that separates the state from the ZIP Code.
Even though in such situations the SkipTo() rules are not sufficiently
expressive, they can be easily extended to a more powerful extraction
language. For instance, we can use
R5 ::= SkipTo(,) SkipUntil(Number)
to extract the ZIP Code from the entire address. The argument of
SkipUntil() describes a prefix of the content of the item to be extracted,
and it is not consumed when the rule is applied (i.e., the rule stops immediately
before its occurrence). The rule R5 means "ignore all tokens
until you find the landmark ',', and then ignore everything until you
find, but do not consume, a number". Rules like R5 are extremely
useful in practice, and they represent only variations of our SkipTo()
rules (i.e., the last landmark has a special meaning). In order to keep the
presentation simple, the rest of the paper focuses mainly on SkipTo()
rules. When necessary, we will explain the way in which we handle the
SkipUntil() construct.
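A minimal sketch of the SkipUntil() semantics, reusing the earlier helpers; representing the Number wildcard by str.isdigit is a simplifying assumption.

def skip_until(tokens, start, matches):
    # Like SkipTo(), but stop immediately BEFORE the first token accepted by `matches`,
    # without consuming it (the landmark describes a prefix of the item itself).
    for i in range(start, len(tokens)):
        if matches(tokens[i]):
            return i
    return None

addr = tokenize("4000 Colfax, Phoenix, AZ 85258 (602) 508-1570")
pos = skip_to(addr, 0, [","])                    # SkipTo(,)
zip_start = skip_until(addr, pos, str.isdigit)   # SkipUntil(Number): stop before 85258
print(addr[zip_start])                           # '85258'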
The extraction rules presented in this section have two main advan-
tages. First of all, the hierarchical extraction based on the EC tree
allows us to wrap information sources that have arbitrary many levels
of embedded data. Second, as each node is extracted independently of
its siblings, our approach does not rely on there being a fixed ordering
of the items, and we can easily handle extraction tasks from documents
that may have missing items or items that appear in various
orders. Consequently, in the context of using an inductive algorithm
that generates the extraction rules, our approach turns an extremely
hard problem into several simpler ones: rather than finding a single
extraction rule that takes into account all possible item orderings and
becomes more complex as the depth of the EC tree increases, we create
several simpler rules that deal with the easier task of extracting each
item from its EC tree parent.
4. Extraction Rules as Finite Automata
We now introduce two key concepts that can be used to define extraction
rules: landmarks and landmark automata. In the rules described
in the previous section, each argument of a SkipTo() function is a
landmark, while a group of SkipTo() functions that must be applied in
a pre-established order represents a landmark automaton. In our frame-
work, a landmark is a sequence of tokens and wildcards (a wildcard
represents a class of tokens, as illustrated in the previous section, where
we used wildcards like Number and HtmlTag). Such landmarks are
interesting for two reasons: on one hand, they are sufficiently expressive
to allow efficient navigation within the EC structure of the documents,
and, on the other hand, as we will see in the next section, there is a
simple way to generate and refine them.
Landmark automata (LAs) are nondeterministic finite automata in
which each transition S_i -> S_j (i != j) is labeled by a landmark l_i,j;
that is, the transition S_i --l_i,j--> S_j takes place if the automaton is
in the state S_i and the landmark l_i,j matches the sequence of tokens at
the input. Linear landmark automata
are a class of LAs that have the following properties:
- a linear LA has a single accepting state;
- from each non-accepting state, there are exactly two possible transi-
tions: a loop to itself, and a transition to the next state;
- each non-looping transition is labeled by a landmark;
- looping transitions have the meaning "consume all tokens until
you encounter the landmark that leads to the next state".
The extraction rules presented in the previous section are ordered lists
of linear LAs. In order to apply such a rule to a given sequence of
tokens S, we apply the linear LAs to S in the order in which they
appear in the list. As soon as we find an LA that matches within S,
we stop the matching process 1 .
Disjunctive iteration rules are applied in a slightly different manner. As we
already said, iteration rules are applied repeatedly on the content of the whole
list. Consequently, by blindly selecting the first matching disjunct, there is a risk
of skipping over several tuples until we find the first tuple that can be extracted
based on that particular disjunct! In order to avoid such problems, a wrapper that
uses a disjunctive iteration rule R applies the first disjunct D in R that fulfills the
E2: 90 Colfax, <b> Palms </b>, Phone: ( 818 ) 508-1570
E3: 523 1st St., <b> LA </b>, Phone: 1-<b> 888 </b>-578-2293
E4: 403 La Tijera, <b> Watts </b>, Phone: ( 310 ) 798-0008
Figure 4. Four examples of restaurant addresses.
In the next section we present the stalker inductive algorithm that
generates rules that identify the start and end of an item x within its
parent p. Note that finding a start rule that consumes the prefix of p
with respect to x (for short Prefix_x(p)) is similar to finding an end
rule that consumes the suffix of p with respect to x (i.e., Suffix_x(p));
in fact, the only difference between the two types of rules consists of
how they are actually applied: the former starts by consuming the first
token in p and goes towards the last one, while the latter starts at the
last token in p and goes towards the first one. Consequently, without
any loss of generality, in the rest of this paper we discuss only the way
in which stalker generates start rules.
5. Learning Extraction Rules
The input to stalker consists of sequences of tokens representing the
prefixes that must be consumed by the induced rule. To create such
training examples, the user has to select a few sample pages and to use
a graphical user interface (GUI) to mark up the relevant data (i.e., the
leaves of the EC tree); once a page is marked up, the GUI generates the
sequences of tokens that represent the content of the parent p, together
with the index of the token that represents the start of x and uniquely
identifies the prefix to be consumed.
Before describing our rule induction algorithm, we will present an
illustrative example. Let us assume that the user marked the four area
codes from Figure 4 and invokes stalker on the corresponding four
training examples (that is, the prefixes of the addresses E1, E2, E3,
and E4 that end immediately before the area code). stalker, which
is a sequential covering algorithm, begins by generating a linear LA
following two criteria. First, D matches within the content of the list. Second, any
two disjuncts D1 and D2 in R that are applied in succession either fail to match, or
match later than D (i.e., one can not generate more tuples by using a combination
of two or more other disjuncts).
(remember that each such LA represents a disjunct in the final rule)
that covers as many as possible of the four positive examples. Then it
tries to create another linear LA for the remaining examples, and so
on. Once stalker covers all examples, it returns the disjunction of all
the induced LAs. In our example, the algorithm generates first the rule
D1 ::= SkipTo(()
which has two important properties:
- it accepts the positive examples in E2 and E4;
- it rejects both E1 and E3 because D1 can not be matched on them.
During a second iteration, the algorithm considers only the uncovered
examples E1 and E3, based on which it generates the rule
D2 ::= SkipTo(- <b>)
As there are no other uncovered examples, stalker returns the disjunctive
rule either D1 or D2.
To generate a rule that extracts an item x from its parent p, stalker
invokes the function LearnRule() (see Figure 5). This function takes
as input a list of pairs, where each pair consists of a sequence of tokens T_i
(the content of an instance of p) and the index of the token that represents
the start of x within p. Any prefix S of T_i that ends immediately before that
token (i.e., any instance of Prefix_x(p)) represents a positive example, while
any other sub-sequence or super-sequence of S represents a negative
example. stalker tries to generate a rule that accepts all positive
examples and rejects all negative ones.
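The labeled training data described above could be encoded as follows; the (tokens, start index) representation and the sample strings are illustrative, and tokenize() is reused from the earlier sketch.

# Illustrative encoding of stalker's training examples: (tokens_of_parent, start_index).
# The prefix tokens[:start_index] is the positive example the start rule must consume.
examples = []
for raw, item in [("90 Colfax, <b> Palms </b>, Phone: ( 818 ) 508-1570", "818"),
                  ("403 La Tijera, <b> Watts </b>, Phone: ( 310 ) 798-0008", "310")]:
    toks = tokenize(raw)
    examples.append((toks, toks.index(item)))

for toks, idx in examples:
    print(toks[:idx], "->", toks[idx])   # prefix to consume, first token of the item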
stalker is a typical sequential covering algorithm: as long as there
are some uncovered positive examples, it tries to learn a perfect disjunct
(i.e., a linear LA that accepts only true positives). When all the positive
examples are covered, stalker returns the solution, which consists
of an ordered list of all learned disjuncts. The ordering is performed
by the function OrderDisjuncts() and is based on a straightforward
heuristic: the disjuncts with fewer early and late matches should appear
first; in case of a tie, the disjuncts with more correct matches are
preferred to the other ones.
The function LearnDisjunct() is a greedy algorithm for learning
perfect disjuncts: it generates an initial set of candidates and repeatedly
selects and refines the best refining candidate until it either finds a
perfect disjunct, or runs out of candidates. Before returning a learned
disjunct, stalker invokes PostProcess(), which tries to improve the
quality of the rule (i.e., it tries to reduce the chance that the disjunct
will match a random sequence of tokens). This step is necessary because
during the refining process each disjunct is kept as general as possible
in order to potentially cover a maximal number of examples; once the
LearnRule(Examples)
- let RetVal be an empty rule
- WHILE there are uncovered examples in Examples DO
  - aDisjunct = LearnDisjunct(Examples)
  - remove all examples covered by aDisjunct
  - add aDisjunct to RetVal
- return OrderDisjuncts(RetVal)
LearnDisjunct(Examples)
- let the Seed in Examples be the shortest example
- Candidates = GenerateInitialCandidates(Seed)
- WHILE Candidates is not empty DO
  - let C be the best refining candidate and BestSolution the best solution so far
  - IF BestSolution is a perfect disjunct THEN exit the loop
  - remove C from Candidates and add Refine(C, Seed) to Candidates
- return PostProcess(BestSolution)
Refine(C, Seed)
- let C consist of the consecutive landmarks l_1, l_2, ..., l_n
- let m be the number of tokens in l_i
- FOR EACH token t in Seed DO
  - in a copy Q of C, add the 1-token landmark t between l_i and l_i+1
  - create one such rule for each wildcard that matches t
  - add all these new rules to TopologyRefs
- FOR EACH pair of tokens t_0 and t_m+1 that precede and follow l_i in Seed DO
  - in a copy P of C, replace l_i by t_0 l_i
  - in a copy Q of C, replace l_i by l_i t_m+1
  - create similar rules for each wildcard that matches t_0 and t_m+1
  - add both P and Q to LandmarkRefs
- return TopologyRefs and LandmarkRefs
Figure 5. The stalker algorithm.
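The following Python sketch mirrors the control flow of Figure 5; all helper functions are passed in as stand-ins, and the real stalker bookkeeping (match statistics, candidate ordering) is richer than shown.

def learn_rule(examples, learn_disjunct, covers, order_disjuncts):
    # Sequential covering: learn one perfect disjunct at a time until every example is covered.
    rule, uncovered = [], list(examples)
    while uncovered:
        disjunct = learn_disjunct(uncovered)
        covered = [e for e in uncovered if covers(disjunct, e)]
        if not covered:                      # safety guard for this sketch
            break
        uncovered = [e for e in uncovered if e not in covered]
        rule.append(disjunct)
    return order_disjuncts(rule)

def learn_disjunct(examples, seed_of, initial_candidates, refine,
                   is_perfect, best_refiner, best_solution, post_process):
    # Greedy search: refine the best refining candidate generated from the shortest
    # uncovered example until a perfect disjunct is found or no candidates remain.
    seed = seed_of(examples)
    candidates = initial_candidates(seed)
    solution = None
    while candidates:
        solution = best_solution(candidates, solution, examples)
        if solution is not None and is_perfect(solution, examples):
            break
        refiner = best_refiner(candidates, examples)
        candidates.remove(refiner)
        candidates.extend(refine(refiner, seed))
    return post_process(solution)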
refining ends, we post-process the disjunct in order to minimize its
potential interference with other disjuncts 2 .
Both the initial candidates and their refined versions are generated
based on a seed example, which is the shortest uncovered example (i.e.,
the example with the smallest number of tokens in Prefix_x(p)). For
each token t that ends the seed example and for each wildcard w_i that
"matches" t, stalker creates an initial candidate that is a 2-state LA.
In each such automaton, the transition S_0 -> S_1 is labeled by a landmark
that is either t or one of the wildcards w_i. The rationale behind
this choice is straightforward: as disjuncts have to completely consume
each positive example, it follows that any disjunct that consumes a
t-ended prefix must end with a landmark that consumes the trailing t.
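A sketch of GenerateInitialCandidates() as just described: one single-landmark candidate for the token t that ends the seed prefix and one for each wildcard that matches t. The wildcard set shown is only a subset, and the list-of-landmarks representation follows the earlier sketches.

WILDCARDS = {
    "Numeric":     lambda t: t.isdigit(),
    "HtmlTag":     lambda t: t.startswith("<") and t.endswith(">"),
    "Punctuation": lambda t: len(t) == 1 and not t.isalnum(),
    "Anything":    lambda t: True,
}

def generate_initial_candidates(seed_prefix):
    # The seed prefix is the token sequence the rule must consume; its last token t
    # becomes a 1-token landmark, as does every wildcard that matches t.
    t = seed_prefix[-1]
    candidates = [[[t]]]                      # SkipTo(t): one disjunct with one landmark
    for name, matches in WILDCARDS.items():
        if matches(t):
            candidates.append([[name]])       # SkipTo(wildcard)
    return candidates

print(generate_initial_candidates(["90", "Colfax", ",", "<b>"]))
# [[['<b>']], [['HtmlTag']], [['Anything']]]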
Before describing the actual refining process, let us present the main
intuition behind it. If we reconsider now the four training examples
in
Figure 4, we see that stalker starts with the initial candidate
SkipTo((), which is a perfect disjunct; consequently, stalker removes
the covered examples (E2 and E4) and generates the new initial candidate
R0 ::= SkipTo(<b>). Note that R0 matches early in both uncovered
examples E1 and E3 (that is, it does not consume the whole
Prefix_x(p)), and, even worse, it also matches within the two already
covered examples! In order to obtain a better disjunct, stalker refines
R0 by adding more terminals to it. During the refining process, we
search for new candidates that consume more tokens from the prefixes
of the uncovered examples and fail on all other examples. By adding
more terminals to a candidate, we hope that its refined versions will
eventually turn the early matches into correct ones, while the late
matches 3 , together with the ones on the already covered examples, will
become failed matches. This is exactly what happens when we refine
R0 into the new rule R2: the refined rule does not match anymore
on E2 and E4, and R0's early matches on E1 and E3 become correct
matches for R2.
We perform three types of post processing operations: replacing wildcards
with tokens, merging landmarks that match immediately after each other, and
adding more tokens to the short landmarks (e.g., SkipTo(<b>) is likely to match
in most html documents, while SkipTo(Maritime Claims : <b>) matches in significantly
fewer). The last operation has a marginal influence because it improves
the accuracies of only three of the rules discussed in Section 7.
3 As explained in Section 3, a disjunct D that consumes more tokens than
Prefixx(p) is called a late match on p. It is easy to see that by adding more terminals
to D we can not turn it into an early or a correct match (any refined version of D is
guaranteed to consume at least as many tokens as D itself). Consequently, the only
hope to avoid an incorrect match of D on p is to keep adding terminals until it fails
to match on p.
The Refine() function in Figure 5 tries to obtain (potentially) better
disjuncts either by making its landmarks more specific (landmark
refinements), or by adding new states in the automaton (topology re-
finements). In order to perform a refinement, stalker uses a refining
terminal, which can be either a token or a wildcard (besides the nine
predefined wildcards Anything, Numeric, AlphaNumeric, Alphabetic,
Capitalized, AllCaps, HtmlTag, NonHtml, and Punctuation, stalker
can also use domain specific wildcards that are defined by the user). A
straightforward way to generate the refining terminals consists of using
all the tokens in the seed example, together with the wildcards that
match them. 4 .
Given a disjunct D, a landmark l from D, and a refining terminal t,
a landmark refinement makes l more specific by concatenating t either
at the beginning or at the end of l. By contrast, a topology refinement
adds a new state S and leaves the existing landmarks unchanged. For
instance, if D has a transition A --l--> B
(i.e., the transition from A to B is labeled by the landmark l), then
given a refining terminal t, a topology refinement creates a new disjunct
in which the transition above is replaced by
A --t--> S --l--> B.
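The two refinement operators can be sketched as follows on the same list-of-landmarks representation used in the earlier sketches; placing the new landmark in front of the refined one follows the topology refinements shown in Figure 7, and the selection of refining terminals is omitted.

def landmark_refinements(disjunct, landmark_index, terminal):
    # Make one landmark more specific by concatenating the terminal at either end.
    before, after = [list(l) for l in disjunct], [list(l) for l in disjunct]
    before[landmark_index] = [terminal] + before[landmark_index]
    after[landmark_index] = after[landmark_index] + [terminal]
    return [before, after]

def topology_refinements(disjunct, landmark_index, terminal):
    # Add a new state: insert a fresh 1-token landmark in front of the given landmark.
    refined = [list(l) for l in disjunct]
    refined.insert(landmark_index, [terminal])
    return [refined]

r0 = [["<b>"]]                                   # R0 ::= SkipTo(<b>)
print(landmark_refinements(r0, 0, "-"))          # [[['-', '<b>']], [['<b>', '-']]]
print(topology_refinements(r0, 0, "Phone"))      # [[['Phone'], ['<b>']]], i.e. SkipTo(Phone) SkipTo(<b>)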
As one might have noted already, LearnDisjunct() uses different
heuristics for selecting the best refining candidate and the best current
solution, respectively. This fact has a straightforward explanation: as
long as we try to further refine a candidate, we do not care how well
it performs the extraction task. In most of the cases, a good refining
candidate matches early on as many as possible of the uncovered ex-
amples; once a refining candidate extracts correctly from some of the
training examples, any further refinements are used mainly to make it
fail on the examples on which it still matches incorrectly.
Both sets of heuristics are described in Figure 6. As we already said,
GetBestRefiner() prefers candidates with a larger potential coverage
(i.e., as many as possible early and correct matches). At equal coverage,
it prefers a candidate with more early matches because, at the intuitive
level, we prefer the most "regular" features in a document: a candidate
4 In the current implementation, stalker uses a more efficient approach: for the
refinement of a landmark l, we use only the tokens from the seed example that are
located after the point where l currently matches within the seed example.
GetBestRefiner() prefers candidates that have:    GetBestSolution() prefers candidates that have:
- larger coverage                                 - more correct matches
- more early matches                              - more failures to match
- more failed matches                             - fewer tokens in SkipUntil()
- fewer wildcards                                 - fewer wildcards
- shorter unconsumed prefixes                     - longer end-landmarks
- fewer tokens in SkipUntil()                     - shorter unconsumed prefixes
- longer end-landmarks
Figure 6. The stalker heuristics.
that has only early matches is based on a regularity shared by all
examples, while a candidate that also has some correct matches creates
a dichotomy between the examples on which the existing landmarks
work perfectly and the other ones. In case of a tie, stalker selects the
disjunct with more failed matches because the alternative would be late
matches, which will have to be eventually turned into failed matches by
further refinements. All things being equal, we prefer candidates that
have fewer wildcards (a wildcard is more likely than a token to match by
pure chance), fewer unconsumed tokens in the covered prefixes (after
all, the main goal is to fully consume each prefix), and fewer tokens
from the content of the slot to be extracted (the main assumption in
wrapper induction is that all documents share the same underlying
structure; consequently, we prefer extraction rules based on the document
template to the ones that rely on the structure of a particular
slot). Finally, the last heuristic consists of selecting the candidate that
has longer landmarks closer to the item to be extracted; that is, we
prefer more specific "local context" landmarks.
In order to pick the best current solution, stalker uses a different
set of criteria. Obviously, it starts by selecting the candidate with the
most correct matches. If there are several such disjuncts, it prefers the
one that fails to match on most of the remaining examples (remem-
ber that the alternatives, early or late matches, represent incorrect
matches!). In case of a tie, for reasons similar to the ones cited above,
we prefer candidates that have fewer tokens from the content of the
item, fewer wildcards, longer landmarks closer to the item's content,
and fewer unconsumed tokens in the covered prefixes (i.e., in case of
incorrect match, the result of the extraction contains fewer irrelevant
tokens).
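One way to realize the two preference orderings of Figure 6 is to rank candidates by tuples of the corresponding statistics; the stats() callback is a stand-in for stalker's bookkeeping and is not part of the original description.

def best_refiner(candidates, stats):
    # stats(c) is assumed to return a dict with the counts named below.
    def key(c):
        s = stats(c)
        return (s["early"] + s["correct"],   # larger coverage
                s["early"],                   # more early matches
                s["failed"],                  # more failed matches
                -s["wildcards"],              # fewer wildcards
                -s["unconsumed"],             # shorter unconsumed prefixes
                -s["skipuntil_tokens"],       # fewer tokens in SkipUntil()
                s["end_landmark_len"])        # longer end-landmarks
    return max(candidates, key=key)

def best_solution(candidates, stats):
    def key(c):
        s = stats(c)
        return (s["correct"], s["failed"], -s["skipuntil_tokens"],
                -s["wildcards"], s["end_landmark_len"], -s["unconsumed"])
    return max(candidates, key=key)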
Finally, stalker can be easily extended such that it also uses
SkipUntil() constructs. The rule refining process remains unchanged
(after all, SkipUntil() changes only the meaning of the last landmark
in a disjunct), and the only modification involves
GenerateInitialCandidates(). More precisely, for each terminal t that matches the
first token in an instance of x (including the token itself), stalker
also generates the initial candidates SkipUntil(t).
6. Example of Rule Induction
Let us consider again the restaurant addresses from Figure 4. In order
to generate an extraction rule for the area-code, we invoke stalker
with the training examples {E1, E2, E3, E4}. During the first iteration,
LearnDisjunct() selects the shortest prefix, E2, as seed example. The
last token to be consumed in E2 is "(", and there are two wildcards that
match it: Punctuation and Anything; consequently, stalker creates
three initial candidates:
R1 ::= SkipTo(()
R2 ::= SkipTo(Punctuation)
R3 ::= SkipTo(Anything)
As R1 is a perfect disjunct 5 , LearnDisjunct() returns R1 and the
first iteration ends.
During the second iteration, LearnDisjunct() is invoked with the
uncovered training examples {E1, E3}; the new seed example is E1,
and stalker creates again three initial candidates:
R4 ::= SkipTo(<b>)
R5 ::= SkipTo(HtmlTag)
R6 ::= SkipTo(Anything)
As all three initial candidates match early in all uncovered examples,
stalker selects R4 as the best possible refiner because it uses no wildcards
in the landmark. By refining R4, we obtain the three landmark
refinements
R7 ::= SkipTo(- <b>)
R8 ::= SkipTo(Punctuation <b>)
R9 ::= SkipTo(Anything <b>)
R10: SkipTo(Venice) SkipTo(<b>)        R17: SkipTo(Numeric) SkipTo(<b>)
R12: SkipTo(:) SkipTo(<b>)             R19: SkipTo(HtmlTag) SkipTo(<b>)
R13: SkipTo(-) SkipTo(<b>)             R20: SkipTo(AlphaNum) SkipTo(<b>)
R14: SkipTo(,) SkipTo(<b>)             R21: SkipTo(Alphabetic) SkipTo(<b>)
R15: SkipTo(Phone) SkipTo(<b>)         R22: SkipTo(Capitalized) SkipTo(<b>)
R24: SkipTo(Anything) SkipTo(<b>)
Figure 7. All 21 topology refinements of R4.
along with the 21 topology refinements shown in Figure 7.
At this stage, we have already generated several perfect disjuncts:
R7, R11, R12, R13, R15, R16, and R19. They all match correctly
on E1 and E3, and fail to match on E2 and E4; however, stalker
dismisses R19 because it is the only one using wildcards in its land-
marks. Of the remaining six candidates, R7 represents the best solution
because it has the longest end landmark (all other disjuncts end with a
1-token landmark). Consequently, LearnDisjunct() returns R7, and
because there are no more uncovered examples, stalker completes its
execution by returning the disjunctive rule either R1 or R7.
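Assuming the reconstruction of R7 given above, the learned start rule can be applied with the helpers from the Section 3 sketches; the example below extracts the area code from both phone formats.

# Applying the learned start rule (either R1 or R7); reuses tokenize() and apply_disjunctive_rule().
area_code_rule = [[["("]],              # R1 ::= SkipTo(()
                  [["-", "<b>"]]]       # R7 ::= SkipTo(- <b>), per the reconstruction above
for addr in ["90 Colfax, <b> Palms </b>, Phone: ( 818 ) 508-1570",
             "523 1st St., <b> LA </b>, Phone: 1-<b> 888 </b>-578-2293"]:
    toks = tokenize(addr)
    start = apply_disjunctive_rule(toks, 0, area_code_rule)
    print(toks[start])                  # '818' and '888'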
7. Experimental Results
In order to evaluate stalker's capabilities, we tested it on the information
sources that were used as application domains by wien (Kush-
merick, 1997), which was the first wrapper induction system 6 . To make
the comparison between the two systems as fair as possible, we did
not use any domain specific wildcards, and we tried to follow the exact
experimental conditions used by Kushmerick. For all 21 sources
for which wien had labeled examples, we used the exact same data;
for the remaining 9 sources, we worked closely with Kushmerick to
reproduce the original wien extraction tasks. Furthermore, we also
used wien's experimental setup: we start with one randomly chosen
training example, learn an extraction rule, and test it against all the
unseen examples. We repeated these steps times, and we average
the number of test examples that are correctly extracted. Then we
5 Remember that a perfect disjunct correctly matches at least one example (e.g.,
E2 and E4) and rejects all other ones.
6 All these collections of sample documents, together with a detailed description
of each extraction task, can be obtained from the RISE repository, which is located
at http://www.isi.edu/muslea/RISE/index.html.
repeated the same procedure with 2, 3, ..., and 10 training examples.
As opposed to wien, we do not train on more than 10 examples because
we noticed that, in practice, a user rarely has the patience of labeling
more than 10 training examples.
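The evaluation protocol just described can be summarised by a short driver loop; learn_rule and extracts_correctly below are placeholders for the induction algorithm and the per-example correctness test, and are not part of the published experimental code.

```python
import random

def average_accuracy(examples, n_train, n_repeats, learn_rule, extracts_correctly):
    """Average fraction of unseen examples correctly extracted.
    `learn_rule` and `extracts_correctly` are placeholders; assumes
    n_train < len(examples)."""
    accuracies = []
    for _ in range(n_repeats):
        shuffled = random.sample(examples, len(examples))
        train, test = shuffled[:n_train], shuffled[n_train:]
        rule = learn_rule(train)
        correct = sum(extracts_correctly(rule, ex) for ex in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / len(accuracies)

# The same driver is then run for n_train = 1, 2, ..., 10.
```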
This section has four distinct parts. We begin with an overview of
the performance of stalker and wien over the test domains, and we
continue with an analysis of stalker's ability to learn list extraction
and iteration rules, which are key components in our approach to hierarchical
wrapper induction. Then we compare and contrast stalker and
wien based on the number of examples required to wrap the sources,
and we conclude with the main lessons drawn from this empirical
evaluation.
7.1. Overall Comparison of stalker and wien
The data in Table I provides an overview of the two systems' performance
over the sources. The first four columns contain the source
name, whether or not the source has missing items or items that may
appear in various orders, and the number of embedded lists in the EC
tree. The next two columns specify how well the two systems performed:
whether they wrapped the source perfectly, imperfectly, or completely
failed to wrap it. For the time being, let us ignore the last two columns
in the table.
In order to better understand the data from Table I, we have to
briefly describe the type of wrappers that wien generates (a more
technical discussion is provided in the next section). wien uses a fairly
simple extraction language: it does not allow the use of wildcards
and disjunctive rules, and the items in each k-tuple are assumed to
be always present and to always appear in the same order. Based on
the assumptions above, wien learns a unique multi-slot extraction rule
that extracts all the items in a k-tuple at the same time (by contrast,
stalker generates several single-slot rules that extract each item independently
of its siblings in the k-tuple). For instance, in order to
extract all the addresses and area codes from the document in Figure 3,
a hypothetical wien rule does the following: it ignores all characters
until it finds the string "<p><i>" and extracts as Address everything
until it encounters a "(". Then it immediately starts extracting the
AreaCode, which ends at ")". After extracting such a 2-tuple, the rule
is applied again until it does not match anymore.
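A rough sketch of such a multi-slot LR extraction loop is given below; it hard-codes the delimiters of the hypothetical rule described above and is only meant to illustrate the contrast with stalker's single-slot rules, not wien's actual wrapper classes.

```python
def wien_like_extract(page):
    """Repeatedly extract (Address, AreaCode) 2-tuples with fixed delimiters,
    mimicking the hypothetical wien rule described above."""
    tuples, pos = [], 0
    while True:
        start = page.find('<p><i>', pos)
        if start == -1:
            break                       # rule no longer matches: stop
        start += len('<p><i>')
        open_paren = page.find('(', start)
        if open_paren == -1:
            break
        close_paren = page.find(')', open_paren + 1)
        if close_paren == -1:
            break
        address = page[start:open_paren].strip()
        area_code = page[open_paren + 1:close_paren]
        tuples.append((address, area_code))
        pos = close_paren + 1           # apply the rule again on the rest
    return tuples
```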
Out of the 30 sources, wien wraps perfectly 18 of them, and completely
fails on the remaining 12. These complete failures have a straight-forward
explanation: if there is no perfect wrapper in wien's language
(because, say, there are some missing items), the inductive algorithm
Table I. Test domains for wien and stalker: a dash denotes failure, while
p and ' mean perfectly and imperfectly wrapped, respectively.
(Columns: SRC, Miss, Perm, Embd, wien, stalker, ListExtr, ListIter;
the table body is not reproduced here.)
does not even try to generate an imperfect rule. It is important to
note that wien fails to wrap all sources that include embedded lists
(remember that embedded lists are at least two levels deep) or items
that are missing or appear in various orders.
On the same test domains, stalker wraps perfectly 20 sources and
learns 8 additional imperfect wrappers. Out of these last 8 sources,
in 4 cases stalker generates "high quality" wrappers (i.e., wrappers
in which most of the rules are 100% accurate, and no rule has an
accuracy below 90%). Finally, two of the sources, S21 and S29,
cannot be wrapped by stalker. 7 In order to wrap all 28 sources,
stalker induced 206 different rules, out of which 182 (i.e., more than
88%) had 100% accuracy, and another 18 were at least 90% accurate;
in other words, only six rules, which represent 3% of the total, had
an accuracy below 90%. Furthermore, as we will see later, the perfect
rules were usually induced based on just a couple of training examples.
7.2. Learning List Extraction and Iteration Rules
As opposed to wien, which performs an implicit list iteration by repeatedly
applying the same multi-slot extraction rule, stalker learns
explicit list extraction and iteration rules that allow us to navigate
within the EC tree. These types of rules are crucial to our approach
because they allow us to decompose a difficult wrapper induction problem
into several simpler ones in which we always extract one individual
item from its parent. To estimate stalker's performance, we have to
analyze its performance at learning the list extraction and list
iteration rules that appeared in the 28 test domains above.
The results are shown in the last two columns of Table I, where
we provide the number of training examples and the accuracy for each
such rule. Note that there are some sources, like S16, that have no lists
at all. At the other end of the spectrum, there are several sources that
include two lists 8 .
7 The documents in S21 are difficult to wrap because they include a heterogeneous
list (i.e., the list contains elements of several types). As each type of element uses a
different kind of layout, the iteration task is extremely difficult. The second source,
S29, raises a different type of problem: some of the items have just a handful of
occurrences in the collection of documents, and, furthermore, about half of them
represent various types of formatting/semantic errors (e.g., the date appearing in
the location of the price slot, and the actual date slot remaining empty). Under
these circumstances, we decided to declare this source unwrappable by stalker.
8 For sources with multiple lists, we present the data in two different ways. If
all the learned rules are perfect, the results appear on the same table line (e.g., for
S7, the list extraction rules required 6 and 1 examples, respectively, while the list
iteration rules required 2 and 7 examples, respectively). If at least one of the rules
The results are extremely encouraging: only one list extraction and
two list iteration rules were not learned with a 100% accuracy, and
all these imperfect rules have accuracies above 90%. Furthermore, out
of the 72 rules, 50 of them were learned based on a single training
example! As induction based on a single example is quite unusual in
machine learning, it deserves a few comments. stalker learns a perfect
rule based on a single example whenever one of the initial candidates
is a perfect disjunct. Such situations are frequent in our framework because
the hierarchical decomposition of the problem makes most of the
subproblems (i.e., the induction of the individual rules) straightforward.
In final analysis, we can say that independently of how difficult it is to
induce all the extraction rules for a particular source, the list extraction
and iteration rules can be usually learned with a 100% accuracy based
on just a few examples.
7.3. Efficiency Issues
In order to easily compare wien's and stalker's requirements in terms
of the number of training examples, we divided the sources above in
three main groups:
- sources that can be perfectly wrapped by both systems (Table II)
- sources that can be wrapped perfectly only by one system (Tables III
and IV)
- sources on which wien fails completely, while stalker generates
imperfect wrappers (Table V).
For each source that wien can wrap (see Tables II and IV), we
provide two pieces of information: the number of training pages required
by wien to generate a correct wrapper, and the total number of
item occurrences that appear in those pages. The former is taken from
(Kushmerick, 1997) and represents the smallest number of completely
labeled training pages required by one of the six wrapper classes that
can be generated by wien. The latter was obtained by multiplying the
number above by the average number of item occurrences per page,
computed over all available documents.
For each source that stalker wrapped perfectly, we report four
pieces of informations: the minimum, maximum, mean, and median
number of training examples (i.e., item occurrences) that were required
has an accuracy below 100%, the data for the different lists appear on successive
lines (see, for instance, the source S9).
Table II. Sources Wrapped Perfectly by Both Systems (number of examples).
SRC    wien Docs   wien Exs   stalker Min   Max   Mean   Median
S5     2.0         14.4       1.0           3.0   1.5    1.0
S8     2.0         43.6       1.0           2.0   1.2    1.0
S22    2.0         200.0      1.0           1.0   1.0    1.0
       5.3         15.9       1.0           9.0   2.4    1.0
(Remaining rows are not reproduced here.)
to generate a correct rule 9 . For the remaining 8 sources from Tables IV
and V, we present an individual description for each learned rule by
providing the reached accuracy and the required number of training
examples.
By analyzing the data from Table II, we can see that for the
sources that both systems can wrap correctly, stalker requires up to
two orders of magnitude fewer training examples. stalker requires no
more than 9 examples for any rule in these sources, and for more than
half of the rules it can learn perfect rules based on a single example
(similar observations can be made for the four sources from Table III).
9 We present the empirical data for the perfectly wrapped sources in such a
compact format because it is more readable than a huge table that provides detailed
information for each individual rule. Furthermore, as 19 of the 20 sources from
Tables
II and III have a median number of training examples equal to one, it follows
that more than half of the individual item data would read "item X required a single
training example to generate a 100% accurate rule."
Table III. Sources on which wien fails completely, while stalker wraps
them perfectly (columns: SRC; wien; stalker Min, Max, Mean, Median number
of examples; the table body is not reproduced here).
Table IV. Sources on which wien outperforms stalker.
SRC   wien Docs   wien Exs   stalker Task    Accuracy   Exs
                             Product         92%        10
                             Manufacturer    100%       3
(The SRC and wien entries of these rows are not reproduced here.)
As the main bottleneck in wrapper induction consists of labeling the
training data, the advantage of stalker becomes quite obvious.
Table
IV reveals that despite its advantages, stalker may learn
imperfect wrappers for sources that pose no problems to wien. The explanation
is quite simple and is related to the different ways in which the
two systems define a training example: wien's examples are entire doc-
uments, while stalker uses fragments of pages (each parent of an item
Table V. Sources on which wien fails, and stalker wraps imperfectly.
SRC Task Accur. Exs SRC Task Accur. Exs
ListIter 100% 1 ZIP 100% 1
Price 100% 1 Country 100% 1
Airline 100% 1 Phone 100% 1
Flight 100% 1
ArriveCode 100% 2 ListExtr 100% 1
DepartTime 100% 3 ListIter 100% 8
Alt. Name 100% 1
Image 100% 6 Price 97% 10
Translat. Artist 100% 1
is a fragment of a document). This means that for sources in which each
document contains all possible variations of the main format, wien is
guaranteed to see all possible variations! On the other hand, stalker
has practically no chance of having all these variations in each randomly
chosen training set. Consequently, whenever stalker is trained only
on a few variations, it will generate an imperfect rule. In fact, the
different types of training examples lead to an interesting trade-off: by
using only fragments of documents, stalker may learn perfect rules
based on significantly fewer examples than wien. On the other hand,
there is a risk that stalker may induce imperfect rules; we plan to fix
this problem by using active learning techniques (RayChaudhuri and
Hamey, 1997) to identify all possible types of variations.
Finally, in Table V we provide detailed data about the learned rules
for the six most difficult sources. Besides the problem mentioned above,
which leads to several rules of 99% accuracy, these sources also contain
missing items and items that may appear in various orders. Out of
the 62 rules learned by stalker for these six sources, 42 are perfect
and another 14 have accuracies above 90%. Sources like S6 and S9
emphasize another advantage of the stalker approach: one can label
just a few training examples for the rules that are easier to learn, and
then focus on providing additional examples for the more difficult ones.
7.4. Lessons
Based on the results above, we can draw several important conclusions.
First of all, compared with wien, stalker has the ability to wrap a
larger variety of sources. Even though not all the induced wrappers are
perfect, an imperfect, high accuracy wrapper is to be preferred to no
wrapper at all.
Second, stalker is capable of learning most of the extraction rules
based on just a couple of examples. This is a crucial feature because
from the user's perspective it makes the wrapper induction process
both fast and painless. Our hierarchical approach to wrapper induction
played a key role at reducing the number of examples: on one hand,
we decompose a hard problem into several easier ones, which, in turn,
require fewer examples. On the other hand, by extracting the items
independently of each other, we can label just a few examples for the
items that are easy to extract (as opposed to labeling every single
occurrence of each item in each training page).
Third, by using single-slot rules, we do not allow the harder items to
affect the accuracy of the ones that are easier to extract. Consequently,
even for the most difficult sources, stalker is typically capable of
learning perfect rules for several of the relevant items.
Last but not least, the fact that even for the hardest items stalker
usually learns a correct rule (in most of the cases, the lower accuracies
come from averaging correct rules with erroneous ones) means that
we can try to improve stalker's behavior based on active learning
techniques that would allow the algorithm to select the few relevant
cases that would lead to a correct rule.
8. Related Work
Research on learning extraction rules has occurred mainly in two con-
texts: creating wrappers for information agents and developing general
purpose information extraction systems for natural language text. The
former are primarily used for semistructured information sources, and
their extraction rules rely heavily on the regularities in the structure of
the documents; the latter are applied to free text documents and use
extraction patterns that are based on linguistic constraints.
With the increasing interest in accessing Web-based information
sources, a significant number of research projects depend on wrappers
to retrieve the relevant data. A wide variety of languages have been developed
for manually writing wrappers (i.e., where the extraction rules
are written by a human expert), from procedural languages (Atzeni
and Mecca, 1997) and Perl scripts (Cohen, 1998) to pattern matching
(Chawathe et al., 1994) and LL(k) grammars (Chidlovskii et al.,
1997). Even though these systems offer fairly expressive extraction lan-
guages, the manual wrapper generation is a tedious, time consuming
task that requires a high level of expertise; furthermore, the rules have
to be rewritten whenever the sources suffer format changes. In order to
help the users cope with these difficulties, Ashish and Knoblock (Ashish
and Knoblock, 1997) proposed an expert system approach that uses a
fixed set of heuristics of the type "look for bold or italicized strings."
The wrapper induction techniques introduced in wien (Kushmerick,
1997) are a better fit to frequent format changes because they rely on
learning techniques to generate the extraction rules. Compared to the
manual wrapper generation, Kushmerick's approach has the advantage
of dramatically reducing both the time and the effort required to wrap a
source; however, his extraction language is significantly less expressive
than the ones provided by the manual approaches. In fact, the wien
extraction language can be seen as non-disjunctive stalker rules that
use just a single SkipTo() and do not allow the use of wildcards. There
are several other important differences between stalker and wien.
First, as wien learns the landmarks by searching common prefixes at
the character level, it needs more training examples than stalker.
Second, wien cannot wrap sources in which some items are missing or
appear in various orders. Last but not least, stalker can handle EC
trees of arbitrary depths, while wien's approach to nested documents
turned out to be impractical: even though Kushmerick was able to
manually write 19 perfect "nested" wrappers, none of them could be
learned by wien.
SoftMealy (Hsu and Dung, 1998) uses a wrapper induction algorithm
that generates extraction rules expressed as finite transducers.
The SoftMealy rules are more general than the wien ones because
they use wildcards and they can handle both missing items and items
appearing in various orders. Intuitively, SoftMealy's rules are similar to
the ones used by stalker, except that each disjunct is either a single
SkipTo() or a SkipTo()SkipUntil() in which the two landmarks must
match immediately after each other. As SoftMealy uses neither multiple
SkipTo()s nor multiple SkipUntil()s, it follows that its extraction
rules are strictly less expressive than stalker's. Finally, SoftMealy
has one additional drawback: in order to deal with missing items and
various orderings of items, SoftMealy may have to see training examples
that include each possible ordering of the items.
In contrast to information agents, most general purpose information
extraction systems are focused on unstructured text, and therefore
the extraction techniques are based on linguistic constraints. However,
there are three such systems that are somewhat related to stalker:
whisk (Soderland, 1999), Rapier (Califf and Mooney, 1999), and
srv (Freitag, 1998). The extraction rules induced by Rapier and srv
can use the landmarks that immediately precede and/or follow the item
to be extracted, while whisk is capable of using multiple landmarks.
But, similarly to stalker and unlike whisk, Rapier and srv extract
a particular item independently of the other relevant items. It follows
that whisk has the same drawback as SoftMealy: in order to handle
correctly missing items and items that appear in various orders, whisk
must see training examples for each possible ordering of the items.
None of these three systems can handle embedded data, though all use
powerful linguistic constraints that are beyond stalker's capabilities.
9. Conclusions and Future Work
The primary contribution of our work is to turn a potentially hard
problem - learning extraction rules - into a problem that is extremely
easy in practice (i.e., typically very few examples are required). The
number of required examples is small because the EC description of
a page simplifies the problem tremendously: as the Web pages are
intended to be human readable, the EC structure is generally reflected
by actual landmarks on the page. stalker merely has to find the
landmarks, which are generally in the close proximity of the items to
be extracted. In other words, the extraction rules are typically very
small, and, consequently, they are easy to induce.
We plan to continue our work on several directions. First, we plan to
use unsupervised learning in order to narrow the landmark search-space.
Second, we would like to use active learning techniques to minimize
the amount of labeling that the user has to perform. Third, we plan to
provide PAC-like guarantees for stalker.
Acknowledgments
This work was supported in part by USC's Integrated Media Systems
Center (IMSC) - an NSF Engineering Research Center, by the National
Science Foundation under grant number IRI-9610014, by the U.S. Air
Force under contract number F49620-98-1-0046, by the Defense Logistics
Agency, DARPA, and Fort Huachuca under contract number
DABT63-96-C-0066, and by research grants from NCR and General
Dynamics Information Systems. The views and conclusions contained
in this paper are the authors' and should not be interpreted as representing
the official opinion or policy of any of the above organizations
or any person connected with them.
--R
Journal of Intelligent Systems
--TR
Cut and paste
A Web-based information system that reasons with structured collections of text
Modeling Web sources for information integration
Information extraction from HTML
Generating finite-state transducers for semi-structured data extraction from the Web
Learning Information Extraction Rules for Semi-Structured and Free Text
Relational learning of pattern-match rules for information extraction
Learning Decision Lists
Wrapper Generation for Internet Information Sources
Wrapper induction for information extraction
--CTR
Exploiting structural similarity for effective Web information extraction, Data & Knowledge Engineering, v.60 n.1, p.222-234, January, 2007
Retrieving and Semantically Integrating Heterogeneous Data from the Web, IEEE Intelligent Systems, v.19 n.3, p.72-79, May 2004
Sneha Desai , Craig A. Knoblock , Yao-Yi Chiang , Kandarp Desai , Ching-Chien Chen, Automatically identifying and georeferencing street maps on the web, Proceedings of the 2005 workshop on Geographic information retrieval, November 04-04, 2005, Bremen, Germany
Benjamin Habegger , Mohamed Quafafou, Context Generalization for Information Extraction from the Web, Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.720-723, September 20-24, 2004
Craig A. Knoblock , Steven Minton , Jos Luis Ambite , Maria Muslea , Jean Oh , Martin Frank, Mixed-initiative, multi-source information assistants, Proceedings of the 10th international conference on World Wide Web, p.697-707, May 01-05, 2001, Hong Kong, Hong Kong
Sandip Debnath , Prasenjit Mitra , C. Lee Giles, Automatic extraction of informative blocks from webpages, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Fast Detection of XML Structural Similarity, IEEE Transactions on Knowledge and Data Engineering, v.17 n.2, p.160-175, February 2005
Shou-de Lin , Craig A. Knoblock, SERGEANT: A framework for building more flexible web agents by exploiting a search engine, Web Intelligence and Agent System, v.3 n.1, p.1-15, January 2005
Benjamin Habegger , Mohamed Quafafou, Building Web Information Extraction Tasks, Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.349-355, September 20-24, 2004
Craig A. Knoblock , Kristina Lerman , Steven Minton , Ion Muslea, Accurately and reliably extracting data from the Web: a machine learning approach, Intelligent exploration of the web, Physica-Verlag GmbH, Heidelberg, Germany,
Juliano Palmieri Lage , Altigran S. da Silva , Paulo B. Golgher , Alberto H. F. Laender, Automatic generation of agents for collecting hidden web pages for data extraction, Data & Knowledge Engineering, v.49 n.2, p.177-196, May 2004
Sergio Flesca , Giuseppe Manco , Elio Masciari , Eugenio Rende , Andrea Tagarelli, Web wrapper induction: a brief survey, AI Communications, v.17 n.2, p.57-61, April 2004
Wai-Yip Lin , Wai Lam, Learning to extract hierarchical information from semi-structured documents, Proceedings of the ninth international conference on Information and knowledge management, p.250-257, November 06-11, 2000, McLean, Virginia, United States
Paul Mulholland , Trevor Collins , Zdenek Zdrahal, Story fountain: intelligent support for story research and exploration, Proceedings of the 9th international conference on Intelligent user interface, January 13-16, 2004, Funchal, Madeira, Portugal
Cokun Bayrak , Hayrettin Koluksaolu , Steve Sieloff, Data Extraction From Repositories On The Web: A Semi-Automatic Approach, Journal of Integrated Design & Process Science, v.7 n.4, p.13-23, December
Shui-Lung Chuang , Jane Yung-jen Hsu, Tree-Structured Template Generation for Web Pages, Proceedings of the 2004 IEEE/WIC/ACM International Conference on Web Intelligence, p.327-333, September 20-24, 2004
Z. Shi , E. Milios , N. Zincir-Heywood, Post-Supervised Template Induction for Information Extraction from Lists and Tables in Dynamic Web Sources, Journal of Intelligent Information Systems, v.25 n.1, p.69-93, July 2005
Martin Michalowski , Snehal Thakkar , Craig A. Knoblock, Automatically utilizing secondary sources to align information across sources, AI Magazine, v.26 n.1, p.33-44, March 2005
Altigran S. da Silva , Marcos Andr Gonalves , Filipe Mesquita , Edleno S. de Moura, FLUX-CIM: flexible unsupervised extraction of citation metadata, Proceedings of the 2007 conference on Digital libraries, June 18-23, 2007, Vancouver, BC, Canada
Vladimir Kovalev , Sourav S. Bhowmick , Sanjay Madria, HW-STALKER: a machine learning-based system for transforming QURE-Pagelets to XML, Data & Knowledge Engineering, v.54 n.2, p.241-276, August 2005
Denis Shestakov , Sourav S. Bhowmick , Ee-Peng Lim, DEQUE: querying the deep web, Data & Knowledge Engineering, v.52 n.3, p.273-311, March 2005
Zehua Liu , Wee Keong Ng , Ee-Peng Lim , Feifei Li, Towards building logical views of websites, Data & Knowledge Engineering, v.49 n.2, p.197-222, May 2004
Jun Zhu , Zaiqing Nie , Ji-Rong Wen , Bo Zhang , Wei-Ying Ma, Simultaneous record detection and attribute labeling in web data extraction, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Thomas Y. Lee , Yingwei Yang, Constraint-based wrapper specification and verification for cooperative information systems, Information Systems, v.29 n.7, p.617-636, October 2004
Mengchi Liu , Tok Wang Ling, A Conceptual Model and Rule-Based Query Language for HTML, World Wide Web, v.4 n.1-2, p.49-77, 2001
Alberto H. F. Laender , Berthier A. Ribeiro-Neto , Altigran S. da Silva , Juliana S. Teixeira, A brief survey of web data extraction tools, ACM SIGMOD Record, v.31 n.2, June 2002
Julien Carme , Rmi Gilleron , Aurlien Lemay , Joachim Niehren, Interactive learning of node selecting tree transducer, Machine Learning, v.66 n.1, p.33-67, January 2007
Oren Etzioni , Michael Cafarella , Doug Downey , Ana-Maria Popescu , Tal Shaked , Stephen Soderland , Daniel S. Weld , Alexander Yates, Unsupervised named-entity extraction from the web: an experimental study, Artificial Intelligence, v.165 n.1, p.91-134, June 2005
Tak-Lam Wong , Wai Lam, Adapting Web information extraction knowledge via mining site-invariant and site-dependent features, ACM Transactions on Internet Technology (TOIT), v.7 n.1, p.6-es, February 2007
J. Turmo , H. Rodriguez, Learning rules for information extraction, Natural Language Engineering, v.8 n.3, p.167-191, June 2002
Valter Crescenzi , Giansalvatore Mecca, Automatic information extraction from large websites, Journal of the ACM (JACM), v.51 n.5, p.731-779, September 2004
Georgios Sigletos , Georgios Paliouras , Constantine D. Spyropoulos , Michalis Hatzopoulos, Combining Information Extraction Systems Using Voting and Stacked Generalization, The Journal of Machine Learning Research, 6, p.1751-1782, 12/1/2005
Raymond Kosala , Hendrik Blockeel , Maurice Bruynooghe , Jan Van den Bussche, Information extraction from structured documents using k-testable tree automaton inference, Data & Knowledge Engineering, v.58 n.2, p.129-158, August 2006
Alberto H. F. Laender , Berthier Ribeiro-Neto , Altigran S. da Silva, DEByE - Date extraction by example, Data & Knowledge Engineering, v.40 n.2, p.121-154, February 2002
Jordi Turmo , Alicia Ageno , Neus Catal, Adaptive information extraction, ACM Computing Surveys (CSUR), v.38 n.2, p.4-es, 2006 | information agents;information extraction;wrapper induction;supervised inductive learning |
608686 | Reliable Communication for Highly Mobile Agents. | The provision of a reliable communication infrastructure for mobile agents is still an open research issue. The challenge to reliability we address in this work does not come from the possibility of faults, but rather from the mere presence of mobility, which complicates the problem of ensuring the delivery of information even in a fault-free network. For instance, the asynchronous nature of message passing and agent migration may cause situations where messages forever chase a mobile agent that moves frequently from one host to another. Current solutions rely on conventional technologies that either do not provide a solution for the aforementioned problem, because they were not designed with mobility in mind, or enforce continuous connectivity with the message source, which in many cases defeats the very purpose of using mobile agents.In this paper, we propose an algorithm that guarantees delivery to highly mobile agents using a technique similar to a distributed snapshot. A number of enhancements to this basic idea are discussed, which limit the scope of message delivery by allowing dynamic creation of the connectivity graph. Notably, the very structure of our algorithm makes it amenable not only to guarantee message delivery to a specific mobile agent, but also to provide multicast communication to a group of agents, which constitutes another open problem in research on mobile agents. After presenting our algorithm and its properties, we discuss its implementability by analyzing the requirements on the underlying mobile agent platform, and argue about its applicability. | Introduction
Mobile agent systems currently provide an increasing
degree of sophistication in the abstractions and
mechanisms they support, well beyond the purpose of
achieving agent migration. However, it is questionable
whether the features that are being added on top of
plain agent migration are really focused on the needs
of application developers, or they really address the
problems that are peculiar to mobility.
A good example of the gap between what is provided
and what is needed is the problem of providing
a communication infrastructure for mobile agents.
This aspect is often overlooked or misunderstood in the
context of mobile agent research. For instance, significant
efforts are being devoted to the problem of enabling
communication among mobile agents by defining
a common semantic layer for the exchange of in-
formation, as in KQML [6]. Despite their relevance,
the questions posed by researchers in this area are not
particularly affected by the presence of mobility, and
focus on the problem of communication at a completely
different, and much higher, abstraction level than the
one we are concerned with in this paper. Even if we
assume that the problem of ensuring a proper semantic
level for agent communication is somehow solved, we
are still left with the problem of reliably delivering a
message to a mobile agent whose patterns of mobility
are potentially unknown a priori. This is the problem
we address in this paper.
The challenge to reliable communication persists
even under the assumption of an ideal transport mechanism
that guarantees a correct delivery of information
in the presence of faults in the underlying communication
link or in the communicating nodes. It is the
sheer presence of mobility, and not the possibility of
faults, that undermines the notion of reliability. If mobile
agents are allowed to move freely from one host to
another according to some a priori unknown migration
pattern, the challenge in delivering information properly
is caused by the difficulty in determining where the
mobile agent is, and in ensuring that the information
reaches the mobile agent before it moves again.
By and large, currently available mobile agent systems
rely either on conventional communication facilities
like sockets and remote procedure (or method)
call [1, 8, 13], or implement their own message passing
facility [10]. To our knowledge, none of them satisfactorily
addresses the aforementioned problem. They
require knowledge about the location of the mobile
agent, which is obtained either by overly restricting
the freedom of mobility or by assuming continuous
connectivity-assumptions that in many cases defeat
the whole purpose of using mobile agents.
In this paper, we propose an algorithm that guarantees
message delivery to highly mobile agents in a
fault-free network. We focus on message passing as the
communication mechanism that we want to adapt to
mobility, because it is a fundamental and well understood
form of communication in a distributed system.
This incurs no loss of generality because more complex
mechanisms like remote procedure call and method invocation
are easily built on top of message passing. Our
algorithm does not assume knowledge about the location
of agents, and constrains the movement of agents
only in its most enhanced form and only for a limited
amount of time. Furthermore, its structure makes it inherently
amenable to an extension that provides multicast
communication to a group of agents dispersed
in the network, another problem for which satisfactory
solutions do not yet exist.
The paper is structured as follows. Section 2 discusses
the motivation for this work, and the current
state of the art in the field. Section 3 presents our al-
gorithm, starting with the underlying assumptions and
illustrating subsequent refinements of the original key
idea. Section 4 discusses the applicability and implementability
of a communication mechanism embodying
our algorithm in a mobile agent platform. Finally, Section
5 provides some concluding remarks.
2. Motivation and Related Work
The typical use of a mobile agent paradigm is for
bypassing a communication link and exploiting local
access to resources on a remote server [7]. Thus, one
could argue that, all in all, communication with a remote
agent is not important and a mobile agent platform
should focus instead on the communication mechanisms
that are exploited locally, i.e., to get access
to the server or to communicate with the agents that
are co-located on the same site. Many mobile agent
systems provide mechanisms for local communication,
either using some sort of meeting abstraction as initially
proposed by Telescript [18], event notification for
group communication [1, 10], or, more recently, tuple
spaces [4, 16].
Nevertheless, there are several common scenarios
that provide counterarguments to the statement above.
Some of them are related with the issue of managing
mobile agents. Imagine a "master" agent spawning a
number of "slave" mobile agents that are subsequently
injected in the network to perform some kind of co-operative
computation, e.g., find a piece of informa-
tion. At some point, the master agent may want to
actively terminate the computation of the slave agents,
e.g., because the requested information has been found
by one of them and it is thus desirable to prevent unnecessary
resource consumption. Or, it may want to
change some parameter governing the behavior of the
agents, because the context that determined their creation
has changed in the meanwhile. Or, in turn, the
slave agents may want to detect whether the master
agent is still alive by performing some sort of orphan
detection, which requires locating the master agent if
this is itself allowed to be mobile.
Other examples are related to the fact that mobile
agents are just one of the paradigms available to designers
of a distributed application, which can then use a
mixture of mobile agent and message passing to achieve
different functionalities in the context of the same ap-
plication. For instance, a mobile agent could visit a
site and perform a check on a given condition. If the
condition is not satisfied, the agent could register an
event listener with the site. This way, while the mobile
agent is visiting other sites and before reporting
its results, it could receive notifications of changes in
the state of the sites it has already visited and decide
whether they are worth a second visit.
The scenarios above require the presence of a message
passing mechanism for mobile agents. However, a
highly desirable requirement for such a mechanism is
the guarantee that the message is actually delivered (at
least once) to the destination, independently from the
relative movement of the source and target of communi-
cation. Mobility heavily complicates matters. Typical
delivery schemes suffer from the fundamental problem
that an agent in transit during the delivery can easily
be missed. To illustrate the issue, we discuss two
strawman approaches to message delivery: broadcast
and forwarding.
A simple broadcast scheme assumes a spanning tree
of the network nodes through which a message may
be sent by any node. This node then broadcasts the
message to its neighbors, which broadcast the message
to their neighbors, and so on until the leaf nodes are
reached. This, however, does not guarantee delivery
of the message when an agent is traveling in the reverse
direction with respect to the propagation of the
message, as depicted in Figure 1. If the agent is be-
Figure 1. The problem: Missing delivery in simplistic broadcast and
forwarding schemes. (a) Spanning tree broadcasting; (b) forwarding.
ing transferred at the same instant when the message
is propagating in the other direction, the agent and
the message will cross in the channel, and delivery will
never occur.
A simple forwarding scheme maintains a pointer to
the mobile agent at a well-known location, which is
called home agent in the Mobile IP protocol [14] where
this idea enables physical mobility of hosts. Upon mi-
gration, the mobile agent must inform the home agent
of its new location, in order to enable further com-
munication. However, any messages sent between the
migration and the update are lost, as the agent basically
ran away from the messages before they could
be delivered. Even if retransmission to the new location
is attempted, the agent can move again and miss
the retransmission, thus effectively preventing guaranteed
delivery, as depicted in Figure 1. Furthermore,
forwarding has an additional drawback in that it requires
communication to the home agent every time
the agent moves. In some situations, this may defeat
the purpose of using mobile agents by reintroducing
centralization. For instance, in the presence of many
highly mobile agents spawned from the same host, this
scheme may lead to a considerable traffic overhead generated
around the home agent, and possibly to much
slower performance if the latency between mobile and
home agent is high. Finally, because of this umbilical
cord that must be maintained with the home agent,
this approach is intrinsically difficult to apply when
disconnected operations are required.
The mobile agent systems currently available employ
different communication strategies. The OMG
MASIF standard [11] specifies only the interfaces that
enable the naming and locating of agents across different
platforms. The actual mechanisms to locate an
agent and communicate with it are intentionally left
out of the scope of the standard, although a number of
location techniques are suggested that by and large can
be regarded as variations of broadcast and forwarding.
Some systems, notably Aglets [10] and Voyager [13],
employ forwarding by associating to each mobile component
a proxy object which plays the role of the home
agent. Some others, like Emerald [9], the precursor
of mobile objects, use forwarding and resort to broadcast
when the object cannot be found. Others, e.g.,
Mole [1], simply prevent the movement of a mobile
agent while engaged in communication. Mole exploits
also a different forwarding scheme that does not keep
a single home agent, rather it maintains a whole trail
of pointers from the source to destination, for faster
communication. However, this is employed only in the
context of a protocol for orphan detection [2]. Finally,
some systems, e.g., Agent Tcl [8], provide mechanisms
that are based on common remote procedure call, and
leave to the application developer the chore of handling
a missed delivery.
A related subject is the provision of a mechanism for
reliable communication to a group of mobile agents.
Group communication is a useful programming abstraction
for dealing with clusters of mobile agents that
are functionally related and to which a same piece of information
must be sent. Many mobile agent systems,
notably Telescript, Aglets, and Voyager, provide the
capability to multicast messages only within the context
of a single runtime support. Finally, Mole [1] provides
a mechanism for group communication that, how-
ever, still assumes that agents are stationary during a
set of information exchanges.
The approach we propose provides a reasonable solution
to the problem of guaranteeing delivery to a single
mobile agent, and has the nice side effect of providing
a straightforward way to achieve group communication
as well. The details of our algorithm are discussed in
the next section.
3. Enabling Reliable Communication
As discussed earlier, simplistic message delivery
mechanisms such as spanning tree broadcasting and
forwarding have the potential for failure when agents
are in transit or are rapidly moving. To address these
shortcomings we note that, in general, we must flush
the agents out of the channels and into regions where
they cannot escape without receiving a copy of the
message. For instance, in the aforementioned broadcast
mechanism, we look at the case where the agent
is moving in the opposite direction from the message
on a bidirectional channel. In this case, if the message
was still present at the destination node of the chan-
nel, it could be delivered when the agent arrived at the
node. This leads to a solution where the message is
stored at the nodes until delivery completes. Although
this simple extension would guarantee delivery, it is
not reasonable to expect the nodes to store messages
for arbitrary lengths of time. Therefore, we seek a solution
that has a tight bound on the storage time for
any given message at a node. We must also address the
situation where a message is continually forwarded to
the new location of the mobile agent, but never reaches
it because the agent effectively is running away and the
message never catches up. Again, we could store the
message at every node in the network until it was de-
livered, but a better solution would involve trapping
the agent in a region of the graph so that wherever it
moves, it cannot avoid receiving the message.
The first algorithm we present for guaranteed message
delivery to mobile agents is a direct adaptation
of previous work done by the first author in the area
of physical mobility [12]. This work assumes that the
network of nodes and channels is known in advance,
and further assumes that only one message is present
in the system at a time. In this setting, exactly-once
delivery of the message is guaranteed without modifying
the agents' behavior either with respect to movement
or message acceptance. Next, we extend this basic
algorithm to allow multiple messages to be delivered
concurrently. To achieve this enhancement we must relax
the exactly-once semantics to become at-least-once,
meaning that duplication of messages is acceptable, but
we still prevent an agent from missing a message.
Although these algorithms provide reliable message
delivery, the assumption that the entire network graph
is known in advance is often unreasonable in situations
where mobile agents are used. Therefore, we enhance
our algorithm by allowing the graph to grow dynamically
as agents move, and still preserve the at-least-once
semantics for message delivery. For simplicity of pre-
sentation, we will present this latter enhancement in
two stages: first assuming that all messages originate
from a single node and then allowing any node to initiate
the processing needed to send a message.
3.1. Model
The logical model we work with is the typical network
graph where the nodes represent the servers willing
to host agents and the edges represent directional,
FIFO channels along which agents can migrate and
messages can be passed. The FIFO assumption is critical
to the proper execution of our algorithm and its
implications on the underlying mobile agent platform
are discussed further in Section 4. We assume a con-
Figure 2. A connected network with connected sub-
networks. Agents can enter and leave the subnetworks
only by going through the gateway servers.
nected network graph (i.e., a path exists between every
pair of nodes), but not necessarily fully connected (i.e.,
a channel does not necessarily exist between each pair
of nodes). In a typical IP network, all nodes are logically
connected directly. However, this is not always
the case at the application level, as shown in Figure 2.
There, a set of subnetworks are connected to one another
through an IP network, but an agent can enter or
leave a subnetwork only by passing through a gateway
server, e.g., because of security reasons.
We also assume that the mobile agent server keeps
track of which agents it is currently hosting, and that
it provides some fundamental mechanism to deliver a
message to an agent, e.g., by invoking a method of the
agent object. Finally, we assume that every agent has a
single, globally unique identifier, which can be used to
direct a message to the agent. These latter assumptions
are reasonable in that they are already satisfied by the
majority of mobile agent platforms.
3.2. Delivery in a Static Network Graph
We begin the description of our solution with a basic
algorithm which assumes a fixed network of nodes.
For simplicity, we describe first the behavior of the algorithm
under the unrealistic assumption of a single
message being present in the system, and then show
how this result can be extended to allow concurrent
delivery of multiple messages.
Single message delivery. Previous work by the
first author in the physical mobility environment approached
reliable message delivery by adapting the notion
of distributed snapshots [12]. In snapshot algo-
rithms, the goal is to record the local state of the nodes
and the channels in order to construct a consistent
global state. Critical features of these algorithms include
propagation of the snapshot initiation, the flush-
1: pre: no incoming channels open
2: pre: message j arrives
3: pre: message j finished processing
4: pre: message i arrives; action: buffer message i
Figure 3. State transitions and related diagram for multiple message
delivery in a static network graph.
ing of the channels to record all messages in transit, and
the recording of every message exactly once. Our approach
to message delivery uses many of the same ideas
as the original snapshot paper presented by Chandy
and Lamport [5]. However, instead of spreading knowledge
of the snapshot using messages, we spread the actual
message to be delivered; instead of flushing messages
out of the channels, we flush agents out of the
channels; and instead of recording the existence of the
messages, we deliver a copy of the message.
The algorithm works by associating a state,
flushed or open, with each incoming channel of a
node. Initially all channels are open. When the
message arrives on a channel, the state is changed to
flushed, implying that all the agents on that channel
ahead of the message have been forced out of the
channel (by the FIFO assumption). When the message
arrives for the first time at a node, it is stored locally
and propagated on all outgoing channels, starting the
flushing process on those channels. The message is
also delivered to all agents present at the node. All the
agents that arrive through an open channel on a node
storing the message must receive a copy of it. When
all the incoming channels of a node are flushed, the
node is no longer required to deliver the message to any
arriving agents, therefore the message copy is deleted
and all of the channels are atomically reset to open.
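The per-node bookkeeping implied by this algorithm can be sketched as follows. This is a minimal Python illustration, assuming a single message in the system; the send and deliver callbacks stand for the platform's channel and local agent-messaging primitives and are assumptions of this sketch.

```python
class Node:
    """Sketch of the per-node bookkeeping for single-message delivery.
    `send(channel, msg)` and `deliver(agent, msg)` are placeholders for
    the platform's channel and local agent-messaging primitives."""

    def __init__(self, incoming, outgoing, send, deliver):
        self.state = {ch: 'open' for ch in incoming}  # per incoming channel
        self.outgoing = outgoing
        self.send = send
        self.deliver = deliver
        self.agents = set()      # agents currently hosted at this node
        self.stored = None       # local copy of the message being delivered

    def on_message(self, channel, msg):
        if self.stored is None:             # first arrival at this node
            self.stored = msg
            for out in self.outgoing:       # start flushing downstream channels
                self.send(out, msg)
            for agent in self.agents:       # deliver to resident agents
                self.deliver(agent, msg)
        self.state[channel] = 'flushed'
        if all(s == 'flushed' for s in self.state.values()):
            self.stored = None              # delivery completed at this node
            for ch in self.state:           # atomically reset to open
                self.state[ch] = 'open'

    def on_agent_arrival(self, channel, agent):
        self.agents.add(agent)
        if self.stored is not None and self.state[channel] == 'open':
            self.deliver(agent, self.stored)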
Multiple message delivery. A simplistic adaptation
of the previous algorithm to multiple message delivery
would require a node to wait for the termination
of the current message delivery and to coordinate with
the other nodes before initiating a new one, in order to
ensure that only one message is present in the system.
However, this unnecessarily constrains the behavior of
the sender and requires knowledge of non-local state.
We propose instead an approach where multiple
messages can be present in the system, as long as the
node where the message originates tags the message
with a sequence number unique to the node. In prac-
tice, the sequence number allows the nodes to deal with
multiple instantiations of the algorithm running con-
currently, thus encompassing the case of a single source
transmitting a burst of messages without waiting as
well as the case of multiple sources transmitting at the
same time.
To allow concurrent message delivery to take place,
we must address the issue of a new message arriving
during the processing of the current one. In this case,
the channel is already flushed, but not all other channels
are flushed. To handle this case, we introduce a
new state, buffering, as shown in Figure 3, in which
any messages arriving on a flushed channel are put
into a buffer to be processed at a later time (transition
4). A channel in the buffering state is not considered
when determining the transition from flushed
to open. When this transition is finally made (1), all
buffering channels are also transitioned to open (3),
and the messages in the buffer queues are treated as
if they were messages arriving on the channel at that
moment, and thus processed again. It is possible that,
after the processing of the first message, the next message
causes another transition to buffering, but the
fact that the head of the channel is processed ensures
eventual progress through the sequence of messages to
be delivered.
Although we force messages to be buffered, agent
arrival is not restricted. The agent is being allowed to
move ahead of any messages it originally followed along
the channel. Effectively, the agent may move itself back
into the region of the network where the message has
not yet been delivered. Therefore, duplicate delivery
is possible, although duplicates can be discarded easily
by the runtime support or by the agent itself based on
the sequence number.
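Under the at-least-once semantics, the buffering state only changes the message-arrival handler of the previous sketch. The following is a minimal sketch of that change, assuming a process callback that performs the store/forward/deliver steps exactly once per sequence number; the transition numbers refer to Figure 3.

```python
from collections import deque

class BufferingNode:
    """Extension of the previous sketch for concurrent deliveries: messages
    arriving on a channel that is not open are buffered and replayed after
    the reset. `process(channel, msg)` is expected to perform the
    store/forward/deliver steps exactly once per sequence number."""

    def __init__(self, incoming, process):
        self.state = {ch: 'open' for ch in incoming}
        self.buffers = {ch: deque() for ch in incoming}
        self.process = process

    def on_message(self, channel, msg):
        if self.state[channel] != 'open':        # transition 4: buffer it
            self.state[channel] = 'buffering'
            self.buffers[channel].append(msg)
            return
        self.process(channel, msg)
        self.state[channel] = 'flushed'          # transition 2
        if all(s != 'open' for s in self.state.values()):
            for ch in self.state:                # transitions 1 and 3: reopen
                self.state[ch] = 'open'
            for ch in list(self.buffers):        # replay the buffered heads
                if self.buffers[ch]:
                    self.on_message(ch, self.buffers[ch].popleft())
```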
3.3. Delivery in a Dynamic Network Graph
Although the solutions proposed so far provide delivery
guarantees in the presence of mobility, the necessity
of knowing the network of neighbors a priori is
sometimes unreasonable in the dynamic environment of
mobile agents. Furthermore, the delivery mechanism is
insensitive to which nodes have been active, and delivers
the messages also to regions of the network that
have not been visited by agents. Therefore, our goal
is still to flush channels and trap agents in regions of
the network where the messages will propagate, but
also to allow the network graph used for the delivery
process to grow dynamically as the agents migrate. A
channel will only be included in the message delivery
if an agent has traversed it, and therefore, a node will
be included in the message delivery only if an agent
has been hosted there. We refer to a node or channel
included in message delivery as active.
Our presentation is organized in two phases. First,
we show a restricted approach where all the messages
must originate from a single, fixed source. This is reasonable
for monitoring or master-slave scenarios where
all communication flows from a fixed initiator to the
agents in the system. Then, we extend this initial solution
to enable direct inter-agent messaging by allowing
any node to send messages, without the need for a
centralized source.
Single message source. First, we identify the problems
that can arise when nodes and channels are added
dynamically, due to the possible disparity between the
messages processed at the source and destination nodes
of a channel when it becomes active. We initially
present these issues by example, then develop a general
solution.
Destination ahead of source. Assume a network as
shown in Figure 4(a). X is the sender of all messages
and is initially the only active node in the system. The
graph is extended when X sends an agent to Y , causing
Y and (X; Y ) to become active. Suppose X sends a
burst of messages 1..4, which are processed by Y , and
later a second sequence of messages 5..8. This second
transfer is immediately followed by the migration of a
new agent to node Z, which makes Z and (X; Z) active.
Before message 5 arrives at Y , an agent is sent from Y
to Z, thus causing the channel (Y; Z) to be added to
the active graph.
A problem arises if the agent decides to immediately
leave Z, because the messages 5..8 have not yet been
delivered to it. Furthermore, what processing should
occur when these messages arrive at Z along the new
channel (Y; Z)? If the messages are blindly forwarded
on all Z's outgoing channels, message ordering is possibly
lost and messages can possibly keep propagating
in the network without ever being deleted.
Our solution is to hold the agent at Z until the messages
5..8 are received and, when these messages ar-
rive, to deliver them only to the detained agent, i.e.,
without broadcasting them to the neighboring nodes.
Therefore, no messages are lost and the system wide
processing of messages is not affected. Notably, although
we do inhibit the movement of the agent until
these messages arrive, this takes place only for a time
proportional to the diameter of the network, and even
more important, only when the topology of the network
is changing.
Source ahead of destination. To uncover another
potential problem, we use the same scenario just presented
for nodes X , Y , and Z, except that instead of
assuming an agent moving from Y to Z, we assume
it is moving from Z to Y , making (Z; Y ) active (Fig-
ure 4(b)). Although the agent will not miss any messages
in this move, two potential problems exist.
First, by making (Z; Y ) active, Y will wait for Z
to be flushed or buffering before proceeding to the
next message. However, message 5 will never be sent
from Z. Our solution is to delay the activation of channel
(Z; Y ) until Y catches up with Z. In this example,
we delay until 8 is processed at Y . Second, if message
9 is sent from X and propagated along channel (Z; Y ),
it must be buffered until it can be processed in order.
Given this, we now present a solution that generalizes
the previous one. We describe in detail the channel
states and the critical transitions among these states,
using the state diagram in Figure 5.
- closed: Initially, all channels are closed and not
active in message delivery.
- open: The channel is waiting to participate in a
message delivery. When an agent arrives through
an open channel on a node that is storing a message
destined to that agent, the agent should receive
a copy of such message.
- flushed: The current message being delivered
has already arrived on this channel, and therefore
this channel has completed the current message
delivery. Agents arriving on flushed channels
need no special processing.
Figure 4. Problems in managing a dynamic graph: (a) destination ahead
of source; (b) source ahead of destination. Values shown inside the
nodes indicate the last message processed by the node. The subscripts
on agent a indicate the last message processed by the source of the
channel being traversed by a right before a migrated.
• buffering(j): The source is ahead of the desti-
nation. Messages arriving on buffering channels
are put into a FIFO buffer. They are processed
after the node catches up with the source by processing
message j.
• holding(j): The destination is ahead of the
source. Messages with identifiers less than or equal
to j which arrive on holding channels are delivered
to all held agents. Agents arriving on holding
channels, and whose last received message has
identifier less than j, are held until j arrives.
The initial transition of a channel from closed to
an active state is based on the current state of the destination
node and on the state of the source as carried
by the agent. The destination node can either still
be inactive or it can have finished delivering the same
message as the source (9), it can still be processing such message (8), it can be processing an earlier
message (10), or it can be processing a later message
(7). Based on this comparison, the new active state is
assigned. Once a channel is active, all state transitions
occur in response to the arrival of a message. Because
we have already taken measures to ensure that all messages
will be delivered to all agents, our remaining concerns
are that detained agents are eventually released
and that at every node, the next message is eventually
processed.
Whether an agent must be detained or not is determined
by comparing the identifier of the latest message
received by the agent, carried as part of the agent
state, and the current state of the destination node.
Only agents that are behind the destination are actually
detained. If an agent is detained at a channel in
state holding(j), it can be released as soon as j is
processed along this channel. By connectivity of the
network graph, we are guaranteed that j will eventually
arrive. When it does, the destination node will
either still be processing j, or will have completed the
processing. In both cases the agent is released. In the
former case, the channel transitions to flushed (6) to
wait for the rest of the channels to catch up, while in
the latter case the channel transitions to open (5) to
be ready to process the next message.
To argue that eventually all messages are delivered,
we must extend the progress argument presented in
Section 3.2 to include the progress of the holding
channels as well as the addition of new channels. As
noted in the previous paragraph, message j is guaranteed
to eventually arrive along the holding channel,
thus ensuring progress of this channel.
Figure 5. State transitions and related diagram for multiple message delivery with a single source in a dynamic network graph. The state transitions refer to a single channel (S; D). The transitions are: (1) no incoming channels open and no incoming channels holding; (2) message j arrives; (3) message j finished processing; (4) message i arrives (action: buffer message i); (5) and (6) message j arrives (action: deliver to held agents, release held agents); (7) agent arrives and D is ahead of S; (8) agent arrives and S and D are processing the same message; (9) agent arrives and D is not active, or S and D are processing the same message; (10) agent a_j arrives and S is ahead of D.
Next, we assert that there is a maximum number of channels that can be added as incoming channels, bounded by the number of nodes in the system. We are guaranteed that if
channels are continuously added, eventually this maximum
will be reached. By the other progress properties,
eventually all these channels will be either flushed or
buffering, in which case processing of the next message
(if any) can begin.
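To make the channel state machine concrete, the following sketch (in Python, with illustrative names such as Channel and ChannelState that are not taken from the paper) shows one possible realization of the per-channel logic for the single-source case: activation by comparing source and destination progress, buffering when the source is ahead, and holding agents when the destination is ahead. It is a minimal illustration of the scheme described above, not the authors' implementation.

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional


class ChannelState(Enum):
    CLOSED = auto()      # not yet part of the active graph
    OPEN = auto()        # waiting to participate in the current delivery
    FLUSHED = auto()     # the current message has already arrived on this channel
    BUFFERING = auto()   # source ahead of destination: queue incoming messages
    HOLDING = auto()     # destination ahead of source: detain arriving agents


@dataclass
class Channel:
    state: ChannelState = ChannelState.CLOSED
    j: Optional[int] = None                         # threshold id for BUFFERING/HOLDING
    buffer: List[int] = field(default_factory=list)
    held_agents: List[str] = field(default_factory=list)

    def activate(self, src_last: int, dst_last: Optional[int]) -> None:
        """Assign the initial active state by comparing source and destination progress."""
        if dst_last is None or dst_last == src_last:
            self.state = ChannelState.OPEN          # destination inactive or in step with source
        elif dst_last > src_last:
            self.state, self.j = ChannelState.HOLDING, dst_last
        else:
            self.state, self.j = ChannelState.BUFFERING, src_last

    def on_agent(self, agent_id: str, agent_last: int) -> bool:
        """Return True if an agent arriving on this channel must be detained."""
        if self.state == ChannelState.HOLDING and agent_last < (self.j or 0):
            self.held_agents.append(agent_id)
            return True
        return False

    def on_message(self, msg_id: int, dst_still_processing: bool) -> List[int]:
        """Handle a message arriving on this channel; return ids ready for delivery."""
        if self.state == ChannelState.OPEN:
            self.state = ChannelState.FLUSHED
            return [msg_id]
        if self.state == ChannelState.BUFFERING:
            self.buffer.append(msg_id)              # kept until the node catches up
            return []
        if self.state == ChannelState.HOLDING and msg_id <= (self.j or msg_id):
            if msg_id == self.j:                    # holding obligation satisfied
                self.held_agents.clear()            # detained agents are released
                self.state = (ChannelState.FLUSHED if dst_still_processing
                              else ChannelState.OPEN)
            return [msg_id]                         # delivered only to the held agents
        return []

    def on_node_progress(self, dst_last: int) -> List[int]:
        """Drain the buffer once the destination has caught up with the source."""
        if self.state == ChannelState.BUFFERING and self.j is not None and dst_last >= self.j:
            ready, self.buffer = self.buffer, []
            self.state = ChannelState.OPEN
            return ready
        return []
```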
Multiple message sources. Although the previous
solution guarantees message delivery and allows the dynamic
expansion of the graph, the assumption that all
messages originate at the same node is overly restric-
tive. To extend this algorithm to allow a message to
originate at any node, we effectively superimpose multiple
instances of the same algorithm on the network,
by allowing their concurrent execution. For the purposes
of explanation, let n be the number of nodes in
the system. Then:
• The state of an incoming channel is represented
by a vector of size n where the state of each node
is recorded. Before the channel is added to the
active graph, the channel is considered closed.
Once the channel is active, if no messages have
been received from a particular node, the state of
the element in the vector corresponding to that
node is set to open.
• Processing of each message is done with respect to
the channel state associated with the node where
the message originated.
• Nodes can deliver n messages concurrently, at
most one for each node. As before, if a second
message arrives from the same node, it is buffered
until the prior message completes its processing.
• An agent always carries a vector containing, for
each message source, the identifier of the last message
received. Moreover, when an agent traverses
a new outgoing channel, it carries another vector
that contains, for each message source, the identifier
of the last message processed by the source of
the new channel right before the agent departed.
• An incoming agent is held only as long as, for some message source, the identifier of the last message it has received is less than the corresponding holding value (if any) of the channel the agent arrived through.
• To enable any node to originate a message, we
must guarantee that the graph remains connected.
To maintain this property we make all links bidi-
rectional. In the case where an agent arrives and
the channel in the opposite direction is not already
an outgoing channel, a fake agent message is sent
to S with the state information of D. This message
effectively makes the reverse channel active.
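As a rough illustration of these per-source data structures (the class and field names below are assumptions, not taken from the paper), each incoming channel keeps an independent state entry per message source, each agent carries two per-source vectors, and an arriving agent is detained only while it is behind some holding value:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional


@dataclass
class PerSourceState:
    state: str = "open"               # "open", "flushed", "buffering", or "holding"
    j: Optional[int] = None           # threshold for buffering/holding


@dataclass
class IncomingChannel:
    # One independent state entry per message source (at most n entries).
    per_source: Dict[str, PerSourceState] = field(default_factory=dict)


@dataclass
class AgentState:
    agent_id: str
    last_received: Dict[str, int] = field(default_factory=dict)    # per-source progress
    source_snapshot: Dict[str, int] = field(default_factory=dict)  # carried on new outgoing channels


def must_hold(agent: AgentState, channel: IncomingChannel) -> bool:
    """Hold the agent if, for some source, it is behind the channel's holding value."""
    for source, st in channel.per_source.items():
        if st.state == "holding" and st.j is not None:
            if agent.last_received.get(source, -1) < st.j:
                return True
    return False
```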
Again we must argue that detained agents are eventually
released and that progress is made with respect
to the messages sent from each node. Assume that message
i is the smallest message identifier from any node
which has not been delivered by all nodes. There must
exist a path from a copy of i to every node where i has
not arrived, and every node on this path is blocked until i arrives. By connectivity of the network graph, i will
propagate to every node along every channel and will
complete delivery in the system. No node will buffer i because it is the minimum message identifier which
is being waited for. When i has completed delivery,
the next message is the new minimum and will make
progress in a similar manner. Because the buffering of
messages is done with respect to the individual source
nodes and not for the channel as a whole, the messages
from each node make independent progress.
Holding agents requires coordination among the
nodes. The j value with respect to each node for which
the channel is being held, e.g., holding(j), is fixed
when the first agent arrives. Because the messages are
guaranteed to make progress, we are guaranteed that
eventually j will be processed and the detained agents
will be released.
3.4. Multicast Message Delivery
In all the algorithms described so far, we exploited
the fact that a distributed snapshot records the state of
each node exactly once, and modified the algorithm by
substituting message recording with message delivery
to an agent. Hence, one could describe our algorithm
by saying that it attempts to deliver a message to every
agent in the system, and only the agents whose identifier
match the message target actually accept the mes-
sage. With this view in mind, the solution presented
can be adapted straightforwardly to support multicast.
The only modification that must be introduced is the
notion of a multicast address that allows a group of
agents to be specified as recipients of the message; no modification to the algorithm is needed.
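A sketch of this delivery test follows; the Agent and Message classes and the group-membership model are illustrative assumptions, since the multicast addressing scheme is left unspecified beyond the idea of a group address:

```python
from dataclasses import dataclass, field
from typing import Set


@dataclass
class Agent:
    agent_id: str
    groups: Set[str] = field(default_factory=set)   # multicast groups this agent joined


@dataclass
class Message:
    target: str                                     # an agent id or a multicast group name
    payload: bytes = b""


def accepts(agent: Agent, message: Message) -> bool:
    """Every agent sees every message; only matching agents actually accept it."""
    return message.target == agent.agent_id or message.target in agent.groups
```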
4. Discussion and Future Work
In this section we analyze the impact of our communication
mechanism on the underlying mobile agent
platform, argue about its applicability, and discuss possible
extensions and future work on the topic.
4.1. Implementation Issues
A fundamental assumption that must be preserved
in order for our mechanism to work is that the communication
channels must be FIFO, a legacy of the fact
that the core of our schema is based on a distributed
global snapshot. The FIFO property must be maintained
for every piece of information traveling through
the channel, i.e., messages, agents, and any combination
of the two. This is not necessarily a requirement
for a mobile agent platform. A common design for
it is to map the operations that require message or
agent delivery on data transfers taking place on different
data streams, typically through sockets or some
higher-level mechanism like remote method invocation.
In the case where these operations target the same
destination, the FIFO property may not be preserved,
since a data item sent first through one stream can be
received later than another data item through another
stream, depending on the architecture of the underlying
runtime support. Nevertheless, the FIFO property
can be implemented straightforwardly in a mobile
agent server by associating with each remote server a queue that contains the messages and agents that must be transmitted to it. This way, the FIFO property is structurally
enforced by the server architecture, although this may
require non-trivial modifications in the case of an already
existing platform.
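For instance, a server might enforce the FIFO property along the lines of the following sketch, in which messages and agents bound for the same remote server share one ordered queue drained by a single worker. All names are illustrative, and send is a placeholder for the actual transport, not a real platform API:

```python
import queue
import threading
from typing import Any, Tuple


def send(remote: str, kind: str, item: Any) -> None:
    """Placeholder transport; a real server would serialize and ship over one stream."""
    print(f"to {remote}: {kind} {item!r}")


class OutboundLink:
    """One outbound link per remote server: a single FIFO for both messages and agents."""

    def __init__(self, remote: str) -> None:
        self.remote = remote
        self._q: "queue.Queue[Tuple[str, Any]]" = queue.Queue()
        threading.Thread(target=self._drain, daemon=True).start()

    def enqueue_message(self, msg: Any) -> None:
        self._q.put(("message", msg))

    def enqueue_agent(self, agent: Any) -> None:
        self._q.put(("agent", agent))

    def _drain(self) -> None:
        while True:
            kind, item = self._q.get()      # transmitted strictly in enqueue order
            send(self.remote, kind, item)
```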
Our mechanism assumes that the runtime support
maintains some state about the network graph and the
messages being exchanged. In the most static form
of our solution, this state is constituted only by the
last message received, which must be kept until de-
livered. In a system with bidirectional channels, this
means for a time equal to the maximum round trip delay
between the node and its neighbors. On the other
hand, in the most dynamic variant of our algorithm,
each server must maintain a vector of identifiers for
the active (outgoing and incoming) channels and, for
each channel, a vector containing the messages possibly
being buffered. The size of the latter is unbounded,
but each message must be kept in the vector only for
a time proportional to the diameter of the network.
4.2. Applicability
It is evident that the algorithm presented in this
work generates a considerable overall traffic overhead
if compared, for instance, to a forwarding scheme. This
is a consequence of the fact that our technique involves
contacting the nodes in the network that have been
visited by at least one agent in order to find the message
recipient, and thus generates an amount of traffic
that is comparable to a broadcast. Unfortunately, this
price must be paid when both guaranteed delivery and
frequent, unconstrained agent movement are part of
the application requirements, since simpler and more
lightweight schemes do not provide these guarantees,
as discussed in Section 2. Hence, the question whether
the communication mechanism we propose is a useful
addition to mobile agent platforms will be ultimately
answered by practical mobile agent applications, which
are still largely missing and will determine the requirements
for communication.
In any case, we do not expect our mechanism to
be the only one provided by the runtime support. To
make an analogy, one does not shout when the party is
one step away; one resorts to shouting under the exceptional
condition that the party is not available, or not
where expected to be. Our algorithm provides a clever
way to shout (i.e., to broadcast a message) with precise
guarantees and minimal constraints, and should
be used only when conventional mechanisms are not
applicable. Hence, the runtime support should leave
to the programmer the opportunity to choose different
communication mechanisms, and even different variants
of our algorithm. For instance, the fully dynamic
solution described in Section 3.3 is not necessarily the
most convenient in all situations. In a network configuration
such as the one depicted in Figure 2, where
the graph is structured in clusters of nodes, the best
tradeoff is probably achieved by using our fully dynamic
algorithm only for the "gateway" servers that
sit at the border of each cluster, and a static algorithm
within each cluster, thus leveraging off of the knowledge
of the internal network configuration. Along the
same lines, it should also be possible to exploit hybrid
schemes. For instance, in the common case where the
receipt of a message triggers a reply, bandwidth consumption
can be reduced by encoding the reply destination
in the initial message and using a conventional
mechanism, as long as the sending agent is willing to
remain stationary until the reply is received.
4.3. Enhancements and Future Work
In this work, we argued that the problem of reliable
message delivery is inherently complicated by the
presence of mobility even in the absence of faults in
the links or nodes involved in the communication. In
practice, however, these faults do happen and, depending
on the execution context, they can be relevant. If
this is the case, the techniques traditionally proposed
for coping with faults in a distributed snapshot can be
applied to our mechanism. For instance, a simple technique
consists of periodically checkpointing the state of
the system, recording the state of links, keeping track of
the last snapshot, and dumping an image of the agents
hosted. (Many systems already provide checkpointing
mechanisms for mobile agents.) This information can
be used to reconcile the state of the faulty node with
the neighbors after a fault has occurred.
A related issue is the ability not only to dynamically
add nodes to the graph, but also to remove them.
This could model faults, or model the fact that a given
node is no longer willing to host agents, e.g., because
the mobile agent support has been intentionally shut
down. A simple solution would consist of "short cir-
cuiting" the node to be removed, by setting the in-coming
channels of its outgoing neighbors to point to
the node's incoming neighbors. However, this involves
running a distributed transaction and thus enforces an
undesirable level of complexity. In this work, we disregarded
the problem for a couple of reasons. First of
all, while it is evident that the ability of adding nodes
dynamically enables a better use of the communication
resources by limiting communication to the areas effectively
visited by agents, it is unclear whether a similar
gain is obtained in the case of removing nodes, especially
considering the aforementioned implementation
complexity. Second, very few mobile agent systems
provide the ability to start and stop dynamically the
mobile agent runtime support: most of them assume
that the runtime is started offline and operates until
the mobile agent application terminates.
We are currently designing and implementing a communication
package based on the algorithm described
in this paper, to be included in the Code [15] mobile
code toolkit. The goal of this activity is to gain
a hands-on understanding of the design and implementation
issues concerned with the realization of our
scheme, and to provide the basis for a precise quantitative
characterization of our approach, especially in
comparison with traditional ones.
5. Conclusions
In this work we point out how the sheer presence
of mobility makes the problem of guaranteeing the
delivery of a message to a mobile agent inherently
difficult, even in absence of faults in the network. To
our knowledge, this problem has not been addressed
by the research community. Currently available mobile
agent systems employ techniques that either do not
provide guarantees, or overly constrain the movement
or connectivity of mobile agents, thus to some extent
reducing their usefulness. In this work, we propose
a solution based on the concept of a distributed
snapshot. Several extensions of the basic idea allow
us to cope with different levels of dynamicity and,
along the way, provide a straightforward way to implement
group communication for mobile agents. Our
communication mechanism is meant to complement
those currently provided by mobile agent systems,
thus allowing the programmer to trade reliability for
bandwith consumption. Further work will address
fault tolerance and exploit an implementation of our
mechanism to evaluate its tradeoffs against those of
conventional mechanisms.
Acknowledgments
This paper is based upon work
supported in part by the National Science Foundation
(NSF) under grant No. CCR-9624815. Any opinions,
findings and conclusions or recommendations expressed
in this paper are those of the authors and do not necessarily
reflect the views of NSF.
--R
Communication Concepts for Mobile Agent Systems.
The Shadow Ap- proach: An Orphan Detection Protocol for Mobile Agents
Software Agents.
Reactive Tuple Spaces for Mobile Agent Coordination.
Distributed Snap- shots: Determining Global States of Distributed Sys- tems
KQML as an agent communication language.
Understanding Code Mobility.
Agent Tcl.
Programming and Deploying Mobile Agents with Aglets.
An exercise in formal reasoning about mobile computations.
ObjectSpace Inc.
IP mobility support.
Lime: Linda Meets Mobility.
Mobile Agents: 2 nd Int.
Telescript Technology: Mobile Agents.
--TR
--CTR
Mosaab Daoud , Qusay H. Mahmoud, Reliability analysis of mobile agent-based systems, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Scalable Platform for Highly Mobile Agents in Distributed Computing Environments, Proceedings of the 2006 International Symposium on on World of Wireless, Mobile and Multimedia Networks, p.633-637, June 26-29, 2006
Elena Gmez-Martnez , Sergio Ilarri , Jos Merseguer, Performance analysis of mobile agents tracking, Proceedings of the 6th international workshop on Software and performance, February 05-08, 2007, Buenes Aires, Argentina
Hojjat Jafarpour , Nasser Yazdani , Navid Bazzaz-zadeh, A scalable group communication mechanism for mobile agents, Journal of Network and Computer Applications, v.30 n.1, p.186-208, January 2007 | communication;snapshot;mobile agents |
608696 | Pricing in Agent Economies Using Multi-Agent Q-Learning. | This paper investigates how adaptive software agents may utilize reinforcement learning algorithms such as Q-learning to make economic decisions such as setting prices in a competitive marketplace. For a single adaptive agent facing fixed-strategy opponents, ordinary Q-learning is guaranteed to find the optimal policy. However, for a population of agents each trying to adapt in the presence of other adaptive agents, the problem becomes non-stationary and history dependent, and it is not known whether any global convergence will be obtained, and if so, whether such solutions will be optimal. In this paper, we study simultaneous Q-learning by two competing seller agents in three moderately realistic economic models. This is the simplest case in which interesting multi-agent phenomena can occur, and the state space is small enough so that lookup tables can be used to represent the Q-functions. We find that, despite the lack of theoretical guarantees, simultaneous convergence to self-consistent optimal solutions is obtained in each model, at least for small values of the discount parameter. In some cases, exact or approximate convergence is also found even at large discount parameters. We show how the Q-derived policies increase profitability and damp out or eliminate cyclic price wars compared to simpler policies based on zero lookahead or short-term lookahead. In one of the models (the Shopbot model) where the sellers' profit functions are symmetric, we find that Q-learning can produce either symmetric or broken-symmetry policies, depending on the discount parameter and on initial conditions. | Introduction
Reinforcement Learning (RL) procedures have been established as powerful and
practical methods for solving Markov Decision Problems. One of the most significant
and actively investigated RL algorithms is Q-learning (Watkins, 1989).
Q-learning is an algorithm for learning to estimate the long-term expected reward
for a given state-action pair. It has the nice property that it does not need
a model of the environment, and it can be used for on-line learning. Strong convergence
of Q-learning to the exact optimal value functions and policies has been
proven when lookup table representations of the Q-function are used (Watkins
and Dayan, 1992); this is feasible in small state spaces. In large state spaces
where lookup tables are infeasible, RL methods can be combined with function
approximators to give good practical performance despite the lack of theoretical
guarantees of convergence to optimal policies.
Most real-world problems are not fully Markov in nature - they are often
non-stationary, history-dependent and/or not fully observable. In order for RL
methods to be more generally useful in solving such problems, they need to be
extended to handle these non-Markovian properties. One important application
domain where the non-Markovian aspects are paramount is the area of multi-agent
systems. This area is expected to be increasingly important in the future,
due to the potential rapid emergence of "agent economies" consisting of large
populations of interacting software agents engaged in various forms of economic
activity. The problem of multiple agents simultaneously adapting is in general
non-Markov, because each agent provides an effectively non-stationary environment
for the other agents. Hence the existing convergence guarantees do not
hold, and in general, it is not known whether any global convergence will be
obtained, and if so, whether such solutions are optimal.
Some progress has been made in analyzing certain special case multi-agent
problems. For example, the problem of "teams," where all agents share a common
objective function, has been studied, for example, in (Crites and Barto,
1996). Likewise, the purely competitive case of zero-sum objective functions has
been studied in (Littman, 1994), where an algorithm called "minimax-Q" was
proposed for two-player zero-sum games, and shown to converge to the optimal
value function and policies for both players. Sandholm and Crites studied simultaneous
Q-learning by two players in the Iterated Prisoner's Dilemma game
(Sandholm and Crites, 1995), and found that the learning procedure generally
converged to stationary solutions. However, the extent to which those solutions
were "optimal" was unclear. Recently, Hu and Wellman proposed an algorithm
for calculating optimal Q-functions in two-player arbitrary-sum games (Hu and
Wellman, 1998). This algorithm is an important first step. However, it does not
yet appear to be useable for practical problems, because it assumes that policies
followed by both players will be Nash equilibrium policies, and it does not
address the "equilibrium coordination" problem, i.e. if there are multiple Nash
equilibria, how do the agents decide which equilibrium to choose? We suspect
that this may be a serious problem, since according to the "folk theorem of iterated
games" (Kreps, 1990), there can be a proliferation of Nash equilibria when
there is sufficiently high emphasis on future rewards, i.e., a large value of the
discount parameter fl. Furthermore, there may be inconsistencies between the
assumed Nash policies, and the policies implied by the Q-functions calculated
by the algorithm.
In this paper, we study simultaneous Q-learning in an economically motivated
two-player game. The players are assumed to be two sellers of similar or
identical products, who compete against each other on the basis of price. At each
time step, the sellers alternately take turns setting prices, taking into account
the other seller's current price. After the price has been set, the consumers then
respond instantaneously and deterministically, choosing either seller 1's product
or seller 2's product (or no product), based on the current price pair (p1, p2), leading to instantaneous rewards or profits (R1, R2) given to sellers 1 and 2
respectively. We assume initially that the sellers have full knowledge of the expected
consumer demand for any given price pair, and in fact have full knowledge
of both profit functions.
Our work builds on prior research reported in (Tesauro and Kephart, 1998;
Tesauro and Kephart, 1999). Those papers examined the effect of including fore-
sight, i.e. an ability to anticipate longer-term consequences of an agent's current
action. Two different algorithms for agent foresight were presented: (i) a generalization
of the minimax search procedure in two-player zero-sum games; (ii)
a generalization of the Policy Iteration method from dynamic programming, in
which both players' policies are simultaneously improved, until self-consistent
policy pairs are obtained that optimize expected reward over two time steps. It
was found that including foresight in the agents' pricing algorithms generally
improved overall agent profitability, and usually damped out or eliminated the
pathological behavior of unending cyclic "price wars," in which long episodes of
repeated undercutting amongst the sellers alternate with large jumps in price.
Such price wars were found to be rampant in prior studies of agent economy
models (Kephart, Hanson and Sairamesh, 1998; Sairamesh and Kephart, 1998)
when the agents use "myopically optimal" or "myoptimal" pricing algorithms
that optimize immediate reward, but do not anticipate the longer-term consequences
of an agent's current price setting.
Our motivation for studying simultaneous Q-learning in this paper is three-
fold. First, if Q-functions can be learned simultaneously and self-consistently for
both players, the policies implied by those Q-functions should be self-consistently
optimal. In other words, an agent will be able to correctly anticipate the longer-term
consequences of its own actions, the other agents' actions, and will correctly
model the other agents as having an equivalent capability. Hence the classic problem
of infinite recursion of opponent models will be avoided. In contrast, in other
approaches to adaptive multi-agent system, these issues are more problematic.
For example, (Vidal and Durfee, 1998) propose a recursive opponent modeling
scheme, in which level-0 agents do no opponent modeling, level-1 agents model
the opponents as being level-0, level-2 agents model the opponents as being
level-1, etc. In both of these approaches, there is no effective way for an agent
to model other agents as being at an equivalent level of depth or complexity.
The second advantage of Q-learning is that the solutions should correspond
to deep lookahead: in principle, the Q-function represents the expected reward
looking infinitely far ahead in time, exponentially weighted by a discount parameter γ. In contrast, the prior work of (Tesauro and Kephart, 1999) was
based on shallow finite lookahead. Finally, in comparison to directly modeling
agent policies, the Q-function approach seems more extensible to the situation
of very large economies with many competing sellers. Our intuition is that approximating
Q-functions with nonlinear function approximators such as neural
networks is more feasible than approximating the corresponding policies. Fur-
thermore, in the Q-function approach, each agent only needs to maintain a single
Q-function for itself, whereas in the policy modeling approach, each agent needs
to maintain a policy model for every other agent; the latter seems infeasible
when the number of sellers is large.
The remainder of this paper is organized as follows. Section 2 describes the
structure and dynamics of our model two-seller economy, and presents three
economically-based models of seller profit (Price-Quality, Information-Filtering,
and Shopbot) which are known to be prone to price wars when agents myopically
optimize their short-term payoffs. We deliberately choose parameters to place
each of these systems in a price-war regime. In section 3, we describe details
of how we implement Q-learning in these model economies. As a first step, we
examine the simple case of ordinary Q-learning, where one of the two sellers uses
Q-learning and the other seller uses a fixed pricing policy (the myopically opti-
mal, or "myoptimal" policy). We then examine the more interesting and novel
situation of simultaneous Q-learning by both sellers. Finally, section 5 summarizes
the main conclusions and discusses promising directions and challenges for
future work.
Model agent economies
Real agent economies are likely to contain large numbers of agents, with complex
details of how the agents behave and interact with each other on multiple time
scales. Our approach toward modeling and understanding such complexity is to
begin by making a number of simplifying assumptions. We first consider the
simplest possible case of two competing seller agents offering similar or identical
products to a large population of consumer agents. The sellers compete on the
basis of price, and we assume that prices are discretized and can lie between a
minimumand maximum price, such that the number of possible prices is at most
a few hundred. This renders the state space small enough that it is feasible to
use lookup tables to represent the agents' pricing policies and expected profits.
Time in the simulation is also discretized; at each time step, we assume that the
consumers compare the current prices of the two sellers, and instantaneously
and deterministically choose to purchase from at most one seller. Hence at each
time step, for each possible pair of seller prices, there is a deterministic reward
or profit given to each seller. The simulation can iterate forever, and there may
or may not be a discounting factor for the present value of future rewards.
It is worth noting that the consumers are not regarded as "players" in the
model. The consumers have no strategic role: they behave according to an extremely
simple, fixed, short-term greedy rule (buy the lowest priced product at
each time step), and are regarded as merely providing a stationary environment
in which the two sellers can compete in a two-player game. This is clearly a simplifying
first step in the study of multi-agent phenomena, and in future work,
the models will be extended to include strategic and adaptive behavior on the
part of the consumers as well. This will change the notion of "desirable" system
behavior. In the present model, desirable behavior would resemble "collusion"
between the two sellers in charging very high prices, so that both could obtain
high profits. Obviously this is not desirable from the consumers' viewpoint.
Regarding the dynamics of seller price adjustments, we assume that the sellers
alternately take turns adjusting their prices, rather than simultaneously setting
prices (i.e. the game is extensive-form rather than normal-form). Our choice of
alternating-turn dynamics is motivated by two considerations: (a) As the number
of sellers becomes large and the model becomes more realistic, it seems more
reasonable to assume that the sellers will adjust their prices at different times
rather than at the same time, although they probably will not take turns in
a well-defined order. (b) With alternating-turn dynamics, we can stay within
the normal Q-learning framework where the Q-function implies a deterministic
optimal policy: it is known that in two-player alternating turn games, there always
exists a deterministic policy that is as good as any non-deterministic policy
(Littman, 1994). In contrast, in games with simultaneous moves (for example,
rock-paper-scissors), it is possible that no deterministic policy is optimal, and
that the existing Q-learning formalism for MDPs would have to be modified and
extended so that it could yield non-deterministic optimal policies.
We study Q-learning in three different economic models that have been described
in detail elsewhere (Sairamesh and Kephart, 1998; Kephart, Hanson and
Sairamesh, 1998; Greenwald and Kephart, 1999). The first model, called the
"Price-Quality" model (Sairamesh and Kephart, 1998), models the sellers' products
as being distinguished by different values of a scalar "quality" parameter,
with higher-quality products being perceived as more valuable by the consumers.
The consumers are modeled as trying to obtain the lowest-priced product at each
time step, subject to threshold-type constraints on both quality and price, i.e.,
each consumer has a maximum allowable price and a minimum allowable qual-
ity. The similarity and substitutability of seller products leads to a potential for
direct price competition; however, the "vertical" differentiation due to differen-
ing quality values leads to an asymmetry in the sellers' profit functions. It is
believed that this asymmetry is responsible for the unending cyclic price wars
that emerge when the sellers employ myoptimal pricing strategies.
The second model is an "Information-Filtering" model described in detail in
(Kephart, Hanson and Sairamesh, 1998). In this model there are two competing
sellers of news articles in somewhat overlapping categories. In contrast to the
vertical differentiation of the Price-Quality model, this model contains a horizontal
differentiation in the differing article categories. To the extent that the
categories overlap, there can be direct price competition, and to the extent that
they differ, there are asymmetries introduced that again lead to the potential
for cyclic price wars.
The third model is the so-called "Shopbot" model described in (Greenwald
and Kephart, 1999), which is intended to model the situation on the Internet in
which some consumers may use a Shopbot to compare prices of all sellers offering
a given product, and select the seller with the lowest price. In this model, the
sellers' products are exactly identical and the profit functions are symmetric.
Myoptimal pricing leads the sellers to undercut each other until the minimum
price point is reached. At that point, a new price war cycle can be launched,
due to buyer asymmetries rather than seller asymmetries. The fact that not all
buyers use the Shopbot, and some buyers instead choose a seller at random,
means that it can be profitable for a seller to abandon the low-price competition
for the bargain hunters, and instead maximally exploit the random buyers by
charging the maximum possible price.
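As a rough illustration (not the actual demand model of (Greenwald and Kephart, 1999); the fraction w of shopbot users, the unit cost, and the tie-breaking rule are assumptions), a Shopbot-style profit function might look as follows:

```python
def shopbot_profits(p1: float, p2: float, w: float = 0.5, cost: float = 0.0):
    """Return assumed expected per-buyer profits (R1, R2) for the two sellers."""
    margin1, margin2 = p1 - cost, p2 - cost
    # Shopbot users buy from the strictly cheaper seller; a tie splits them evenly.
    if p1 < p2:
        share1, share2 = w, 0.0
    elif p2 < p1:
        share1, share2 = 0.0, w
    else:
        share1 = share2 = w / 2.0
    # The remaining buyers choose a seller at random.
    share1 += (1.0 - w) / 2.0
    share2 += (1.0 - w) / 2.0
    return margin1 * share1, margin2 * share2


# Undercutting wins the shopbot users, but a seller can instead charge the maximum
# price and live off the random buyers, which is what relaunches the price war.
print(shopbot_profits(0.8, 1.0))
```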
An example profit function that we study, taken from the Price-Quality
model, is as follows: Let p 1 and p 2 represent the prices charged by seller 1 and
seller 2 respectively. Let q 1 and q 2 represent their respective quality parameters,
with q1 > q2, and let c(q) denote the cost to a seller of producing an item of
quality q. Then assuming the particular model of consumer behavior described
in (Sairamesh and Kephart, 1998) one can show analytically that in the limit of
infinitely many consumers, the instantaneous profits per consumer R 1 and R 2
obtained by seller 1 and seller 2 respectively are given by:
[Equation (1): piecewise expression for R1(p1, p2)]
[Equation (2): piecewise expression for R2(p1, p2)]
A plot of the profit landscape for seller 1 as a function of the prices p1 and p2 is given in figure 1, for a particular choice of the parameters q1, q2, and c(q). (These specific parameter settings were chosen because they are known to generate harmful price wars when the agents use myopic optimal pricing.) We can see in this figure that the myopic optimal price for seller 1 as a function of seller 2's price, p1*(p2), is obtained for each value of p2 by sweeping across all values of p1 and choosing the value that gives the highest profit. We can see that for small values of p2 the peak profit is obtained at one of the two peaks in the landscape, whereas for larger values of p2 there is eventually a discontinuous shift to the other peak, which follows along the parabolic-shaped ridge in the landscape. An analytic expression for the myopic optimal price for seller 1 as a function of p2 is as follows:
[Equation (3): expression for p1*(p2)]
Similarly, the myopic optimal price for seller 2 as a function of the price set by seller 1, p2*(p1), is given by the following formula (assuming that prices are discrete and that ε is the price discretization interval):
[Equation (4): expression for p2*(p1)]
We also note in passing that there are similar profit landscapes for each of the
sellers in the Information-Filtering model and in the Shopbot model. In all three
Fig. 1. Sample profit landscape for seller 1 in Price-Quality model, as a function of
seller 1 price p1 and seller 2 price p2 .
models, it is the existence of multiple, disconnected peaks in the landscapes,
with relative heights that can change depending on the other seller's price, that
leads to price wars when the sellers behave myopically.
Regarding the information set that is made available to the sellers, we have
made a simplifying assumption as a first step that the players have essentially
perfect information. They can model the consumer behavior perfectly, and they
also have perfect knowledge of each other's costs and profit functions. Hence our
model is thus a two-player perfect-information deterministic game that is very
similar to games like chess. The main differences are that the profits in our model
are not strictly zero-sum, and that there are no terminating or absorbing nodes
in our model's state space. Also in our model, payoffs are given to the players
at every time step, whereas in games such as chess, payoffs are only given at the
terminating nodes.
As mentioned previously, we constrain the prices set by the two sellers to
lie in a range from some minimum to maximum allowable price. The prices are
discretized, so that one can create lookup tables for the seller profit functions R1(p1, p2) and R2(p1, p2). Furthermore, the optimal pricing policies for each seller as a function of the other seller's price, p1*(p2) and p2*(p1), can also be represented in the form of table lookups.
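The tabular setup can be sketched as follows; the price range, grid size, and the stand-in profit functions are illustrative assumptions, with any of the three models' profit functions pluggable in their place:

```python
def profit1(p1: float, p2: float) -> float:
    # Illustrative stand-in only; plug in the actual model's profit function here.
    return (p1 - 0.2) * (0.25 + (0.5 if p1 < p2 else 0.0))


def profit2(p1: float, p2: float) -> float:
    return profit1(p2, p1)              # symmetric stand-in for seller 2


def argmax(values):
    return max(range(len(values)), key=values.__getitem__)


N = 51                                  # number of discrete prices
prices = [0.5 + 0.5 * k / (N - 1) for k in range(N)]

# Profit lookup tables: R1[i][j] = R1(p_i, p_j) and R2[i][j] = R2(p_i, p_j).
R1 = [[profit1(prices[i], prices[j]) for j in range(N)] for i in range(N)]
R2 = [[profit2(prices[i], prices[j]) for j in range(N)] for i in range(N)]

# Myopic optimal policies as lookup tables of price indices.
myopic1 = [argmax([R1[i][j] for i in range(N)]) for j in range(N)]   # best p1 for each p2
myopic2 = [argmax([R2[i][j] for j in range(N)]) for i in range(N)]   # best p2 for each p1
```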
3 Single-agent Q-learning
We first consider ordinary single-agent Q-learning in the above two-seller economic
models. The procedure for Q-learning is as follows. Let Q(s; a) represent
the discounted long-term expected reward to an agent for taking action a in
state s. The discounting of future rewards is accomplished by a discount parameter
γ such that the value of a reward expected at n time steps in the future is discounted by γ^n. Assume that the Q(s, a) function is represented by a lookup
table containing a value for every possible state-action pair, and assume that the
table entries are initialized to arbitrary values. Then the procedure for solving
for Q(s; a) is to infinitely repeat the following two-step loop:
1. Select a particular state s and a particular action a, observe the immediate reward r for this state-action pair, and observe the resulting state s'.
2. Adjust Q(s, a) according to the following equation:
ΔQ(s, a) = α [ r + γ max_b Q(s', b) - Q(s, a) ]    (5)
where α is the learning rate parameter, and the max operation represents choosing the optimal action b among all possible actions that can be taken in the successor state s' leading to the greatest Q-value. A wide variety of methods may
be used to select state-action pairs in step 1, provided that every state-action
pair is visited infinitely often. For any stationary Markov Decision Problem, the
Q-learning procedure is guaranteed to converge to the correct values, provided
that α is decreased over time with an appropriate schedule.
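A minimal sketch of this tabular update (equation (5)) is shown below; the exploration loop, reward, and successor state here are placeholders, with the pricing-specific definitions illustrated in a later sketch:

```python
import random


def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.5):
    """One Q-learning backup: Q[s][a] += alpha * (r + gamma * max_b Q[s_next][b] - Q[s][a])."""
    Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])


# Example exploration loop over an N-state, N-action lookup table.
N = 51
Q = [[0.0] * N for _ in range(N)]
for _ in range(1000):
    s, a = random.randrange(N), random.randrange(N)   # uniform random state-action pair
    r, s_next = 0.0, a                                # placeholder reward and successor
    q_update(Q, s, a, r, s_next)
```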
We first consider using Q-learning for one of the two sellers in our economic
models, while the other seller maintains a fixed pricing policy. In the simulations
described below the fixed policy is in fact the myoptimal policy p*, represented for example in the Price-Quality model by equations 3 and 4.
In our pricing application, the distinction between states and actions is somewhat
blurred. We will assume that the "state" for each seller is sufficiently described
by the other seller's last price, and that the "action" is the current price decision. This should be a sufficient state description because no other history is needed either for the determination of immediate reward, or for the calculation of the myoptimal price by the fixed-strategy player. We have also modified the concepts of immediate reward r and next state s' for the two-agent case. We define s' as the state that is obtained, starting from s, after one action by the Q-learner and a response action by the fixed-strategy opponent. Likewise, the immediate reward is defined as the sum of the two rewards obtained after those two actions. These modifications were introduced so that the state s' would have the same player to move as state s. (A possible alternative to this, which we have not investigated, is to include the side-to-move as additional information in the state-space description.)
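Under these definitions, a single experience tuple for the Q-learner (seller 1) against a myopic seller 2 can be sketched as follows, using the profit and myopic-policy lookup tables from the earlier sketch; the indexing convention R1[i][j] = R1(p_i, p_j) is an assumption of that sketch:

```python
def two_step_experience(s, a, R1, myopic2):
    """Reward and successor state for the Q-learner (seller 1) taking price index a
    when the myopic seller 2's last price index is s."""
    r_own = R1[a][s]              # profit to seller 1 right after it posts price p_a
    reply = myopic2[a]            # seller 2's myopic response to p1 = p_a
    r_reply = R1[a][reply]        # seller 1's profit after seller 2's reply
    return r_own + r_reply, reply # summed two-step reward; new state is seller 2's price
```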
In the simulations reported below, the sequence of state-action pairs selected
for the Q-table updates were generated by uniform random selection from
amongst all possible table entries. The initial values of the Q-tables were generally
set to the immediate reward values. (Consequently the initial Q-derived
policies corresponded to myoptimal policies.) The learning rate was varied with time according to a decaying schedule in which the initial learning rate α(0) was usually set to 0.1, with the simulation time t measured in units of N^2, the size of the Q-table. (N is the number of possible prices that could be selected by either player.) A number of different values of the discount parameter γ were studied.
Results for single-agent Q-learning in all three models indicated that Q-learning
worked well (as expected) in each case. In each model, for each value of
the discount parameter, exact convergence of the Q-table to a stationary optimal
solution was found. The convergence times ranged from a few hundred sweeps
through each table element, for smaller values of γ, to at most a few thousand updates for the largest values of γ. In addition, once Q-learning converged, we
then measured the expected cumulative profit of the policy derived from the
Q-function. We ran the Q-policy against the other player's myopic policy from
100 random starting states, each for 200 time steps, and averaged the resulting
cumulative profit for each player. We found that, in each case, the seller achieved
greater profit against a myopic opponent by using a Q-derived policy than by
using a myopic policy. (This was true even for γ = 0: due to the redefinition of Q updates as summing over two time steps, the γ = 0 case effectively corresponds to a two-step optimization, rather than the one-step optimization of the myopic policies.) Furthermore, the cumulative profit obtained with the Q-derived policy monotonically increased with increasing γ (as expected).
It was also interesting to note that in many cases, the expected profit of the
myopic opponent also increased when playing against the Q-learner, and also
improved monotonically with increasing γ. The explanation is that, rather than
better exploiting the myopic opponent, as would be expected in a zero-sum game,
the Q-learner instead reduced the region over which it would participate in a
mutually undercutting price war. Typically we find in these models that with
myopic vs. myopic play, large-amplitude price wars are generated that start at
very high prices and persist all the way down to very low prices. When a Q-
learner competes against a myopic opponent, there are still price wars starting
at high prices, however, the Q-learner abandons the price war more quickly as the
prices decrease. The effect is that the price-war regime is smaller and confined
to higher average prices, leading to a closer approximation to cooperative or
collusive behavior, with greater expected utilites for both players.
An illustrative example of the results of single-agent Q-learning is shown in
figure 2. Figure 2(a) plots the average profit for both sellers in the Shopbot
model, when one of the sellers is myopic and the other is a Q-learner. (As the
model is symmetric, it doesn't matter which seller is the Q-learner.) Figure 2(b)
plots the myopic price curve of seller 2 against the Q-derived price curve of seller 1. We can see that both curves have a maximum price of 1
and a minimum price of approximately 0.58. The portion of both curves lying
Fig. 2. Results of single-agent Q-learning in the Shopbot model. (a) Average profit per time step for Q-learner (seller 1, filled circles) and myopic seller (seller 2, open circles) vs. discount parameter γ. Dashed line indicates baseline expected profit when both sellers are myopic. (b) Cross-plot of Q-derived price curve (seller 1) vs. myopic price curve (seller 2). Dashed line and arrows indicate a temporal price-pair trajectory using these policies, starting from the filled circle.
along the diagonal indicates undercutting behavior, in which case the seller will
respond to the opponent's price by undercutting by ε, the price discretization
interval.
The system dynamics for the state (p1, p2) in figure 2(b) can be obtained by alternately applying the two pricing policies. This can be done by a simple iterative graphical construction, in which for any given starting point, one first holds p2 fixed and moves horizontally to the p1(p2) curve, and then holds p1 fixed and moves vertically to the p2(p1) curve. We see in this figure that
the iterative graphical construction leads to an unending cyclic price war, whose
trajectory is indicated by the dashed line. Note that the price-war behavior
begins at the price pair (1, 1), and persists until a price of approximately 0.83.
At this point, seller 1 abandons the price war, and resets its price to 1, leading
once again to another round of undercutting.
The amplitude of this price war is diminished compared to the situation in
which both players use a myopic policy. In that case, seller 1's curve would be a
mirror image of seller 2's curve, and the price war would persist all the way to
the minimum price point, leading to a lower expected profit for both sellers.
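The same iterative construction is easy to express in code; the following sketch (with assumed tabular policies indexed by discrete price, as in the earlier sketches) alternately applies the two policies and stops if a fixed point is reached:

```python
def price_trajectory(policy1, policy2, p1, p2, steps=50):
    """Alternately apply seller 1's and seller 2's policies; return the visited price pairs."""
    history = [(p1, p2)]
    for t in range(steps):
        if t % 2 == 0:
            p1 = policy1[p2]          # seller 1 responds to seller 2's current price index
        else:
            p2 = policy2[p1]          # seller 2 responds to seller 1's current price index
        history.append((p1, p2))
        if len(history) > 2 and history[-1] == history[-3]:
            break                     # neither seller wants to move: a fixed point
    return history
```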
Multi-agent Q-learning
We now examine the more interesting and challenging case of simultaneous training
of Q-functions and policies for both sellers. Our approach is to use the same
formalism presented in the previous section, and to alternately adjust a random
entry in seller 1's Q-function, followed by a random entry in seller 2's Q-function.
As each seller's Q-function evolves, the seller's pricing policy is correspondingly
updated so that it optimizes the agent's current Q-function. In modeling the
two-step payoff r to a seller in equation 5, we use the opponent's current policy
as implied by its current Q-function. The parameters in the experiments below
were generally set to the same values as in the previous section. In most of the
experiments, the Q-functions were initialized to the instantaneous payoff values
(so that the policies corresponded to myopic policies), although other initial
conditions were explored in a few experiments.
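A compact sketch of this simultaneous training loop is given below; it reuses the tabular conventions of the earlier sketches, and the initialization, learning rate, and sweep count are illustrative assumptions rather than the paper's exact settings:

```python
import random


def greedy_policy(Q):
    """For each state, the action with the highest Q-value."""
    return [max(range(len(row)), key=row.__getitem__) for row in Q]


def train_simultaneously(R1, R2, N, sweeps=2000, alpha=0.1, gamma=0.5):
    # States are the other seller's last price index; tables start at immediate rewards.
    Q1 = [[R1[a][s] for a in range(N)] for s in range(N)]
    Q2 = [[R2[s][a] for a in range(N)] for s in range(N)]
    for _ in range(sweeps):
        pol1, pol2 = greedy_policy(Q1), greedy_policy(Q2)
        # Update a random entry of seller 1's table, modeling seller 2 by its current policy.
        s, a = random.randrange(N), random.randrange(N)
        reply = pol2[a]
        r = R1[a][s] + R1[a][reply]
        Q1[s][a] += alpha * (r + gamma * max(Q1[reply]) - Q1[s][a])
        # Symmetric update for seller 2, modeling seller 1 by its current policy.
        s, a = random.randrange(N), random.randrange(N)
        reply = pol1[a]
        r = R2[s][a] + R2[reply][a]
        Q2[s][a] += alpha * (r + gamma * max(Q2[reply]) - Q2[s][a])
    return Q1, Q2
```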
Fig. 3. Results of simultaneous Q-learning in the Price-Quality model. (a) Average
profit per time step for seller 1 (solid diamonds) and seller 2 (open diamonds) vs.
discount parameter γ. Dashed line indicates baseline myopic vs. myopic expected profit.
Note that seller 2's profit is higher than seller 1's, even though seller 2 has a lower
quality parameter. (b) Cross-plot of Q-derived price curves (at any γ). Dashed line
and arrows indicate a sample price dynamics trajectory, starting from the filled circle.
The price war is eliminated and the dynamics evolves to a fixed point indicated by an
open circle.
For simultaneous Q-learning in the Price-Quality model, we find robust convergence
to a unique pair of pricing policies, independent of the value of γ, as
illustrated in figure 3(b). This solution also corresponds to the solution found by
generalized minimax and by generalized DP in (Tesauro and Kephart, 1999). We
note that repeated application of this pair of price curves leads to a dynamical
trajectory that eventually converges to a fixed point located at (p1, p2) = (0.9, 0.4). A detailed analysis of these pricing policies and the fixed-point solution is
presented in (Tesauro and Kephart, 1999). In brief, for sufficiently low prices
of seller 2, it pays seller 1 to abandon the price war and to charge a very high
price, 0:9. The value of then corresponds to the highest price
that seller 2 can charge without provoking an undercut by seller 1, based on
a two-step lookahead calculation (seller 1 undercuts, and then seller 2 replies
with a further undercut). We note that this fixed point does not correspond to
a Nash equilibrium, since both players have an incentive to deviate, based on
a one-step lookahead calculation. It was conjectured in (Tesauro and Kephart,
1999) that the solution observed in figure 3(b) corresponds to a subgame-perfect
equilibrium (Fudenberg and Tirole, 1991) rather than a Nash equilibrium.
The cumulative profits obtained by the pair of pricing policies are plotted in
figure 3(a). It is interesting that seller 2, the lower-quality seller, actually obtains
a significantly higher profit than seller 1, the higher-quality seller. In contrast,
with myopic vs. myopic pricing, seller 2 does worse than seller 1.
Fig. 4. Results of simultaneous Q-learning in the Shopbot model. (a) Average profit per time step for seller 1 (solid diamonds) and seller 2 (open diamonds) vs. discount parameter γ. Dashed line indicates baseline myopic vs. myopic expected profit. (b) Cross-plot of Q-derived price curves at a small value of γ; the solution is symmetric. Dashed line and arrows indicate a sample price dynamics trajectory. (c) Cross-plot of Q-derived price curves at γ = 0.9; the solution is asymmetric.
In the Shopbot model, we did not find exact convergence of the Q-functions
for each value of γ. However, in those cases where exact convergence was not
found, we did find very good approximate convergence, in which the Q-functions
and policies converged to stationary solutions to within small random fluctua-
tions. Different solutions were obtained at each value of γ. We generally find that a symmetric solution, in which the shapes of p1(p2) and p2(p1) are identical, is obtained at small γ, whereas a broken-symmetry solution, similar to the Price-Quality solution, is obtained at large γ. We also found a range of γ
values, between 0.1 and 0.2, where either a symmetric or asymmetric solution
could be obtained, depending on initial conditions. The asymmetric solution was
counter-intuitive to us, because we expected that the symmetry of the two sell-
ers' profit functions would lead to a symmetric solution. In hindsight, we can
apply the same type of reasoning as in the Price-Quality model to explain the
asymmetric solution. A plot of the expected profit for both sellers as a function
of γ is shown in figure 4(a). Plots of the symmetric and asymmetric solutions, obtained at small and large γ respectively, are shown in figures 4(b) and 4(c).
Fig. 5. Results of multi-agent Q-learning in the Information-Filtering model. (a) Average profit per time step for seller 1 (solid diamonds) and seller 2 (open diamonds) vs. discount parameter γ. (The data points at γ > 0.5 correspond to unconverged Q-functions and policies.) Dashed lines indicate baseline expected profit when both sellers are myopic. (b) Cross-plot of Q-derived price curves at γ = 0.5.
Finally, in the Information-Filtering model, we found that simultaneous Q-learning
produced exact or good approximate convergence for small values of γ (γ ≤ 0.5). For large values of γ, no convergence was obtained. The
simultaneous Q-learning solutions yielded reduced-amplitude price wars, and
monotonically increasing profitability for both sellers as a function of γ, at least up to γ = 0.5. A few data points were examined at γ > 0.5, and even though
there was no convergence, the Q-policies still yielded greater profit for both
sellers than in the myopic vs. myopic case. A plot of the Q-derived policies and
system dynamics for γ = 0.5 is shown in figure 5(b). The expected profits for both players as a function of γ are plotted in figure 5(a).
Conclusions
We have examined single-agent and multi-agent Q-learning in three models of
a two-seller economy in which the sellers alternately take turns setting prices,
and then instantaneous profits are given to both sellers based on the current
price pair. Such models fall into the category of two-player, alternating-turn,
arbitrary-sum Markov games, in which both the rewards and the state-space
transitions are deterministic. The game is Markov because the state space is
fully observable and the rewards are not history dependent.
In all three models (Price-Quality, Information-Filtering, and Shopbot), large-amplitude
cyclic price wars are obtained when the sellers myopically optimize
their instantaneous profits without regard to longer-term impact of their pricing
policies. We find that, in all three models, the use of Q-learning by one of
the sellers against a myopic opponent invariably results in exact convergence
to the optimal Q-function and optimal policy against that opponent, for all allowed
values of the discount parameter γ. The use of the Q-derived policy yields
greater expected profit for the Q-learner, with monotonically increasing profit
as γ increases. In many cases, it has a side benefit of also enhancing the welfare
of the myopic opponent. This comes about by reducing the amplitude of the
undercutting price-war regime, or in some cases, eliminating it completely.
We have also studied the more interesting and challenging situation of simultaneously
training Q-functions for both sellers. This is more difficult because
as each seller's Q-function and policy change, it provides a non-stationary environment
for adaptation of the other seller. No convergence proofs exist for such
simultaneous Q-learning by multiple agents. Nevertheless, despite the absence
of theoretical guarantees, we do find generally good behavior of the algorithm
in our model economies. In two of the models (Shopbot and Price-Quality),
we find exact or very good approximate convergence to simultaneously self-consistent
Q-functions and optimal policies for any value of γ, whereas in the
Information-Filtering model, simultaneous convergence was found for γ ≤ 0.5.
In the Information-Filtering and Shopbot models, monotonically increasing expected
profits for both sellers were also found for small values of γ. In the
Price-Quality model, simultaneous Q-learning yields an asymmetric solution,
corresponding to the solution found in (Tesauro and Kephart, 1999), that is
highly advantageous to the lesser-quality seller, but slightly disadvantageous to
the higher-quality seller, when compared to myopic vs. myopic pricing. A similar
asymmetric solution is also found in the Shopbot model for large γ, even though
the profit functions for both players are symmetric.
For each model, there exists a range of discount parameter values where
the solutions obtained by simultaneous Q-learning are self-consistently optimal,
and outperform the solutions obtained in (Tesauro and Kephart, 1999). This
is presumably because the previously published methods were based on limited
lookahead, whereas the Q-functions in principle look ahead infinitely far, with
appropriate discounting.
It is intriguing that simultaneous Q-learning works well in our models, despite
the lack of theoretical convergence proofs. Sandholm and Crites also found that
simultaneous Q-learning generally converged in the Iterated Prisoner's Dilemma
game. These empirical findings suggest that a deeper theoretical analysis of
simultaneous Q-learning may be worth investigating. There may be some underlying
theoretical principles that can explain why simultaneous Q-learning works,
for at least certain classes of arbitrary-sum profit functions.
Several important challenges will also be faced in extending our approach to
larger-scale, more realistic simulations. While there are some economic situations
in the real world where there are only two dominant sellers, in general the number
of sellers can be much greater. The situation that we foresee in agent economies
is that the number of competing sellers will be very large. In this case, the seller
profits and pricing functions will have such high input dimensionality that it
will be infeasible to use lookup table state-space representations, and most likely
some sort of compact representation combined with a function approximation
scheme will be necessary. Furthermore, with many sellers, the concept of sellers
taking turns adjusting their prices in a well-defined order becomes problematic.
This could lead to an additional combinatorial explosion, if the mechanism for
calculating expected reward has to anticipate all possible orderings of opponent
responses.
Furthermore, while our economic models have a moderate degree of realism
in their profit functions, they are unrealistic in the assumptions of knowledge
and dynamics. In the work reported here, the state space was fully observable,
arbitrarily frequently, at zero cost and with no propagation delays. The expected
consumer demand for a given price pair was instantaneous, deterministic and
fully known to both players. Indeed, the players' exact profit functions were
fully known to both players. It was also assumed that the players would take
turns equally often, in a well-defined alternating order, when adjusting their prices.
Under such assumptions of knowledge and dynamics, one could hope to develop
an algorithm that could calculate in advance something like a game-theoretic
optimal pricing algorithm for each agent.
However, in realistic agent economies, it is likely that agents will have much
less than full knowledge of the state of the economy. Agents may not know the
details of other agents' profit functions, and indeed an agent may not know its
own profit function, to the extent that buyer behavior is unpredictable. The
dynamics of buyers and sellers may also be more complex, random and unpredictable
than what we have assumed here. There may also be information delays
for both buyers and sellers, and part of the economic game may involve paying
a cost in order to obtain information about the state of the economy faster and
more frequently, and in greater detail. Finally, we expect that buyer behavior
will be non-stationary, so that there will be a more complex co-evolution of buyer
and seller strategies.
While such real-world complexities are daunting, there are reasons to believe
that learning approaches such as Q-learning may play a role in practical solu-
tions. The advantage of Q-learning is that one does not need a model of either
the instantaneous payoffs or of the state-space transitions in the environment.
One can simply observe actual rewards and transitions and base learning on
that. While the theory of Q-learning requires exhaustive exploration of the state
space to guarantee convergence, this may not be necessary when function approximators
are used. In that case, after training a function approximator on a
relatively small number of observed states, it may then generalize well enough on
the unobserved states to give decent practical performance. Several recent empirical
studies have provided evidence of this (Tesauro, 1995; Crites and Barto,
1996; Zhang and Dietterich, 1996).
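To make the preceding point concrete, the following is a minimal sketch of the
model-free tabular Q-update being referred to; the dictionary-based state/action
encoding, parameter values, and function name are illustrative only and are not
the pricebot implementation used in this work.

    def q_update(Q, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
        # Q maps (state, action) pairs to values; unseen pairs default to 0.
        best_next = max(Q.get((s_next, a2), 0.0) for a2 in actions)
        target = reward + gamma * best_next     # observed reward + discounted lookahead
        Q[(s, a)] = (1.0 - alpha) * Q.get((s, a), 0.0) + alpha * target

The point is that only observed rewards and transitions enter the update; no model
of demand or of the opponent is required.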
Acknowledgements
The authors thank Amy Greenwald for helpful discussions regarding the Shopbot
model.
--R
"Improving elevator performance using reinforcement learning."
Game Theory.
"Shopbots and pricebots."
"Multiagent reinforcement learning: theoretical frame-work and an algorithm."
"Price-war dynamics in a free-market economy of software agents."
A Course in Microeconomic Theory.
"Markov games as a framework for multi-agent reinforcement learn- ing,"
"Dynamics of price and quality differentiation in information and computational markets."
"On multiagent Q-Learning in a semi-competitive domain."
"Temporal difference learning and TD-Gammon."
"Foresight-based pricing algorithms in an economy of software agents."
"Foresight-based pricing algorithms in agent economies."
"Learning nested agent models in an information economy,"
"Learning from delayed rewards."
"Q-learning."
"High-performance job-shop scheduling with a time-delay TD(-) network."
--TR
--CTR
Prithviraj (Raj) Dasgupta , Yoshitsugu Hashimoto, Multi-Attribute Dynamic Pricing for Online Markets Using Intelligent Agents, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.277-284, July 19-23, 2004, New York, New York
Simon Parsons , Michael Wooldridge, Game Theory and Decision Theory in Multi-Agent Systems, Autonomous Agents and Multi-Agent Systems, v.5 n.3, p.243-254, September 2002
Leigh Tesfatsion, Agent-Based Computational Economics: Growing Economies From the Bottom Up, Artificial Life, v.8 n.1, p.55-82, March 2002
Cooperative Multi-Agent Learning: The State of the Art, Autonomous Agents and Multi-Agent Systems, v.11 n.3, p.387-434, November 2005 | agent economies;reinforcement learning;adaptive multi-agent systems;machine learning |
608736 | Unified Interprocedural Parallelism Detection. | In this paper, we outline a new way of detecting parallelism interprocedurally within a program. Our method goes beyond mere dependence testing, to embrace methods of removing dependences as well, namely privatization, induction recognition and reduction recognition. This method is based on a combination of techniques: a universal form for representing memory accesses within a section of code (the Linear Memory Access Descriptor), a technique for classifying memory locations based on the accesses made to them by the code (Memory Classification Analysis), and a dependence test (the Access Region Test). The analysis done with Linear Memory Access Descriptors is based on an intersection operation, for which we present an algorithm. Linear Memory Access Descriptors are independent of any declarations that may exist in a program, so they are subroutine- and language-independent. This makes them ideal for use in interprocedural analysis. Our experiments indicate that this test is highly effective for parallelizing loops containing very complex subscript expressions. | Introduction
Modern computer architectures, with ever-faster processors, make it
increasingly important for parallelizing compilers to do their analysis
interprocedurally. A compiler that parallelizes only intraprocedurally
is confined to parallelizing loops in the leaf nodes of a call graph.
There are, quite often, not enough operations in the leaf nodes to
make parallelization pay off. For loop parallelization within the shared
memory model, the compiler should parallelize at the highest level in
the call graph where parallelization is possible, to overcome parallel
loop overhead costs. In addition, interprocedural dependence analysis
is essential for producing SPMD message passing code from a serial
program.
This work is supported in part by Army contract DABT63-95-C-0097; Army
contract N66001-97-C-8532; NSF contract MIP-9619351; and a Partnership Award
from IBM and the KAIST seed grant program. This work is not necessarily
representative of the positions of the Army or the Government.
Traditional dependence testing has been developed without regard
to its applicability across procedure boundaries. All pairs of memory
references which may access the same memory location within a loop
are compared. These memory references occur at discrete points within
the loop, thus we say that these methods are point-to-point dependence
tests. Point-to-point tests require O(n^2) comparisons, where n
is the number of memory references to a particular array within the
loop. Obviously, as the position within the call graph gets further
from the leaves, the number n can grow, and this growth can cause
interprocedural point-to-point dependence testing to get unwieldy.
These considerations have motivated us to take a different approach
to dependence testing: the memory accesses within a program section
are summarized, then the summaries are intersected to determine dependence
between the sections. Through a series of experiments, we
have found that this approach not only reduces the number of comparisons
for dependence testing, but also allows us to handle very complex
array subscript expressions.
This paper is organized as follows. After discussing previous related
work in Section 2, we will continue our discussion in Section 3 by
showing how we summarize the memory access activity of an arbitrary
program section. Then, in Section 4, we will describe a novel
notation which is practical for summarizing the various complex array
accesses encountered in many scientific programs, and in Section 5,
we show how to use access summaries stored in this notation
to perform multiple-subscript, interprocedural, summary-based dependence
testing. To evaluate the effectiveness of our dependence test,
we implemented it in the Polaris [4] compiler and experimented with
actual codes from the Perfect, SPEC, and NASA benchmark suites. The
experimental results presented in Section 6 show that our test holds
promise for detecting more parallelism in real codes than other
tests.
2. Previous Work
2.1. Intraprocedural Dependence Testing
Most point-to-point dependence testing methods rely on an equation-
solving paradigm, where the pair of subscript expressions for two array
reference sites being checked for dependence are equated. Then an attempt
is made to determine whether the equation can have a solution,
subject to constraints on the values of variables in the program, such
as loop indices. In the general case, a system of linear relations is built
and a solution is attempted with a linear system solver to determine if
the same memory location can be accessed in different loop iterations.
Two of the earliest point-to-point dependence tests were the GCD
Test and the Banerjee Test [2]. In practice, these were simple, efficient,
and successful at determining dependence, since most subscript expressions
occurring in real scientific programs are very simple. However, the
simplicity of these tests results in some limitations. For instance, they
are not effective for determining dependence for multidimensional arrays
with coupled subscripts, as stated in [9]. Several multiple-subscript
tests have been developed to overcome this limitation: the multidimensional
GCD Test [2], the λ-test [12], the Power Test [21], and the Delta Test.
The above tests are exact in commonly occurring special cases, but
in some cases are still too conservative. The Omega Test [16] provides
a more general method, based on sets of linear constraints, capable of
handling dependence problems as integer programming problems.
All of the just-mentioned tests have the common problem that they
cannot handle subscript expressions which are non-affine. Non-affine
subscript expressions occur in irregular codes (subscripted-subscript
access), in FFT codes (subscripts frequently involving 2^I), and as a
result of compiler transformations (induction variable closed forms and
inline expansion of subroutines). To solve this problem, Pugh et al. [17]
enhanced the Omega test with techniques for replacing the non-affine
terms in array subscripts with symbolic variables. This technique does
not work in all situations, however. The Range Test [3, 4] was built
to provide a better solution to this problem. It handles non-affine subscript
expressions without losing accuracy. Overall, the Range Test is
almost as effective as the Omega Test and sometimes outperforms it,
due mainly to its accuracy for non-affine expressions [3]. One critical
drawback of the Range Test is that it is not a multiple-subscript test,
and so is not effective for handling coupled subscripts.
2.2. Interprocedural Summarization Techniques
Interprocedural dependence testing demands new capabilities from dependence
tests. Point-to-point testing becomes unwieldy across procedure
boundaries, and so has given way to dependence testing using
summaries of the accesses made in subroutines. The idea of using access
summaries for dependence analysis was previously proposed by several
researchers such as Balasundaram, et al.[1] and Tang [18]. Also, the
Range Test, though it is a point-to-point test, uses summarized range
information for variables, obtained through abstract interpretation of
the program.
To perform accurate dependence analysis with access summaries, the
compiler needs some standard notation in which the information about
array accesses is summarized and stored for its dependence analyzer.
Several notations have been developed and used for dependence analysis
techniques. Most notable are triplet notation [3, 10, 19] and sets
of linear constraints [1, 7, 18]. However, as indicated in [11], existing
dependence analysis techniques have deficiencies directly traceable to
the notations they used for access summarization. Triplet notation is
simple to work with, but not rich enough to store all possible access
patterns. Linear constraints are more general, but cannot precisely
represent the access patterns due to non-affine subscript expressions,
and require much more complex operations.
So, clearly there is room for a new dependence test and a new
memory access representation, to overcome the limitations of existing
techniques.
2.3. Parallelism Detection
While dependence testing has been studied exhaustively, a topic which
has not been adequately addressed is a unified method of parallelism
detection, which not only finds dependences, but also categorizes them for
easy removal with important compiler transformations.
Eigenmann, et al [8] studied a set of benchmark programs and
determined that the most important compiler analyses needed to parallelize
them were array privatization, reduction and induction (idiom)
analysis, and dependence analysis for non-affine subscript expressions,
and that all of those must be done in the presence of strong symbolic
interprocedural analysis.
The need for improved analysis and representational techniques prompts
us to go back to first principles, rethink what data dependence means,
and ask whether dependence analysis can be done with compiler
transformations in mind.
The key contribution of this paper is the description of a general
interprocedural parallelism detection technique. It includes a general
dependence analysis technique, described in Section 5, called the Access
Region Test (ART). The ART is a multiple-subscript, interprocedural,
summary-based dependence test, combining privatization and idiom
recognition. It represents memory locations in a novel form called an
Access Region Descriptor (ARD)[11], described in Section 4, based on
the Linear Memory Access Descriptor of [14].
3. Memory Classification Analysis
In this section, we formulate data dependence analysis in terms of a
scheme of classifying memory locations, called Memory Classification
Analysis (MCA), based on the order and type of the accesses within a
section of code. The method of classifying memory locations is a general
one, based on abstract interpretation [5, 6] of a program, and may be
used for purposes other than dependence analysis.
The traditional notion of data dependence is based on classifying
the relationship between two accesses to a single memory location. The
operation performed (Read or Write) and the order of the accesses determine
the type of the dependence. A data dependence arc is a directed
arc from an earlier instruction (the source) to a later instruction (the
sink), both of which access a single memory location in a program. The
four types of arcs are determined as shown in Table I.
Table I. Traditional data dependence definition.

    Dependence Type    Input   Flow    Anti    Output
    Earlier access     Read    Write   Read    Write
    Later access       Read    Read    Write   Write
Input dependences can be safely ignored when doing parallelization.
Anti and output dependences (also called memory-related dependences)
can be removed by using more memory, usually by privatizing the
memory location involved. Flow dependences (also called true dependences)
can sometimes be removed by transforming the original code
through techniques such as induction variable analysis and reduction
analysis [20].
A generalized notion of data dependence between arbitrary sections
of code can be built by returning to first principles. Instead of considering
a single instruction as a memory reference point, we can consider an
arbitrary sequence of instructions as an indivisible memory referencing
unit. The only thing we require is that the memory referencing unit
be executed entirely on a single processor. We refer to this memory
referencing unit as a dependence grain.
DEFINITION 1. A section of code representing an indivisible, sequentially
executed unit, serving as the source or sink of a dependence arc
in a program, will be called a dependence grain.
This definition of dependence grain corresponds to the terms coarse-
and fine-grained analysis, which refer to using large and small dependence
grains, respectively.
If we want to know whether two dependence grains may be executed
in parallel, then we must do dependence analysis between the grains.
Since a single grain may access the same memory address many times,
we must summarize the accesses in some useful way and relate the type
and order of the summaries to produce a representative dependence arc
between the two grains.
DEFINITION 2. A representative dependence arc is a single dependence
arc showing the order in which two dependence grains must be
executed to preserve the sequential semantics of the program. A single
representative dependence arc summarizes the information which would
be contained in multiple traditional dependence arcs between single instructions.
For medium- and coarse-grain parallelization, there can be many
accesses to a single memory location within each dependence grain.
Instead of keeping track of the dependences between all possible pairs
of references which have a reference site in each grain (as in point-to-point
testing), it is desired to represent the dependence relationship
between the two grains, for an individual memory location, with a single
representative dependence arc.
There are many possible ways to summarize memory accesses. The
needs of the analysis and the desired precision determine which way
is best. To illustrate this idea, the next two sections show two ways of
summarizing accesses: the simple, but low-precision read-only summary
scheme and the more useful write-order summary scheme.
3.1. The Read-only Summary Scheme
It is possible to define a representative dependence such that it carries
all of the dependence information needed for the potential parallelization
of the two grains. When no dependence exists between any pair
of memory references in the two grains, neither should a representative
dependence exist. When two or more accesses to a memory location
exist in a grain, we must simply find a way to assign an aggregate
access type to the group, so that we can determine the representative
dependence in a way which retains the information we need for making
parallelization decisions.
Consider two grains which execute in the serial form of a program,
one before the other. One consistent way to summarize dependence
(for a single memory location) between the two grains is to determine
whether the accesses are read-only in each grain, and define dependence
as in Table II. We call this the read-only summary scheme.

Table II. One possible representative dependence definition - the read-only
summary scheme.

    Dependence Type:      Input   Flow   Anti   Output
    earlier Read-Only?    Yes     No     Yes    No
    later Read-Only?      Yes     Yes    No     No
Figure 1 illustrates dependence summarization with the read-only
summary scheme. When an input dependence exists between two grains,
it can be ignored. When a flow dependence exists between grains, in
general the grains must be serialized.

Figure 1. Dependence between grains depends on whether the two grains are
read-only. The situation on the right shows a case where A can be privatized in
the later grain, eliminating the output dependence.
When an anti dependence exists between grains, it means that only
reads happen in one grain, followed by at least one write in the other.
An output dependence means that at least one write occurs in both
grains. In both anti and output dependence situations, if a write to the
location is the first access in the later dependence grain, then it would
be possible to run the grains in parallel by privatizing the variable in
the later grain.
However, in the read-only summary scheme we don't keep enough
information in the summary to determine whether a write happened
first in the later grain or not, so we would miss the opportunity to
parallelize by privatization. This shows that while read-only summarizing
can detect dependences, it does not classify the dependences
clearly enough to allow us to eliminate the dependence by compiler
transformations. We will derive a better scheme in the next section.
3.2. The Write-order Summary Scheme
When the dependence grains are loop iterations, there exists a special
case of the more general problem in that a single section of code represents
all dependence grains. This fact can be used to simplify the
dependence analysis task.
If we were still using read-only summarization and doing loop-based
dependence testing, there would no longer be four cases, just two.
The iteration is either read-only or it is not. However, to be able to
differentiate between the anti and output dependences which can be
removed by privatization and those which cannot, the case where it is
not read-only can also be divided into two cases: one where a write is
the first access to the location (WriteFirst) and one where a read is
the first access (ReadWrite). This gives three overall classes, shown in
Table III.
Table III. Loop-based representative dependence table.

    Access            ReadOnly   ReadWrite   WriteFirst
    Dependence Type   Input      Flow        Anti/Output
When an iteration only reads the location, dependence can be characterized
as an Input dependence (and ignored). When the iteration
reads the location, then writes it, the variable cannot be privatized.
This results in a dependence which cannot be ignored and cannot be
removed by privatization, so it will be called a Flow dependence. When
an iteration writes the location first, any value in the location when
the iteration starts is immediately overwritten, so the variable can be
privatized. Since these dependences can be removed by privatization,
they will be called memory-related dependences.
Since privatization can be done in the memory-related dependence
case, and that case is signaled when a write is the first access, all we
need to do to identify these cases is to keep track of the case when
a location is written first. The input and flow dependence cases are
characterized by a read happening first, and differentiated by whether a
write occurs later or not. We call this the write-order summary scheme.
It makes sense to use the write-order summarization scheme for the
general case as well as for loops. Any locations which are read-only
in both grains would correspond to an input dependence, those which
are write-first in the later grain would correspond to a memory-related
dependence (since the location is written first in the later grain, the later grain
need not wait for any value from the earlier grain), and all others would
correspond to a flow dependence. This is illustrated in Table IV.
Table IV. A more effective way to classify dependences between two arbitrary
dependence grains, using the classes ReadOnly, WriteFirst and ReadWrite -
the write-order summary scheme.

                          later ReadOnly   later WriteFirst   later ReadWrite
    earlier ReadOnly      Input            Anti/Output        Flow
    earlier WriteFirst    Flow             Anti/Output        Flow
    earlier ReadWrite     Flow             Anti/Output        Flow
So, the read-only summary scheme could serve as a dependence test,
while the write-order summary scheme can detect dependence as well as
provide the additional information necessary to remove dependences by
a privatization transformation. As we will see in Section 5.4, a few simple
tests added to the write-order summary scheme can collect enough
information to allow some dependences to be removed by induction and
reduction transformations.
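As an illustration of how the write-order classes determine the representative
dependence, the following small sketch simply encodes Table IV for a single
memory location (the function and class names are ours and are not part of any
particular implementation):

    def representative_dependence(earlier, later):
        # earlier, later: classification of the location in each grain,
        # one of "RO" (ReadOnly), "WF" (WriteFirst), "RW" (ReadWrite)
        if later == "RO":
            return "input" if earlier == "RO" else "flow"
        if later == "WF":
            return "anti/output"   # removable by privatizing in the later grain
        return "flow"              # the later grain reads before writing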
3.3. Establishing an Order Among Accesses
Knowing the order of accesses is crucial to the write-order summarization
scheme, so we must establish an ordering of the accesses within the
program. If a program contained only straight-line code, establishing
an ordering between accesses would be trivial. One could simply sweep
through the program in "execution order", keeping track of when the
accesses happen. But branching statements and unknown variables
make it more difficult to show that one particular access happens before
another.
For example, take the loop in Figure 2. The write to A(I) happens
before the read of A(I) only if both P and Q are true. But if Q is true and
P is false, then the read happens without the write having happened
first. If P and Q have values which are unrelated, then the compiler has
no way of knowing the ordering of the accesses to A in this loop. On
the other hand, if the compiler can show that P and Q are related and
that in fact Q being true implies that P must have also been true, the
compiler can know that the write happened first. So, for code involving
conditional branches, the major tool the compiler has in determining
the ordering of the accesses is logical implication.
To facilitate the use of logical implication to establish execution
order, the representation of each memory reference must potentially
have an execution predicate attached to it. In fact, the access in Figure 2
could be classified as ReadOnly with the condition {¬P ∧ Q}, WriteFirst
with condition {P}, and ReadWrite otherwise.

    for (I = ...) {
        if (P) A(I) = ...
        if (Q) ... = A(I)
    }

Figure 2. Only through logical implication can the compiler determine the ordering
of accesses to array A in the I-loop.
DEFINITION 3. The execution predicate is a boolean-valued expression,
attached to the representation of a memory reference, which
specifies the condition under which the reference actually takes place.
An execution predicate P will be denoted as {P}.
3.4. Using Summary Sets to Store Memory Locations
We can classify a set of memory locations according to their access
type by adding a symbolic representation of them to the appropriate
summary set.
DEFINITION 4. A summary set is a symbolic description of a set
of memory locations.
We have chosen to use Access Region Descriptors (ARDs), described
in Section 4, to represent memory accesses within a summary set. Representing
memory accesses for use in the write-order summary scheme
according to Table III requires three summary sets for each dependence
grain: ReadOnly (RO), ReadWrite (RW), and WriteFirst (WF).
3.5. Classification of Memory References
Each memory location referred to in the program must be entered into
one of these summary sets, in a process called classification. A program
is assumed to be a series of nested elementary contexts: procedures,
simple statements, if statements, loops, and call statements. (If the
programming language does not force this through its structure, the
program will have to be transformed into that form through a
normalization process.) Thus, at every point in the program, there will
be an enclosing context and an enclosed context.

Figure 3. The intersection of earlier ReadOnly accesses with later WriteFirst
accesses - the result is a ReadWrite set.
The contexts are traversed in "execution order". The summary sets
of the enclosing context are built by (recursively) calculating the summary
sets for each enclosed context and distributing them into the
summary sets of the enclosing context. We can determine memory locations
in common between summary sets by an intersection operation,
as illustrated in Figure 3.
Classification takes as input the current state of the three summary
sets for the enclosing context (RO, WF, and RW) and the three new
summary sets for the last enclosed context which was processed (RO_n,
WF_n, and RW_n), and produces updated summary sets for the enclosing
context. The sets for the enclosed context are absorbed in a way which
maintains proper classification for each memory location. For example,
a memory location which was RO in the enclosing context (up to this
point) and is WF or RW in the newly-calculated enclosed context becomes
RW in the updated enclosing context. The steps of classification
can be expressed in set notation, as shown in Figure 4.
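As a concrete restatement of this rule (our own per-location sketch; Figure 4
gives the paper's set-notation form over whole summary sets), the update of a
single memory location's class can be written as:

    def classify_location(old, new):
        # old: class of the location in the enclosing context so far
        #      ("RO", "WF", "RW", or None if not yet accessed)
        # new: class of the location in the just-processed enclosed context
        if old is None:
            return new
        if old == "RO" and new in ("WF", "RW"):
            return "RW"            # a read happened before the first write
        return old                 # WF and RW persist; RO followed by RO stays RO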
3.5.1. Program Context Classification
Simple statements are classified in the obvious way, according to the
order of their reads and writes of memory. All statements within an if
context are classified in the ordinary way, except that the if-condition P
is applied as an execution predicate to the statements in the if block
and ¬P is applied to the statements in the else block. Descriptors
for the if and else blocks are then intersected and their execution
predicates are or'ed together, to produce the result for the whole if
context.
Classifying the memory accesses in a loop is a two-step process.
First, the summary sets for a single iteration of the loop must be
collected by a scan through the loop body in execution order. They
contain the symbolic form of the accesses, possibly parameterized by
the index of the loop. Next, the summary sets must be expanded by
the loop index, so that at the end of the process the sets represent the
locations accessed during the entire execution of the loop.

Figure 4. Classification of new summary sets RO_n, WF_n, and RW_n into the existing
summary sets RO, WF, and RW, and a pictorial example of adding new summary
sets to existing summary sets.
The expansion process can be illustrated by the following loop:

    do I = 1, 100
       A(I) = ...
    end do

For a single iteration of the surrounding loop, the location A(I) is
classified WriteFirst. When A(I) is expanded by the loop index I, the
representation A(1:100) results. Summary sets for while loops can be
expanded similarly, but we must use a basic induction variable as a
loop index and represent the number of iterations as "unknown". This
expansion process makes it possible to avoid traversing the back-edges
of loops for classification.
Classification for a call statement involves first the calculation of
the access representation for the text of the call statement itself, calculation
of the summary sets for the procedure being called, matching
formal with actual parameters, and finally translating the summary
sets involved from the called context to the calling context (described
further for ARDs in Section 4.3).
4. The Access Region Descriptor
To manipulate the array access summaries for dependence analysis, we
needed a notation which could precisely represent a collection of memory
accesses. As briefly mentioned in Section 2, our previous study [11]
gave us a clear picture of the strengths and weaknesses of existing
notations. It also gave us the requirements the notation should meet to
support efficient array access summarization.
- Complex array subscripts should be represented accurately. In
  particular, non-affine expressions should be handled, because time-critical
  loops in real programs often contain array references with
  non-affine subscripts.
- The notation should have simplification operations defined for it,
  so that complex accesses can be changed to a simpler form.
- To facilitate fast and accurate translation of access summaries
  across procedure boundaries, non-trivial array reshaping at a procedure
  boundary should be handled efficiently and accurately.
- The notation should provide a uniform means for representing
  accesses to memory, regardless of the declared shape of the data
  structures in the source code.
To meet these requirements, we introduced a new notation, called
the Access Region Descriptor, which is detailed in the previous literature
[11]. The ARD is derived from the linear memory access descriptor
introduced in [13] and [14]. To avoid repetition, this section will only
briefly discuss a few basics of the ARD necessary to describe our
dependence analysis technique in Section 5.
4.1. Representing the Array Accesses in a Loop Nest
If an array is declared as an m-dimensional array and is then referenced in
the program with an array name followed by a list of m subscripting
expressions inside a nest of d loops, as in Figure 5, then implicit in this
notation is an array subscripting function F_m which translates the array
reference into a set of offsets from a base address in memory. Its arguments
are the set of loop indices i_1, ..., i_d of the surrounding nested loops and
a set of constants determined by the rules of the programming language.

Figure 5. An m-dimensional array reference in a d-loop nest.
As the nested loop executes, each loop index i_k moves through its
set of values, and the subscripting function F_m generates a sequence of
offsets from the base address, which we call the subscripting offset
sequence.
The isolated effect of a single loop index on F_m is a sequence of
offsets which can always be precisely represented in terms of
- its starting value,
- the expression representing the difference between two successive
  values, and
- the total number of values in the sequence.
For example, consider even a non-affine subscript expression:

    real A(0:*)
    do I = 1, N
       ... A(2**I) ...
    end do

The subscripting offset sequence is

    2, 4, 8, ..., 2**I, ..., 2**N.

The difference between two successive values can be easily expressed.
To be clear, the difference is defined to be the expression that must be added
to the I-th member of the sequence to produce the (I+1)-th member of the
sequence. There are N members of the subscripting offset sequence, they start at
2, and the difference between successive members is 2**I.
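The following small sketch enumerates this offset sequence and its successive
differences for the reference A(2**I) in the example above (the loop bound N
is chosen arbitrarily for illustration):

    N = 6                                            # arbitrary loop bound
    offsets = [2**i for i in range(1, N + 1)]        # 2, 4, 8, ..., 2**N
    diffs = [offsets[i + 1] - offsets[i] for i in range(N - 1)]
    # The difference added to the I-th member is 2**I:
    assert all(d == 2**(i + 1) for i, d in enumerate(diffs))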
4.2. Components of an ARD
We refer to the subscripting offset sequence generated by an array
reference, due to a single loop index, as a dimension of the access.
We call this a dimension of an ARD.
DEFINITION 5. A dimension of an ARD is a representation of the
subscripting offset sequence for a set of memory references. It contains
- a starting value, called the base offset,
- a difference expression, called the stride, and
- the number of values in the sequence, represented as a dimension
  index, taking on all integer values between 0 and a dimension-index
  bound value.
Notice that the access produced by an array reference in a nested
loop has as many dimensions as there are loops in the nest. Also, the
dimension index of each dimension may be thought of as a normalized
form of the actual loop index occurring in the program when the ARD
is originally constructed by the compiler from the program text.
In addition to the three expressions described above for an ARD
dimension, a span expression is maintained, where possible, for each
dimension. The span is defined as the difference between the offsets
of the last and first elements in the dimension. The span is useful for
doing certain operations and simplifications on the ARD (for instance,
detecting internal overlap, as described in Section 4.4); however, it is
only accurate when the subscript expressions for the array access are
monotonic.
A single base offset is stored for the whole access. An example of an
array access, its access pattern in memory, and its LMAD may be seen
in Figure 6.
The ARD for the array access in Figure 5 is written as

    A^{δ_1, ..., δ_d}_{σ_1, ..., σ_d} + τ,

with a series of d comma-separated strides (δ_1, ..., δ_d) as superscripts to
the variable name and a series of d comma-separated spans (σ_1, ..., σ_d)
as subscripts to the variable name, with a base offset (τ) written to
the right of the descriptor. The dimension index is only included in the
written form of the LMAD if it is needed for clarity. In that case,

    [index ≤ dimension-bound]

is written as a subscript to the appropriate stride.
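For illustration, a minimal data-structure sketch of an ARD as just described
might look as follows; the field names are ours, and a real implementation would
store symbolic expressions rather than plain integers:

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Dimension:
        stride: int                         # delta: difference between successive offsets
        span: int                           # sigma: last offset minus first offset
        index_bound: Optional[int] = None   # dimension-index bound, when known

    @dataclass
    class ARD:
        base_offset: int                    # tau: single offset for the whole access
        dims: List[Dimension] = field(default_factory=list)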
4.3. Interprocedural Translation of an ARD
A useful property of the ARD is the ease with which it may be translated
across procedure boundaries. Translation of array access information
across procedure boundaries can be difficult if the declaration
of a formal array parameter differs from the declaration of its corresponding
actual parameter. Array access representations which depend
    REAL A(14, *)
    DO I = ...
       DO J = ...
          DO K = 1, 10, 3
             ... A(K+26*(I-1), J) ...
          END DO
       END DO
    END DO

Figure 6. A memory access diagram for the array A in a nested loop and the Access
Region Descriptor which represents it; the descriptor's strides are 3, 14, and 26.
on the declared dimensionality of an array (as most do) are faced
with converting the representation from one dimensionality to another
when array reshaping occurs. This is not always possible without introducing
complicated mapping functions. This is the array reshaping
problem. Table V indicates that significant array reshaping occurs in
many scientific applications, as published in [11].

Table V. The figures in each entry indicate percentages of calls doing reshaping in
various benchmark programs from Perfect, SPEC and NASA. They were computed
from a static examination of the programs mentioned.
trfd arc2d tfft2 flo52 turb3d ocean mdg bdna tomcatv swim
We refer to a memory access representation that is independent
of the declared dimensionality of an array as a "universal" representation,
because it becomes procedure-independent and even language-independent.
A universal representation eliminates the array reshaping
problem because it need not be translated to a new form (a potentially
different dimensionality) when moving to a different execution context.
The ARD is an example of a universal representation.
When a subroutine is called by reference, the base address of a formal
array parameter is set to be whatever address is passed in the actual
argument list. Any memory accesses which occur in the subroutine
would be represented in the calling routine in their ARD form, relative
to that base address. Whenever it is desired to translate the ARD for
a formal argument into the caller's context, we simply translate the
formal argument's variable name into the actual argument's name, and
add the base offset of the actual parameter to that of the ARD for the
formal parameter. For example, if the actual argument in a Fortran
code is an indexed array, such as

    call X(A(2*I+1))

then the offset from the beginning of the array A for the actual argument
is 2I. Suppose that the matching formal parameter in the subroutine
X is Z and the LMAD for the access to Z in X is

    Z^{10,200}_{σ1,σ2} + τ.

When the access to Z, in subroutine X, is translated into the calling
routine, then the LMAD would be represented in terms of variable A
as follows:

    A^{10,200}_{σ1,σ2} + τ + 2I,

which results from simply adding the offset after the renaming of Z to
A. Notice that A now has a two-dimensional access even though it is
declared to be one-dimensional.
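A sketch of this translation step (our own; the names and dictionary
representation are illustrative): the descriptor is renamed from the formal to
the actual argument and the actual argument's offset is added to its base
offset, with the dimensions left untouched.

    def translate_to_caller(formal_summary, actual_name, actual_offset):
        # formal_summary: {"name": ..., "base_offset": ..., "dims": [(stride, span), ...]}
        return {"name": actual_name,
                "base_offset": formal_summary["base_offset"] + actual_offset,
                "dims": list(formal_summary["dims"])}

    # For the example above, the access to Z with strides 10 and 200 would be
    # translated by translate_to_caller(z_summary, "A", 2*I), i.e. by renaming
    # Z to A and adding the offset 2I.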
4.4. Properties of ARDs
This subsection briefly describes several basic properties of ARDs that
are useful for our dependence analysis based on access summary sets.
DEFINITION 6. Given an ARD with a set of stride/span pairs, we
call the sum of the spans of the first k dimensions the k-dimensional
width of the ARD, defined by width_k = σ_1 + σ_2 + ... + σ_k.
4.4.0.1. Internal Overlap of an ARD. The process of expanding an
ARD by a loop can cause overlap in the descriptor. For example, in the
following Fortran do-loop

    do I = 1, 10
       do J = 1, 5
          ... A(I*4+J) ...
       end do
    end do

the ARD for A in the inner loop is A^{1}_{4} (stride 1, span 4). When the ARD is
expanded for the outer loop, it becomes A^{1,4}_{4,36}, which exhibits an
overlap due to the outer loop. This is because the access due to the
outer loop does not stride far enough to get beyond the array elements
already touched by the inner loop. This property may be detected
by noticing that the stride of the n-th dimension is not greater than
the (n-1)-dimensional width of the ARD.
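The check just described can be sketched as follows, assuming dimensions sorted
innermost first with non-negative strides and spans (names are ours):

    def has_internal_overlap(dims):
        # dims: list of (stride, span) pairs, innermost dimension first
        width = 0                       # k-dimensional width accumulated so far
        for stride, span in dims:
            if width > 0 and stride <= width:
                return True             # this dimension re-touches inner elements
            width += span
        return False

    # Example above: [(1, 4), (4, 36)] -> stride 4 <= width 4, so overlap is reported.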
4.4.1. Zero-span dimensions
A dimension whose span is zero adds no data elements to an access
pattern. This implies that whenever such a dimension appears in an
ARD (possibly through manipulation of the ARD), it may be safely
eliminated without changing the access pattern represented. Likewise,
it implies that at any time a new dimension may be introduced with any
desired stride and a zero-span, without changing the access pattern.
Simplification operations exist for eliminating dimensions within an
ARD, for eliminating ARDs which are found to be covered by other
ARDs, and for creating a single ARD which represents the accesses of
several ARDs. Since these operations are not needed for the exposition
of this paper, they will not be described here, but the reader is referred
to [11, 14, 13].
5. The Access Region Test
In this section, we first describe the general dependence analysis method,
based on intersecting ARDs. The general method can detect data dependence
between any two arbitrary sections of code. Then, we show
a simplification of the general method, the Access Region Test, which
works for loop parallelization, and show a multi-dimensional, recursive
intersection algorithm for ARDs.
5.1. General Dependence Testing with Summary Sets
Given the symbolic summary sets RO_1, WF_1, and RW_1 (as discussed
in Section 3.4), representing the memory accesses for an earlier (in the
sequential execution of the program) dependence grain, and the sets
RO_2, WF_2, and RW_2 for a later grain, it can be discovered whether
any locations are accessed in both grains by finding the intersection of
the earlier and later sets, and by consulting Table IV. Any non-empty
intersection represents a dependence between grains. However, some of
those dependences may be removed by compiler transformations.
The intersections that must be done for each variable are:

    RO_1 ∩ WF_2,  RO_1 ∩ RW_2,
    WF_1 ∩ RO_2,  WF_1 ∩ WF_2,  WF_1 ∩ RW_2,
    RW_1 ∩ RO_2,  RW_1 ∩ WF_2,  RW_1 ∩ RW_2.

If all of these intersections are empty for all variables, then no cross-iteration
dependences exist between the two dependence grains. If any
of the following are non-empty: RO_1 ∩ WF_2, WF_1 ∩ WF_2, RW_1 ∩ WF_2,
or RO_1 ∩ RW_2, then they represent dependences which can be removed
by privatizing the intersecting regions.
If RW_1 ∩ RW_2 is non-empty, and all references involved are in either
induction form or reduction form, then the dependence may be removed
by induction or reduction transformations. This will be discussed in
more detail in Section 5.4.
If any of the other intersections, WF_1 ∩ RO_2, WF_1 ∩ RW_2, or RW_1 ∩ RO_2,
are non-empty, then they represent non-removable dependences.
5.2. Loop Dependence Testing with the ART
The Access Region Test (ART) is used within the general framework of
Memory Classification Analysis, doing write-order summarization. This
means that the entire program is traversed in execution order, using
abstract interpretation, with summary sets being computed for the
nested contexts of the program and stored in ARDs. ARDs are used as
the semantic elements for the abstract interpretation. The interpretation
rules are exactly those rules described for the various program contexts
in Section 3.5.1. Whenever loops are encountered, the ART is applied to
the ARDs to determine whether the loops are parallel, or parallelizable
by removing dependences through compiler transformations.
As stated in Section 3.2, dependence testing between loop iterations
is a special case of general dependence testing, described in the last
section. Loop-based dependence testing considers a loop iteration to
be a dependence grain, meaning all dependence grains have the same
summary sets.
Once we expand the summary sets by the loop index (Section 3.5.1),
cross-iteration dependence can be noticed in three ways: within one
LMAD, between two LMADs of one summary set, or between two of
the summary sets.
5.2.1. Overlap within a Single ARD
Internal overlap due to expansion by a loop index is described in Section
4.4. When overlap occurs, it indicates a cross-iteration dependence.
This condition can be easily checked during expansion and flagged in
the ARD, so no other operation is required to detect this.
5.2.2. Intersection of ARDs within a Summary Set
Even though two LMADs in the same summary set do not intersect
initially, expansion by a loop index could cause them to intersect. Such
an intersection would represent a cross-iteration dependence. Such an
intersection within RO would be an input dependence, so this summary
set need not be checked.
Internal intersections for both WF and RW must be done, however.
In Figure 7, for example, when the two writes to array A are first
assigned to a summary set, they do not overlap. Because the base offsets
of the two write-first ARDs are different, their intersection is assumed to
be empty (the conservative assumption). This causes them to be separately
assigned to the WF set. After expansion for I (and creation of the
dimension index I'), the normalized ARDs do intersect, indicating
a dependence. This intersection would be found by attempting to
intersect the ARDs within WF.
Figure 7. Example illustrating the need for internal intersection of the summary
sets.
5.2.3. Intersection of Two Summary Sets
There are only three summary sets to consider in loop dependence
testing, instead of six (because there is only one dependence grain),
so there are only three intersections to try, instead of the eight required
in Section 5.1. After expansion by the loop index, the following
intersections must be done:
    RO ∩ WF,   RO ∩ RW,   WF ∩ RW.

An intersection between any pair of the sets RO, WF, and RW
involves at least one read and one write operation, implying a dependence.
5.3. The Loop-based Access Region Test Algorithm
For each loop L in the program, and its summary sets RO, WF, and
RW, the ART does the following:
- Expand the ARDs in all summary sets by the loop index of L.
- Check for internal overlap, due to the loop index of L, of any
  ARD in WF or RW. Any found within WF can be removed by
  privatization. Any found in RW is removed if all references involved
  are in either induction or reduction form. Once overlap for an ARD
  is noted, its overlap flag is reset.
- Check for non-empty intersection between any pair of ARDs in WF
  (removed by privatization) or RW (possibly removed by induction
  or reduction).
- For all possible pairs of summary sets from the group RO, WF,
  and RW, check for any non-empty intersection between two ARDs,
  each pair containing ARDs from different sets. Any intersection
  found here is noted as a dependence and moved to RW.
If no non-removable dependences are found, the loop may be declared
parallel. Wherever uncertainty occurs in this process, demand-driven
deeper analysis can be triggered in an attempt to remove the uncertainty,
or run-time tests can be generated.
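A toy restatement of this per-loop decision (our own sketch, not the Polaris
implementation): here each ARD is modeled, unrealistically, as an explicit set
of offsets already expanded by the loop index, so that intersection is plain set
intersection and the subtleties of overlap flags and symbolic descriptors are
ignored.

    def art_for_loop(RO, WF, RW):
        # RO, WF, RW: lists of sets of offsets, one set per (expanded) ARD
        def any_cross(xs, ys):
            return any(x & y for x in xs for y in ys)
        if any_cross(RO, WF) or any_cross(RO, RW) or any_cross(WF, RW):
            return "dependence (not removable by privatization alone)"
        if any(a & b for i, a in enumerate(RW) for b in RW[i + 1:]):
            return "parallel only if all RW references are inductions/reductions"
        if any(a & b for i, a in enumerate(WF) for b in WF[i + 1:]):
            return "parallel after privatization"
        return "parallel"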
5.4. Detecting Reduction and Induction Patterns
As stated in Section 2.3, idiom recognition is very important for parallelizing
programs. Inductions and reductions both involve an assignment
with a linear recurrence structure, X = X op <expression>.
The forms differ slightly, as shown in the following (I represents an
integer variable and R represents a floating point variable):

    I = I + <integer constant>              Induction
    R = R op <floating point expression>    Reduction

Each of these patterns originally presents itself as a dependence, but
compiler transformations [15, 4] can remove the dependence.
Three levels of tests, done within the write-order summary scheme
structure, can be used to positively identify reductions and inductions.
5.4.0.1. Level 1. The first test is for the linear recurrence structure
of the assignment statement. When the pattern is found, the ARD for
the statement is marked as passing the Level 1 test, and the operator
and the type of the assigned expression (integer constant or floating
point expression) are stored. The ARD is marked with an idiom type
of possible induction if the variable is an integer variable and the expression
is an integer constant. Otherwise it is marked as a possible reduction.
5.4.0.2. Level 2. During intersection of the ARDs of a particular variable
within the ReadWrite summary set (as part of the ART, described
in Section 5.2.2), an ARD marked as having passed Level 1 fails Level 2
if any other ARD in RW for the variable did not pass Level 1 with the
same idiom type and operator. If there are any ARDs for the
variable in either ReadOnly or WriteFirst and the ARD is a possible
reduction, then it fails Level 2. If the ARD is a possible induction and
any ARDs exist for the variable in WriteFirst, then it fails Level 2.
Otherwise the ARD passes Level 2.
5.4.0.3. Level 3 To pass Level 3, an ARD marked as passing Level
2 must be marked as having internal overlap due to expansion by the
loop index of an outer loop. This means that there is a dependence due
to the access, carried by the outer loop.
An ARD marked as having passed Level 3 can be considered an
idiom of the stored type, and appropriate code can be generated for it.
This three-level process will find inductions and reductions interprocedurally,
because of the interprocedural nature of the ART.
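For illustration, the Level 1 structural check could be sketched as follows; the
statement representation and names here are purely hypothetical and simplify
the checks described above:

    def level1_idiom(lhs, op, rhs_operands, lhs_is_integer, rhs_is_integer_constant):
        # Linear recurrence form: lhs = lhs op <expression>
        if op not in ("+", "*") or lhs not in rhs_operands:
            return None                     # not a recurrence on lhs
        if lhs_is_integer and rhs_is_integer_constant and op == "+":
            return "possible induction"     # integer variable stepped by a constant
        return "possible reduction"         # e.g. a floating point sum or product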
5.5. Generality of the ART
The Access Region Test is a general, conservative dependence test.
By design, it can discern three types of dependence: input, flow, and
memory-related. It cannot distinguish between anti and output dependence,
but that is because for our purposes the distinction was considered unimportant:
both types of dependence can be removed by privatization
transformations. For other purposes, the general MCA mechanism can
be used to formulate a mechanism, with the appropriate summary sets,
to produce the required information, much as data flow analysis can
be formulated to solve various data flow problems. In Sections 3.1 and
3.2, we showed two different formulations of MCA for doing dependence
analysis. The read-only summary scheme is simply a dependence test,
while the write-order summary scheme provides enough information
to test for dependence, and also remove some dependences through
compiler transformations.
5.6. Loop-carried Dependence Handled by the ART
Any dependence within an inner loop is essentially ignored with respect
to an outer loop because, after expansion by a loop
index, any intersecting portions of two ARDs are represented as a single
ARD and moved to the RW summary set. If there are intersecting
portions, they are counted as cross-iteration dependences for that loop,
but because they are reduced to a single ARD, they will no longer be found to
intersect for outer loops. Intersections due to outer loops will be solely
due to expansions for outer loop indices. This process is illustrated in
Figure 8.

Figure 8. How the ART handles loop-carried dependence.
5.7. A Multi-Dimensional Recursive Intersection Algorithm
Intersecting two arbitrary ARDs is very complex and probably intractable.
But if the two ARDs to be compared have the same strides
(we call these stride-equivalent), or the strides of one are a subset of
the strides of the other (we call these semi-stride-equivalent), which has
been quite often true in our experiments, then they are similar enough
to make the intersection algorithm tractable. We present Algorithm
Intersect in Figure 9.
Input:  two ARDs with properly nested, sorted dimensions, ARD_left and
        ARD_right (such that the base offset of ARD_left is not greater
        than that of ARD_right), and
        k: the number of the dimension to work on (0 <= k <= d)
Output: a list of ARDs describing the intersection

intersect(ARD_left, ARD_right, k):
    if k == 0 then
        // innermost level: if the two remaining one-dimensional accesses
        // meet, construct a scalar ARD for the common location(s) and add
        // it to ARD_List; otherwise return the empty list
        return ARD_List
    endif
    // compare the extents of dimension k of the two descriptors; if they do
    // not overlap, return the empty list; otherwise, depending on how they
    // meet, form a periodic intersection on the left, an intersection on the
    // right, or an intersection at the end, removing dimension k
    // (remove_dim) and recursing on dimension k-1
    return ARD_List

remove_dim(ARD_in, k, tau_new)
    // Construct and return a new ARD equivalent to ARD_in, except without
    // access dimension k and with tau_new as the new base offset.
dim(delta, sigma)
    // Construct and return a new access dimension with stride delta and span sigma.
add_dim(ARD_in, dim_new, tau_new)
    // Construct and return a new ARD equivalent to ARD_in, except with new
    // dimension dim_new and with tau_new as the new base offset.
add_to_list(ARD_List, ARD)
    // Add ARD to the list of ARDs ARD_List.

Figure 9. The algorithm for finding the intersection of multi-dimensional ARDs.
For clarity, the removal of the intersection from the two input ARDs, the use of the
conservative direction flag, and the use of the execution predicates are not shown,
although all these things can be added to the algorithm in a straightforward way.
The algorithm accepts two stride-equivalent ARDs. If the two ARDs
are semi-stride-equivalent, then zero-span dimension(s) can be safely
inserted into the ARD with fewer dimensions (as discussed in Section
4.4.1) to make them stride-equivalent. The algorithm is also passed
a conservative direction flag. The flag has two possible values, under-estimation
and over-estimation, which tell the algorithm what
to do when the result is imprecise. For over-estimation, the result is
enlarged to its maximum value; likewise, under-estimation causes
the result to be reduced to its minimum value. If two ARDs are to be
intersected and they are not stride-equivalent, then the result is formed,
based on the conservative direction.
The algorithm takes as input two ARDs which have all dimensions
precisely sorted, ARD_left and ARD_right,
and the number of the dimension, d, to work on. ARD_left has a base
offset which is less than that of ARD_right.
The algorithm compares the overall extent of dimension d for each
ARD, as shown in Figure 10(A). If the extents do not overlap in any
way, it can safely report that the intersection is empty. If they do
overlap, then the algorithm calls itself recursively, specifying the next
inner dimension, as shown in Figure 10(B).
Figure 10. The multi-dimensional recursive intersection algorithm, considering the
whole extent of the two access patterns (A), then recursing inside to consider the
next inner dimension (B).
This process continues until it can either be determined that no overlap
occurs, or until the inner-most dimension is reached, as shown in
Figure 11, where it can make the final determination as to whether there
is an intersection between the two, considering only one-dimensional
accesses. The resulting ARD for the intersection is returned, and as
each recursion returns, a dimension is added to the resulting ARD.
Figure 11. The multi-dimensional recursive intersection algorithm, considering the
inner-most dimension, finding no intersection.

For simplicity, in this description, it is assumed that the two ARDs
have dimensions which are fully sorted, so that dimension i of one ARD
corresponds to dimension i of the other, and that δ_d > δ_{d-1} > ... > δ_1.
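As an illustration of the innermost (one-dimensional) case, the following sketch
intersects two accesses that share a common positive stride, each given by a base
offset, the stride, and an element count; the representation and names are ours.

    def intersect_1d(a, b):
        (base_a, stride, n_a), (base_b, _, n_b) = a, b
        if base_a > base_b:                      # ensure a starts first
            return intersect_1d(b, a)
        end_a = base_a + stride * (n_a - 1)
        end_b = base_b + stride * (n_b - 1)
        if base_b > end_a:                       # extents do not meet
            return None
        if (base_b - base_a) % stride != 0:      # elements never coincide
            return None
        start, end = base_b, min(end_a, end_b)
        return (start, stride, (end - start) // stride + 1)

    # e.g. intersect_1d((0, 2, 5), (4, 2, 4)) == (4, 2, 3), i.e. offsets {4, 6, 8}.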
6. Experiments
The Access Region Test has an advantage over the other tests discussed in
Section 2 in three ways:
- Reducing dependence analysis to an intersection operation does
  not restrict the ART from handling certain types of subscripting
  expressions, such as coupled subscripts, which are a problem for
  the Range Test, and non-affine expressions, which are a problem
  for most other tests.
- Use of the ARD provides precise access summaries for all array
  subscripting expressions.
- The test is implicitly interprocedural, since ARDs may be translated
  precisely across procedure boundaries.
To separate the value of the ART from the value of the ARD, it is
instructive to consider the question of whether other dependence tests
might be as powerful as the ART if they represented memory accesses
with the ARD notation. The answer to this question is "no".
Take as an example the Omega Test. The mechanism of the Omega
Test is only defined for affine expressions. The user of the Omega Test
must extract the linear coefficients of the loop-variant values (loop
indices, etc.), plus provide a set of constraints on the loop variants.
The ARDs partially fill the role of the constraints, but if non-affine
expressions are used, there is no way to extract linear coefficients for
the non-affine parts. A technique for replacing the non-affine parts of an
expression with uninterpreted function symbols has been developed [17],
but it is not general enough to work in all situations. So even using the
ARD, the Omega Test could not handle non-affine subscript expressions
because its mechanism is simply not well-defined for such expressions.
Likewise, if the Range Test were to use the ARD to represent value
ranges for variables, that still would not change its basic mechanism,
which makes it a single-subscript test, unable to handle coupled-subscripts.
The mechanism of the Range Test forces it to consider the behavior of
the subscript expression due to a single subscript at a time, whereas
the ART compares access patterns instead of subscript expressions.
A simple example in Figure 12 shows the advantage of comparing
the patterns. It shows two loop nests which display identical access
patterns, yet different subscripting expressions. The accesses in the top nest
can be determined to be independent by the Range Test, but those in the
bottom nest cannot.
Figure 12. The Range Test can determine the accesses of the top loop to be independent, but not those of the bottom loop. The ART can find both independent, since it deals with access patterns instead of just subscript expressions.
Figure 13 shows another example, from the tfft2 benchmark code,
which neither the Omega Test nor the Range Test can find independent
due to the apparent complexity of the non-affine expressions involved,
yet the ART can find them independent interprocedurally at the top-most
loop, due to its reliance on the simple intersection operation,
its ability to translate ARDs across procedure boundaries, and the
powerful ARD simplification operations which expose the simple access
patterns hidden inside complex subscript expressions.
Figure 13. A simplified excerpt from the benchmark program tfft2, which the ART can determine to be free of dependences.
As we continued to develop the ART, we needed to evaluate the ART
on real programs. Therefore, we implemented a preliminary version of
the ART in Polaris [4], a parallelizing compiler developed at Illinois,
and experimented with ten benchmark codes. In these experiments, it
was observed that the ART is highly effective for programs with complex
subscripting expressions, such as ocean, bdna, and tfft2. Table VI
shows a summary of the experimental results that were obtained at the
time we prepared this paper. Careful analytical study confirms that
the ART theoretically subsumes the Range Test. This implies that the
ART can parallelize all the loops that the Range Test can, even though
in this experiment, the ART failed to parallelize a few loops in flo52
and arc2d due to several implementation-dependent problems reported
in [11].
The numbers of loops additionally parallelized by the ART are small,
but some of these loops are time-critical loops which contain complex
array subscripting expressions. Our previous experiments reported
in [13] also showed that the ART applied by hand increased the parallel
speedup for tfft2 by a factor of 7.4 on 64 processors of the Cray T3D.
As can be expected, Table VI shows that neither the ART, the
Omega Test, nor the Range Test makes a difference in the performance
for the codes with only simple array subscripting expressions, such as
tomcatv, arc2d, and swim.
Table VI. A comparison of the number of loops parallelized by a current version of
the ART with other techniques. The first line shows the number of loops that the
ART could parallelize and the Range Test could not. The second shows the number
of loops that the Range Test could parallelize and the Omega Test could not. The
third shows the number of loops that the Omega Test could parallelize and the
Range Test could not. All other loops in the codes were parallelized identically by
all tests. The data in the second and third lines are based on the previous work
on Polaris.
tfft2 trfd mdg flo52 hydro2d bdna arc2d tomcatv swim ocean
Previous techniques based on access summaries did not show experimental
results with real programs in their papers [1, 18]. Thus, it is
not possible for us to determine how effective their techniques would
be for actual programs.
7. Conclusion and Future Work
This paper presents a technique for unifying interprocedural dependence
analysis, privatization and idiom recognition in a single frame-
work. This technique eliminates some of the limitations which encumber
the loop-based, linear system-solving data dependence paradigm,
and expands the notion of a dependence test to include a way of classifying
the dependences found, so that a compiler can eliminate them
using code transformations.
The framework is built on a general scheme for classifying memory
locations (Memory Classification Analysis), based on the order and
type of accesses to them. This framework can be reformulated and
used for many purposes. The read-only and write-order summarization
schemes were presented, but many other schemes are possible for a
variety of purposes.
The multi-dimensional, recursive intersection algorithm for ARDs
was introduced. It allows us to calculate a precise intersection between
two stride-equivalent ARDs. This algorithm forms the core of
the dependence analysis calculation. Heuristics can be added to this
algorithm to handle cases in which the ARDs are not stride-equivalent.
The more precise this intersection algorithm becomes, the more precise
data dependence analysis becomes.
We believe that the flexibility and generality afforded by this reformulation
of data dependence will make it very useful for many purposes
within a compiler. In the future, we intend to use the MCA framework
for other analyses, which will automatically extend them interprocedu-
rally. In addition, we intend to formalize our methods by an analysis in
terms of the abstract semantic elements and rules within the abstract
interpretation framework.
--R
A Technique for Summarizing Data Access and its Use in Parallelism Enhancing Transformations.
Dependence Analysis.
Symbolic Analysis Techniques for Effective Automatic Parallelization.
Parallel Programming with Polaris.
Semantic foundations of program analysis.
Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints.
Interprocedural Array Region Analyses.
On the Automatic Parallelization of the Perfect Benchmarks.
An Implementation of Interprocedural Bounded Regular Section Analysis.
Automatic Parallelization for Distributed Memory Machines Based on Access Region Analysis.
Induction Variable Substitution and Reduction Recognition in the Polaris Parallelizing Compiler.
A Practical Algorithm for Exact Array Dependence Analysis.
Nonlinear Array Dependence Analysis.
Exact Side Effects for Interprocedural Dependence Analysis.
Gated SSA-Based Demand-Driven Symbolic Analysis for Parallelizing Compilers
High Performance Compilers for Parallel Computing.
The Power Test for Data Dependence.
--TR
Interprocedural dependence analysis and parallelization
A technique for summarizing data access and its use in parallelism enhancing transformations
Practical dependence testing
A practical algorithm for exact array dependence analysis
Exact side effects for interprocedural dependence analysis
Nonlinear array dependence analysis
Gated SSA-based demand-driven symbolic analysis for parallelizing compilers
On the Automatic Parallelization of the Perfect Benchmarks
Simplification of array access patterns for compiler optimizations
Nonlinear and Symbolic Data Dependence Testing
Abstract interpretation
Dependence Analysis
Parallel Programming with Polaris
An Efficient Data Dependence Analysis for Parallelizing Compilers
An Implementation of Interprocedural Bounded Regular Section Analysis
The Power Test for Data Dependence
Interprocedural Array Region Analyses
Symbolic analysis techniques for effective automatic parallelization
Interprocedural parallelization using memory classification analysis
--CTR
Y. Paek, A. Navarro, E. Zapata, J. Hoeflinger, D. Padua, An Advanced Compiler Framework for Non-Cache-Coherent Multiprocessors, IEEE Transactions on Parallel and Distributed Systems, v.13 n.3, p.241-259, March 2002
Thi Viet Nga Nguyen, François Irigoin, Efficient and effective array bound checking, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.3, p.527-570, May 2005 | privatization;dependence analysis;parallelization;compiler |
608747 | Path Analysis and Renaming for Predicated Instruction Scheduling. | Increases in instruction level parallelism are needed to exploit the potential parallelism available in future wide issue architectures. Predicated execution is an architectural mechanism that increases instruction level parallelism by removing branches and allowing simultaneous execution of multiple paths of control, only committing instructions from the correct path. In order for the compiler to expose and use such parallelism, traditional compiler data-flow and path analysis needs to be extended to predicated code. In this paper, we motivate the need for renaming and for predicates that reflect path information. We present Predicated Static Single Assignment (PSSA) which uses renaming and introduces Full-Path Predicates to remove false dependences and enable aggressive predicated optimization and instruction scheduling. We demonstrate the usefulness of PSSA for Predicated Speculation and Control Height Reduction. These two predicated code optimizations used during instruction scheduling reduce the dependence length of the critical paths through a predicated region. Our results show that using PSSA to enable speculation and control height reduction reduces execution time by 12 to 68%. | Introduction
The Explicitly Parallel Instruction Computing (EPIC) architecture has been put forth as a viable architecture
for achieving the instruction level parallelism (ILP) needed to keep increasing future processor
performance [8, 17]. Intel's application of EPIC architecture technology can be found in their IA-64 architecture
whose first instantiation is the Itanium processor [1]. An EPIC architecture issues wide instructions,
similar to a VLIW architecture, where each instruction contains many operations.
One of the new features of the EPIC architecture is its support for predicated execution [24], where
each operation is guarded by one of the predicate registers available in the architecture. An operation is
committed only if the value of its guarding predicate is true.
One advantage of predicated execution is that it can eliminate hard-to-predict branches by combining
both paths of a branch into a single path. Another advantage comes from using predication to combine
several smaller basic blocks into one larger hyperblock [22]. This provides a larger pool from which to draw
ILP for EPIC architectures.
A significant limitation to ILP is the presence of control-flow and data-flow dependences. Static Single
Assignment (SSA) is an important compiler transformation used to remove false data dependences across
basic block boundaries in a control flow graph [12]. Removing these false dependences reveals more ILP,
allowing better performance of optimizations like instruction scheduling. Without performing SSA, the
benefit of many optimizations on traditional code is limited.
Eliminating false dependences is equally important and a more complex task for predicated code, since
multiple control paths are merged into a single predicated region. However, the control-flow and data-flow
analysis needed to support predicated compilation is different than traditional analysis used in compilers for
superscalar architectures. A sequential region of predicated code contains not only data dependences, but
also predicate dependences. A predicate dependence exists between every operation and the definition(s)
of its guarding predicate. Our technique introduces a chain of predicate dependences which represents a
unique control path through the original code.
We describe a predicate-sensitive implementation of SSA called Predicated Static Single Assignment
(PSSA). PSSA introduces Full-Path Predicates to extend SSA to handle predicate dependences and the
multiple control paths that are merged together in a single predicated region. We demonstrate that PSSA
allows effective predicated scheduling by (1) eliminating false dependences along paths via renaming, (2)
creating full-path predicates, and (3) providing path-sensitive data-flow analysis. We show the benefit of
using PSSA to perform Predicated Speculation and Control Height Reduction during instruction scheduling.
b=rand() if true              // b = random number
P2,P3 cmpp.un.uc b>a if true  // if b>a then P2=true, P3=false; else P2=false, P3=true
b=q if P2                     // if P2 is true, b=q, else nullify
d=b+3 if P3                   // if P3 is true, d=b+3, else nullify statement
f=b*2 if true                 // f=b*2
(a) Original Control Flow Graph  (b) Predicated Hyperblock
Figure 1: Short code example showing the transformation from non-predicated code to predicated hyperblock
Using PSSA allows these two optimizations, when applied together, to schedule all operations at their
earliest schedulable cycle. In our implementation, the earliest schedulable cycle takes into consideration
true data dependences and load/store constraints. In this paper we expand upon work we presented in [11]
by including additional benchmarks and by motivating the need for renaming and for predicates that reflect
path information above and beyond what is available from traditional If-converted code.
The paper is organized as follows. Section 2 describes predicated execution. Section 3 motivates
the need for predicate-sensitive analysis and full-path predicates. Section 4 presents Predicated Static
Single Assignment. Section 5 shows how PSSA can enable aggressive Predicated Speculation and Control
Height Reduction. Section 6 reports the increased ILP and reduced execution times achieved by applying
our algorithms to predicated code. Section 7 summarizes related work. Section 8 discusses using PSSA
within the IA-64 framework, and Section 9 describes our future work. Finally, Section 10 summarizes the
contributions of this paper.
2 Predicated Execution
Predicated execution is a feature designed to increase ILP and remove hard-to-predict branches. It has
also been used to support software pipelining [14, 25]. Machines with hardware to support predicated code
include an additional set of registers called predicate registers. The process of predication replaces branches
with compare operations that set predicate registers to either true or false based on the comparison in the
original branch. Each operation is then associated with one of these predicate registers which will hold the
value of the operation's guarding predicate. The operation will be committed only if its guarding predicate
is true; one exception, the unconditional definition of a predicate, is discussed later in this section. This process of replacing branches with compare operations and associating operations with a
predicate defined by that compare is called If-Conversion [5, 24].
Our work uses the notion of a hyperblock [22]. A hyperblock is a predicated region of code consisting
of a straight-line sequence of instructions with a single entry point and possibly multiple exit points.
Branches with both targets in the hyperblock are eliminated and converted to predicate definitions using
If-conversion. All remaining branches have targets outside the hyperblock. Consequently, there are no cyclic
control-flow or data-flow dependences within the hyperblock. The selection of instructions to be included
in the hyperblock is based on program profiling of the original basic blocks which includes information
such as execution frequency, basic block size, operation latencies, and other characteristics [22].
A typical code section to include in a hyperblock is one that contains a hard-to-predict (unbiased)
branch [21], as shown in Figure 1. After If-conversion, the Control Flow Graph (CFG) in Figure 1(a),
which is comprised of four basic blocks, results in the predicated hyperblock shown in Figure 1(b). All
operations in the hyperblock are now guarded, either by a predicate register set to the constant value of
true, or by a register that can be defined as either true or false by a cmpp (compare and put (result) in
predicate) operation. Operations guarded by the constant true, such as the operation f=b*2 in Figure 1,
will be executed and committed regardless of the path taken. Operations guarded by a predicate register,
such as the operation b=q, will be put into the pipeline, but only committed if the value of the operation's
guarding predicate (P2 for this operation) is determined to be true.
In what follows, we describe three types of operations that can be included in a hyperblock - cmpp
operations, the predicate OR operation, and normal (non-predicate-defining) operations.
As defined in the Trimaran System [2] (which supports EPIC computing via the Playdoh ISA [19]),
guarding predicates are assigned their values via cmpp operations [8]. Consider an operation
B,C cmpp.un.ac a?c if A as an example. The cmpp operation can define one or two predicates. This
operation will define predicates B and C. The first tag (.un) applies to the definition of the first predicate
B and the second tag (.ac) to C. The first character of a tag defines how the predicate is to be defined.
The character u means that the predicate will unconditionally get a value, whether the guarding predicate
in this case) is true or false. If A is false, then B is set to false. Otherwise, A is true and the value of B
depends upon the evaluation of a?c.
The character a in the second tag (.ac) indicates that the full definition of the related predicate C
is contingent on the value of A, the evaluation of a?c, AND the prior value of C. If A is false, the value
of predicate C does not change. If A is true and C has previously been set false then C remains false.
One exception is the unconditional definition of a predicate. This is discussed later in the section.
Additionally, the second character of a tag defines whether the normal (n) result of the condition (a?c) or
the complement (c) of the condition must be true to make the related predicate true. If A is true and C is
true and !(a?c) is true then the new value of C will be true 2 . For a complete definition of cmpp statements
see the Playdoh architecture specification [19].
In our implementation of PSSA, we introduce a new OR operation currently not defined by Trimaran.
The predicate OR operation defines block predicates by taking the logical OR of multiple predicates. For
example, consider an operation G = OR(A, B, C) if true (where A, B and C are predicates, each defining
a unique path to G). If any one of them has the value of true, G will receive a value of true, otherwise G
will be assigned false.
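The following Python fragment is only our reading of the semantics just described, not the Playdoh specification, and the function names are invented; it is included to make the .un/.uc/.an/.ac tags and the predicate OR concrete.

def cmpp(dest_old, tag, cond, guard):
    # tag is 'un', 'uc', 'an' or 'ac'; cond is the evaluated comparison,
    # guard is the guarding predicate, dest_old the destination's prior value.
    kind, sense = tag[0], tag[1]
    result = cond if sense == 'n' else (not cond)
    if kind == 'u':                 # unconditional: destination always written
        return guard and result
    if guard and not result:        # and-type: can only be pulled to false
        return False
    return dest_old                 # guard false or result true: unchanged

def pred_or(*path_preds):
    # block predicate: true if any of the paths reaching the block was taken
    return any(path_preds)

# P2,P3 cmpp.un.uc b>a if true (Figure 1), with b=2 and a=7:
b, a = 2, 7
P2 = cmpp(False, 'un', b > a, True)    # False
P3 = cmpp(False, 'uc', b > a, True)    # True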
When scheduling, we make the reasonable assumption that the definition of a predicate is available for
use as a source for another operation, or as a guard to a subsequent cmpp operation in the cycle following
its definition. When used as a guard for all other operations, the predicate definition is available for use
in the same cycle as it is defined.
We refer to all other operations, which do not define predicates, as normal operations. Normal operations
include assignments, arithmetic operations, branches, and memory operations.
3 Motivation for Predicate-Sensitive Analysis
A major task for the scheduler of a multi-issue machine is to find independent instructions. Unfortu-
nately, predication introduces additional dependences that traditional code doesn't have to consider. In
Figure
1(b), there is a dependence between the definition of the guarding predicate P2 and its use in the
statement b=q if P2. Since predication combines multiple basic blocks, it introduces false dependences
between disjoint paths. For example, in Figure 1(b), in the absence of predicate dependence information,
we would infer a dependence between the definition of b in b=q if P2 and the use of b in d=b+3 if P3.
However, these two statements are guarded by disjoint predicates. Therefore, only one of the predicates
(P2 or P3) can possibly be true; only one of the statements will actually be committed, and no dependence
in fact exists.
Johnson et al. [18] devised a scheme to determine the disjointness of predicates using the predicate
partition graph. This analysis allowed more effective register allocation as live ranges across predicated
code could be more accurately determined [15]. Their approach was limited to describing disjointness
with restricted path information. Path information that extended across join points was not collected. In
Figure 2, the predicate partition graph would determine that the following pairs of predicates are disjoint:
G and H, B and C, D and !D. However, no information regarding the relationship between D and G or D and
H would be available.
This "cross-join" information is needed to provide the scheduler full flexibility in scheduling statements
such as y=t+r. If path information is not available, then y=t+r is guarded on true and the scheduler
correctly assumes this statement is dependent on t=rand(), t=t-s, r=5+x, and r=x+8. However, since
there are two possible definitions of each operand, there are 4 combinations of operands that could in fact
cause the definition of y - each executable via one (or more) paths of execution through the region. If
4 versions of this statement could be made (one for each combination of the operands), then each could
be scheduled at the minimum dependence length for that version. While disjointness information can
maintain information regarding paths since the most recent join, we will need to combine path information
across joins to remove unnecessarily conservative scheduling dependences. Figure 2(b) shows the cross-join
path information that would be needed to guard each assignment of y so that the scheduler can know the
precise dependences for each copy. This will allow the most flexibility in scheduling each statement.
Although precise dependence information can be determined from guarding predicate relations, we will
also show that renaming techniques can be of additional use to achieve greater scheduling flexibility. By
renaming variables that have more than one definition in a region, we will maintain path information even
after optimizations which change the guarding predicate of a statement have been applied.
4 Predicated Static Single Assignment (PSSA)
Techniques such as renaming [4] and Static Single Assignment (SSA) [13, 12] have proved useful in eliminating
false dependences in traditional code [31]. Removing false dependences allows more flexibility in
scheduling since data independent operations can move past each other during instruction scheduling.
In non-predicated code, SSA assigns each target of an assignment operation a unique variable. At
join nodes a φ-function may need to be inserted if multiple definitions of a variable reach the join. The
φ-functions determine which version of the variable to use and assign it to an additional renamed version.
This new variable is used to represent the merging of the different variable names. Figure 3 shows the
simple example from Figure 1 in SSA form. In the assignment b3 = φ(b1, b2), the variable b3 represents
the reaching definition of b which is to be used after the join of definition b1 or b2.
As discussed in Section 3, eliminating false dependences is equally important and a more complex task
for predicated code, since multiple control paths are merged. To address this problem we developed a
z=w+5 if true
r=5+x if G
B,C t>r if G
B,C t>r if H
L, w>2 if B
t=t-s if D
br out if L
y=t+r if !D&G
y=t+r if !D&H
y=t+r if D&G
y=t+r if D&H
v=y+5 if true
(a) Original Control Flow Graph (b) Predicated Hyperblock with Paths
Figure 2: Code is duplicated when more than one definition reaches a use to maintain maximum flexibility
for the scheduler. In (b), the statement y=t+r is duplicated for each pair of definitions that may reach
this statement. Each copy is guarded by the predicates that defined the path along which those definitions
would occur.
predicate-sensitive implementation of SSA called Predicated Static Single Assignment (PSSA).
PSSA seeks to accomplish the same objectives as SSA for a predicated hyperblock. First, it must
assign each target of an assignment operation in the hyperblock a unique variable. Second, at points in
the hyperblock where multiple paths come together it must summarize under what conditions each of the
multiple definitions of a variable reaches that join. The second objective is accomplished through the
creation of full-path predicates and path-sensitive analysis.
Consider the sample predicated code shown in Figure 4 using traditional hyperblock predication [22].
(a) Control Flow Graph (b) Code in SSA form
Figure 3: Static Single Assignment
z=w+5 if true
r=5+x if G
B,C t>r if true
L, w>2 if B
t=t-s if D
br out if L
y=t+r if true
v=y+5 if true
(a) Original Control Flow Graph (b) Predicated Hyperblock
Figure 4: Extended example of transformation from non-predicated CFG to predicated hyperblock
F=true if true 1
AGF,AHF cmpp.un.uc z1>7 if F 3
r1=5+x if AGF 3
BAGF,CAGF cmpp.un.uc t1>r1 if AGF 5
BAHF,CAHF cmpp.un.uc t1>r2 if AHF 5
LBAGF,EBAGF cmpp.un.uc w1>2 if BAGF 6
LBAHF,EBAHF cmpp.un.uc w1>2 if BAHF 6
ECAGF, EDCAGF cmpp.un.uc t1>7 if CAGF 6
ECAHF, EDCAHF cmpp.un.uc t1>7 if CAHF 6
t2=t1-s if D 7
br out if L 7
(a) PSSA dependence graph (b) PSSA-transformed code
Figure 5: The PSSA dependence graph shows the flow of data and control through the PSSA-transformed
code. Blocks labeled with full-path predicates (indicated by multiple letters) contain statements that are
only executed along that path. Blocks labeled with block predicates (single letters) contain statements that
will be executed along several paths.
In this predicated example, all branches have been replaced (except the one leaving the hyperblock) with
predicate-defining operations using If-conversion. The predicates that are defined in this example correspond
to the two edges exiting each conditional branch in the CFG in Figure 4. Figure 5 shows this example
after PSSA has been applied and displays a graph showing the post-PSSA dependence relationships.
The PSSA transformation has 2 phases: pre- and post-optimization. Hyperblocks are converted to
PSSA form before optimization. After optimization, PSSA inserts clean-up code on edges leaving the
hyperblock, copying renamed variables back to their original names and then removes any unused predicate
definitions.
4.1 Converting to PSSA Form
When converting to PSSA form, each operation is processed in turn beginning at the top of the hyperblock
and proceeding to the end. Control PSSA is applied to predicate-defining operations, and Normal PSSA
is applied to all other operations.
We first describe Normal PSSA. If the operation is an assignment, the variable defined is renamed. The
third operation in Figure 5(b), z1=w1+5, is an example. All operands are adjusted to reflect previously
renamed variables (e.g. w becomes w1). If the operation is part of a join block, multiple versions of the
operands may be live. The first operation (y=t+r) in the third join block of Figure 2(a) provides an
example. Here, the operation will be duplicated for each path leading to the join and the correct operand
versions for each path will be used in the duplicate statement as seen in Figure 5 (in the multiple definitions
of y1). The duplicates are guarded by the full-path predicate (described below) associated with the path
along which the operands are defined. Though there are 6 definitions of y1 (only 4 are unique), there is
only one definition of y1 on any given path. These definitions are predicated on disjoint predicates; only
one of them can possibly be true, and only one of them will be committed.
We next describe Control PSSA. The single cmpp operation that defined one or two block predicates
(such as the definitions of B and C in Figure 4) is replaced by one or more cmpp operations, each associated
with a particular path leading to that block. As can be seen in Figure 5(b) there are now two cmpp
operations: one defining BAGF and CAGF, and one defining BAHF and CAHF. These new predicates are called
full-path predicates (FPPs). Each FPP definition has the appropriate operand versions for its path and
each is guarded by the FPP that defined the path prior to reaching the new block. For example, the cmpp
defining BAGF and CAGF is predicated on AGF. A FPP specifies the unique path along which an operation
is valid for execution, enabling PSSA to provide correct guarding predicates for the duplicate statements
previously described.
In the example in Figure 2 we pointed out that the definitions of y1 needed guarding predicates that
captured information about paths of execution. The first definition of y1 needed to be guarded by a
predicate representing a path of execution through block G but not block D. In addition, the predicate
needs to reflect that the execution actually reached the block of the statement in question (E in this case).
Register y1 would be incorrectly modified if, for example, the branch out of the hyperblock is taken and
block E is never reached. The new FPP EBAGF represents the precise conditions for correct execution.
In addition to the cmpp statements added to define FPPs, cmpp statements are included to rename
join blocks whose statements were originally predicated on true. A and E and their associated FPPs are
examples. The operations in Figure 4(b) predicated on true are predicated on F, A, and E in the PSSA
version of the code shown in Figure 5. This is necessary to maintain exact path information.
Clearly, this has the potential to cause an exponential amount of code duplication. It might seem more
reasonable to follow the example of SSA and insert φ-functions at join points to resolve multiple definitions.
For example, an implementation of φ-functions resolving r and t in the definition of y1 could be:
(1) r=r1 if G
(2) r=r2 if H
(3) t=t1 if true
(4) t=t2 if D
(5) y=t+r if E
While this would have the advantage of decreasing duplication, it does not eliminate the need for
predicate-sensitive analysis. Predicate relationship information is still needed to determine the reaching
definitions and associated predicates, and to determine the order of the copy operations. For example,
both of the statements (3) and (4) defining t in the previous sequence could be committed. The literal
predicate true is always true, and predicate D could be true as well. For the use of t in (5) to get the
correct definition, statement (4) cannot be executed before statement (3). Moreover, other side effects
that degrade performance are introduced. Most important is that the insertion of φ-functions adds data
dependences. For example, a true dependence is introduced between the definition of t1 and its use in (3).
In addition, false dependences are re-introduced. An example is the output dependence between the two
definitions of t. Thus, SSA with the usual φ-function implementation does not give the desired scheduling
flexibility.
Block predicates are also important to the PSSA transformation. PSSA uses predicate OR statements to
redefine the block predicates as the union of the FPPs associated with the paths that reach the block. PSSA
does not simply duplicate every path through the hyperblock. Duplication only occurs when necessary to
remove false dependences. When there is only one version of all operands reaching a statement, only one
version of the statement is required. This is the case with v1=y1+5 in Figure 5. The variable y1 is the
only version live in node E. This statement is guarded by E, a block predicate created by taking the logical
OR of EBAGF, EBAHF, ECAGF, ECAHF, EDCAGF, and EDCAHF. As long as control reaches node E,
regardless of the path taken, we will execute and commit the statement v1=y1+5.
4.2 Post-Optimization Clean-up
After optimization is applied to code in PSSA form, a clean-up phase is run to remove unnecessary code
and to assure consistent code outside of the hyperblock.
The PSSA implementation described in this paper generates cmpp statements for every path and block.
These are entered into the PSSA data structure that maintains information about the relationships between
the predicates they define, which provides maximum flexibility during optimization. However, some of these
FPP definitions may not be used, and the corresponding cmpp operations will be discarded, reducing the
code size significantly.
Finally, to assure correct execution following the hyperblock, PSSA inserts copy operations assigning
the original variable names to all renamed definitions that are live out of the hyperblock. These are placed
on the appropriate exit of the hyperblock. For example, the exit branch guarded by L in Figure 4 would
include copy operations for the renamed variables that are live out of the hyperblock at this exit.
5 Hyperblock Scheduling Optimizations
In this section, we describe how PSSA enables Predicated Speculation (PSpec) and Control Height Reduction
(CHR) for aggressive instruction scheduling. PSpec allows operations to be executed before their
guarding predicates are determined and CHR allows the guarding predicates to be determined as soon
as possible, reducing the number of operations that need to be speculated. Used together with PSSA,
we demonstrate that we can schedule the code at its earliest schedulable cycle, assuming a machine with
unlimited resources.
F=true if true 1
AGF,AHF cmpp.un.uc z1>7 if F 3
BAGF,CAGF cmpp.un.uc t1>r1 if AGF 4
BAHF,CAHF cmpp.un.uc t1>r2 if AHF 4
LBAGF,EBAGF cmpp.un.uc w1>2 if BAGF 5
LBAHF,EBAHF cmpp.un.uc w1>2 if BAHF 5
ECAGF, EDCAGF cmpp.un.uc t1>7 if CAGF 5
ECAHF, EDCAHF cmpp.un.uc t1>7 if CAHF 5
t2=t1-s if true 2
br out if LBAGF 5
br out if LBAHF 5
Figure 6: Extended code example after the PSpec optimization has been applied. Statements (other than the first
statement) predicated on true have been speculated.
5.1 Predicated Speculation
This section describes how to perform speculation on PSSA-transformed code. In general, speculation is
used to relieve constraints which control dependences place on scheduling. One can speculatively execute
operations from the likely-taken path of a highly-predictable branch, by scheduling those operations before
their controlling branch [20]. Similarly, Predicated Speculation (PSpec) will schedule a normal operation
above the cmpp operation it is dependent upon, optimizing a hyperblock's execution time.
PSpec handles placement of the speculated predicated operation in a uniform manner. PSpec schedules
a normal operation at its earliest schedulable cycle. When speculating an operation, the operation is
scheduled earlier than the operation it is control dependent on, and is predicated on true. We assume that
any exceptions raised by the speculated operations will be taken care of using architecture features such
as poison bits [7].
PSpec(normal_op)
{
  if (normal_op.guarding_predicate not defined by
      normal_op.earliest_schedulable_cycle)
  {
    if (multiple defs of normal_op.target exist)
    {
      rename(normal_op.target);
      update_uses(normal_op.target);
    }
    normal_op.schedule(earliest_schedulable_cycle);
    normal_op.set_predicate(true);
  }
  else
  {
    normal_op.schedule(earliest_schedulable_cycle);
  }
}
Figure 7: Basic PSpec Algorithm.
5.1.1 Instruction Scheduling with Speculation
To demonstrate the usefulness of PSSA in enabling PSpec, Figure 6 shows the code from Figure 5 after
the PSpec optimization has been applied. The assignments to r1 and r2 are examples of speculated
operations. Notice that based on dependences, they could both be scheduled at cycle one which would
have been impossible without renaming.
During predicated speculation, each operation is considered sequentially, beginning with the first instruction
in the hyperblock. If it is a normal, non-store operation, PSpec compares its earliest schedulable
cycle with the cycle in which its guarding predicate is currently defined. If the operation can be scheduled
earlier than its guarding predicate, the operation is predicated on true and scheduled at its earliest
schedulable cycle.
Recall that PSSA has not performed full renaming, so further renaming may be required by PSpec. An
example is the definition of y1 in Figure 5. If we speculate any of the definitions of y1 by predicating them
on true without renaming, incorrect code can result. Consequently, we must rename the operations being
speculated. The results of applying this to the 6 definitions of y1 (now y1, y2, y3, y4, y5,and y6) appear
in
Figure
6. Speculation and renaming may require the duplication of operations using the definition being
speculated, since there may now be multiple reaching definitions. When speculating y1, the operation
v1=y1+5 had to be duplicated and guarded on the appropriate FPP (though in Figure 6 these statements
are shown after they, too, have been speculated). This is possible because PSSA previously created all the
necessary FPPs and path information.
If the guarding predicate has been defined by the operation's earliest schedulable cycle, we do not
apply PSpec. It is again scheduled at its earliest schedulable cycle, but guarded by the guarding predicate
assigned by PSSA. The instruction z1=w1+5 is an example. The algorithm for PSpec instruction scheduling
is shown in Figure 7.
Using PSpec, the hyperblock can now be scheduled in 6 cycles as compared to 9 cycles in Figure 5.
Since PSpec is applied whenever the definition of the operation's guarding predicate occurs later than
the earliest schedulable cycle of the operation, we could reduce the number of operations that need to be
speculated by moving the definition of the guarding predicates earlier. The goal of the next optimization,
Control Height Reduction, is to allow predicates to be defined as early as possible.
5.1.2 Branches and Speculation
We chose not to PSpec branches. Therefore, a branch statement's earliest schedulable cycle is the one in
which its guarding predicate is known. However, if a branch has been predicated on its block predicate
by PSSA (because it does not have multiple operand versions reaching it) then it may be unnecessarily
delayed in scheduling by waiting for that block predicate to be computed. As shown in Figure 6, we may
choose to duplicate this statement, much as we do in normal PSpec, and guard the execution of these
duplicates on their respective FPPs, instead of predicating the single instruction on its block predicate.
5.2 Control Height Reduction
Control Height Reduction (CHR) eases control constraints between multiple control statements. CHR
allows successive control operations on the control path to be scheduled in the same cycle, effectively
reducing control dependence height. For example, in the code in Figure 6, the control comparisons for
z1?7 and t1?r1 are scheduled in cycles 3 and 4, respectively. However, the second comparison is only
waiting for the definition of its guarding predicate AGF.
To schedule it earlier, consider the PSSA dependence graph in Figure 5. The definition of BAGF (defined
by the condition t1?r1), is control dependent on the definition of AGF (defined by the condition z1?7). We
could define BAGF directly as the logical AND of the conditions z1?7 and t1?r1 removing the dependence
on the definition of AGF. This AND expression could be scheduled in cycle 3.
Control Height Reduction was proposed in [27]. It was successfully used to reduce the height of control
recurrences found in loops when applied to superblocks. A superblock is a selected trace of basic blocks
through the control flow graph containing only one path of control [26]. The path-defining aspects of PSSA
F=true if true 1
AGF,AHF cmpp.un.uc z1>7 if F 3
BAGF, CAGF cmpp.an.an z1>7 if true 3
BAGF, CAGF cmpp.an.ac t1>r1 if true 3
BAHF, CAHF cmpp.ac.ac z1>7 if true 3
BAHF, CAHF cmpp.an.ac t1>r2 if true 3
LBAGF,EBAGF cmpp.an.an z1>7 if true 3
LBAGF,EBAGF cmpp.an.an t1>r1 if true 3
LBAGF,EBAGF cmpp.an.ac w1>2 if true 3
LBAHF,EBAHF cmpp.ac.ac z1>7 if true 3
LBAHF,EBAHF cmpp.an.an t1>r2 if true 3
LBAHF,EBAHF cmpp.an.ac w1>2 if true 3
ECAGF,EDCAGF cmpp.an.an z1>7 if true 3
ECAGF,EDCAGF cmpp.ac.ac t1>r1 if true 3
ECAGF, EDCAGF cmpp.an.ac t1>7 if true 3
ECAHF,EDCAHF cmpp.ac.ac z1>7 if true 3
ECAHF,EDCAHF cmpp.ac.ac t1>r2 if true 3
ECAHF, EDCAHF cmpp.an.ac t1>7 if true 3
t2=t1-s if true 2
br out if LBAGF 3
br out if LBAHF 3
Figure 8: Extended example after PSpec and CHR optimizations have been applied. Cmpp instructions
displayed in italics define predicates that are not used after optimization. Therefore, the statements can
be removed from the final code.
allow our algorithm to effectively apply CHR to predicated hyperblocks, since the full-path predicates
expose all of the original, separate paths throughout the hyperblock.
Schlansker et al. [28] recently expanded on their previous research, applying speculation prior to
attempting height reduction. Speculation is needed to remove dependences between the branch conditions
that need to be combined to accomplish the reduction. However, in that work, speculation was limited
to operations that would not overwrite a live register or memory value if speculated, since they did not
use renaming. In Figure 5, the cmpp operation defining BAGF and CAGF is shown scheduled at cycle 5 due
to dependences on t1 and r1. PSSA allows us to apply PSpec and schedule these definitions in cycle 1,
making the cmpp available for CHR as shown in Figure 8.
5.2.1 Instruction Scheduling with PSpec and CHR
During instruction scheduling, PSpec is performed as described in Section 5.1.1. During the same sequential
pass through instructions, for each control operation (cmpp), CHR is performed if possible.
Recall that the operations in Figure 5 are scheduled in the order given in the PSSA hyperblock. Like
PSpec, CHR compares an operation's earliest schedulable cycle with when it must be scheduled if it waited
for its guarding predicate to be defined. If it does not need to wait on the definition of its guarding
predicate, it is simply scheduled at its earliest schedulable cycle. Without PSpec, the definition of BAGF
was waiting on the definitions of t1 and r1. With PSpec, it is only waiting on the definition of its guarding
predicate. Therefore, it is beneficial to control height reduce.
By ANDing the condition of the current definition with the condition that defined its guarding predicate,
we can schedule this definition earlier. If the definition of the guarding predicate involved conditions that
were ANDed as well, all of the conditions must be included, so the number of cmpp statements needed to
define the current operation increases. The .a tag on each of these cmpp statements indicates that all of
them are required for the final definition.
Consider the operations z1>7, t1>r1 and t1>7 in Figure 5. We control height reduce these operations
in Figure 8, since they are all schedulable in cycle 3 based on our scheduling constraints. The definition of
ECAGF now describes the combination of z1>7 being true AND t1>r1 having a value of false AND t1>7
having a value of true. We implement this logical AND using the .ac and .an qualifiers. The definition
of ECAGF requires that the conditions z1>7 and t1>7 and the condition !(t1>r1) evaluate to true for the
FPP to get a value of true. If any one of the requirements is not met, the FPP will be set to false. The
compares can be performed in the same cycle [19], allowing multiple links in a control path to be defined
simultaneously. The algorithm for CHR is found in Figure 9.
CHR(cmpp_op)
{
  if (cmpp_op.guarding_pred defined
      by cmpp_op.earliest_schedulable_cycle)
  {
    cmpp_op.schedule(cmpp_op.earliest_schedulable_cycle)
  }
  /* Apply Control Height Reduction */
  else
  {
    while (more_stmts_defining(cmpp_op.guarding_pred))
    {
      next_def = next_defining_stmt(cmpp_op.guarding_pred)
      copy = duplicate(next_def)
      copy.schedule(next_def.get_scheduling_time())
      copy.predicate_on(next_def.get_guarding_pred())
      copy.set_define(cmpp_op.get_pred_defined())
      copy.set_tag_to(a)
    }
    cmpp_op.schedule(next_def.get_scheduling_time())
    cmpp_op.predicate_on(next_def.get_guarding_pred())
    cmpp_op.set_tag_to(a)
  }
}
Figure 9: Basic Control Height Reduction Algorithm.
Using PSpec and CHR on PSSA-transformed code results in the 4 cycle schedule shown in Figure 8.
Note that the operations shown in italics can be removed in a post-pass because these operations define
predicates that are never used. Using predicated speculation and control height reduction together on
PSSA-transformed code allows every operation to be scheduled at its earliest schedulable cycle.
6 Results
We have implemented algorithms to perform PSSA, CHR and PSpec on hyperblocks in the Trimaran
System (Version 2.00). We collect profile-based execution weights for operations in the codes and schedule
operations with an assumed one-cycle latency in order to calculate execution time. Additionally, we
conservatively assume that a load is dependent on all prior stores along a given path, and that a store is
dependent on prior stores as well. We also ensure that all instructions along a path leading to a branch
out of the hyperblock are executed prior to exiting the hyperblock.
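A small Python sketch of that conservative memory-dependence assumption (the representation and names are ours, not Trimaran's):

def conservative_mem_deps(ops):
    # ops: (kind, name) pairs in path order, kind in {'load', 'store', 'other'}.
    # A load or store is assumed to depend on every store that precedes it.
    deps, earlier_stores = [], []
    for kind, name in ops:
        if kind in ('load', 'store'):
            deps.extend((name, s) for s in earlier_stores)
        if kind == 'store':
            earlier_stores.append(name)
    return deps   # list of (dependent operation, store it must follow)

path = [('store', 'st1'), ('load', 'ld1'), ('store', 'st2'), ('load', 'ld2')]
print(conservative_mem_deps(path))
# [('ld1', 'st1'), ('st2', 'st1'), ('ld2', 'st1'), ('ld2', 'st2')]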
Figure 10 shows normalized execution time when applying our optimizations for several Trimaran
benchmarks: fib, mm, wc, fir, wave, nbradar (a Trimaran media benchmark), qsort, alvinn (from
SPECFP92), compress (from SPECINT95), and li (from SPECINT95).
Figure 10: Executed cycles normalized to the number of cycles to execute the original code produced by
Trimaran for a 16-issue machine.
These codes are described in the
Trimaran Benchmark Certification [2]. The original execution times are created from the default Trimaran
settings, with the exception that the architecture issue rate is set to 16. Execution time is estimated by
summing together the frequency of execution of each hyperblock multiplied by the number of cycles it takes
to execute the hyperblock assuming a perfect memory system. Infinite results do not restrict the number
of operations issued per cycle. 16-way results are obtained by dividing each cycle which has been scheduled
with more than 16 operations into ceiling(total operations scheduled in cycle / 16) cycles. The
results are normalized to the original schedule generated by Trimaran for a 16-issue machine and scheduled
16-way. The optimized results show the performance after applying PSSA, PSpec, and CHR. The results
show that using PSSA with PSpec and CHR results in a significant reduction in executed cycles.
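A sketch of how such an estimate could be computed, in Python with invented frequencies and schedules rather than the actual Trimaran data:

from math import ceil

def cycles_16way(ops_per_cycle, width=16):
    # split any cycle holding more than `width` operations into
    # ceil(ops / width) issue cycles
    return sum(max(1, ceil(n / width)) for n in ops_per_cycle)

def estimated_time(hyperblocks, width=16):
    # hyperblocks: (execution_frequency, list of operation counts per scheduled cycle)
    return sum(freq * cycles_16way(sched, width) for freq, sched in hyperblocks)

# One hyperblock executed 1000 times, packing 40, 10 and 3 operations
# into three scheduled cycles -> 3 + 1 + 1 = 5 cycles per execution.
print(estimated_time([(1000, [40, 10, 3])]))   # 5000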
Figure 11 shows the average number of operations executed per cycle for the configurations examined
in Figure 10. In comparing the two graphs for the 16-way results, 3-4 times as many instructions are
issued per cycle after applying PSSA, PSpec, and CHR, and this resulted in a reduction in execution time
ranging from 12% to 68%. Since PSpec and CHR as applied to PSSA code have the effect of removing the
restrictions of control dependence, the optimized infinite results provide a picture of "best case" instruction
level parallelism. Inspection of the optimized infinite results of alvinn, compress, and li shows that, given
current hyperblock formation, peak IPC is somewhat limited.
The renaming required by PSSA and PSpec also significantly increases register pressure. Trimaran's
ISA (Playdoh) supports 4 register files: general purpose, floating point, branch, and predicate [2, 19].
Figure 11: Weighted average number of operations scheduled per cycle for hyperblocks when using PSSA
with Predicated Speculation and Control Height Reduction. Note that several of the "Optimized infinite"
results are greater than 16 - the issue width simulated in these experiments.
Figure 12: Weighted average register pressure in hyperblocks when using PSSA with Predicated Speculation
and Control Height Reduction. Shown from left to right for each benchmark is the general purpose file,
predicate file, branch file, and floating point file (zero utilization for some benchmarks).
Figure 13: Static and Dynamic Code Expansion normalized to original code size. Dynamic code expansion
indicates an increase in the working set size to be supported by the instruction cache.
Figure
12 shows the average number of live registers for the original code and the optimized code using
PSSA, PSpec and CHR. The average live register results are weighted by the frequency of hyperblock
execution. For example, matrix multiply has on average 17 live general purpose registers in the original
code, and 54 live general purpose registers after optimization. Though the increase in utilization of all
these register files is notable, the weighted average utilization mostly still remains within the reported
register file sizes (128 general purpose, 128 floating point, 8 branch, and 64 predicate) [3].
Additionally, PSSA combined with aggressive PSpec and CHR significantly increases code size - both
static and dynamic. Aggressive and resource insensitive application of CHR and PSpec aims to reduce
cycles required to schedule at the cost of duplicated code specialized for particular paths (in the case of
PSpec) or duplicated code for faster computation of predicates (in the case of CHR). Figure 13 shows
both the static and dynamic code expansion of the PSSA, PSpec, and CHR optimized code over the
original code. We calculate static code expansion by comparing the number of static operations in the
optimized code with the number of static operations in the original code. Dynamic code expansion is
measured similarly, with the exception that each static operation is weighted by the number of times that
it is executed (as calculated by Trimaran's profile-based region weights). This "dynamic code expansion"
is intended to capture the run-time effect that the introduced duplicated code will have on the memory
system. Dynamic code expansion indicates an increase in the working set size to be supported by the
instruction cache.
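The two metrics can be read as the following small computation (a Python sketch with invented operation counts, not the benchmark data):

def code_expansion(orig_ops, opt_ops):
    # ops are (operation, execution_count) pairs; static expansion compares
    # operation counts, dynamic expansion weights each operation by its
    # profiled execution count
    static = len(opt_ops) / len(orig_ops)
    dynamic = sum(w for _, w in opt_ops) / sum(w for _, w in orig_ops)
    return static, dynamic   # values above 1.0 mean the optimized code grew

orig = [('a', 100), ('b', 100), ('c', 1)]
opt = [('a', 100), ('a_dup', 100), ('b', 100), ('c', 1)]
print(code_expansion(orig, opt))   # (1.33..., 1.49...)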
7 Related Work
Predicated execution presents challenges and prospects that researchers have addressed in a variety of
ways. Mahlke et al. [21] showed that predicated execution can be used to remove an average of 27% of the
executed branches and 56% of the branch mispredictions. Tyson also found similar results and correlated
the relationship between predication and branch prediction [29].
In an effort to relieve some of the difficulties related to applying compiler techniques to predicated
code, Mahlke et al. [22] defined the hyperblock as a single-entry, multiple-exit structure to help support
effective predicated compilation. These hyperblocks are formed via selective If-conversion [5, 24] - a
technique that replaces branches with predicate define instructions. The success of predicated execution
can depend greatly on the region of the code selected to be included in the predicated hyperblock. August
et al. [9] relate the pitfalls and potentials of hyperblock formation heuristics that can be used to guide the
inclusion or exclusion of paths in a hyperblock. Warter et al. [30] explore the use of Reverse If-conversion
for exposing scheduling opportunities in architectures lacking support for predicated execution as well as
for re-forming hyperblocks to increase efficiency for predicated code [9, 30].
The challenges of doing data-flow and control-flow analysis on hyperblocks have also been addressed.
Since hyperblocks include multiple paths of control in one block, traditional compiler techniques are often
too conservative or inefficient when applied to them. Methods of predicate-sensitive analysis have been
devised to make traditional optimization techniques more effective for predicated code [15, 18]. The work
presented in [11] (and expanded upon in this work) extended the localized predicate-sensitive analysis
presented in [15, 18] to complete path analysis through the hyperblock. Path-sensitive analysis has
previously been found useful for traditional data-flow analysis [6, 10, 16]. We use this specialized path
information to accomplish PSSA (a predicate-sensitive form of SSA [13, 12]) which enables Predicated
Speculation and Control Height Reduction for hyperblocks that have previously been examined only in the
presence of the single path of control found in superblocks [26, 27, 28].
Moon and Ebcioglu [23] have implemented selective scheduling algorithms, which can schedule operations
at their earliest possible cycle for non-predicated code. Our work extends theirs for predicated code,
by allowing earliest possible cycle scheduling using predicated renaming with full-path predicates.
8 Implementing PSSA in IA-64
Implementing PSSA using the IA-64 ISA [3] would be straightforward with the exception of the predicate
OR statement we introduced. We found this OR statement to be very useful in efficiently combining path
information in order to eliminate unnecessary code expansion. If this instruction were not explicitly added
to IA-64 then it could be implemented by transferring the predicate register file into a general register
using the move from predicate instruction in IA-64. The general purpose masking instruction would then
be used to mask all but the bits corresponding to the sources of the predicate OR instruction. A result of
zero evaluates to false, and anything else evaluates to true.
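The workaround can be modelled at the bit level as follows; this is a Python illustration of the idea only, not IA-64 code, and the register and predicate numbers are invented:

def predicate_or(pr_file_bits, source_preds):
    # pr_file_bits: an integer whose bit i mirrors predicate register i,
    # as read into a general register; mask all but the source predicates
    # and treat any non-zero result as true.
    mask = 0
    for p in source_preds:
        mask |= 1 << p
    return (pr_file_bits & mask) != 0

# predicates 3, 5 and 9 are the sources; only predicate 5 is currently set
print(predicate_or(1 << 5, [3, 5, 9]))   # True
print(predicate_or(1 << 7, [3, 5, 9]))   # False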
IA-64, unlike the Playdoh ISA, places limits on compare instructions. For example, conditions that are
included in logical AND compare statements can only compare a variable to zero. Specifically, the statement
LBAGF,EBAGF cmpp.an.an t1>r1 if true in Figure 8 would not be permitted. In implementing CHR,
we would have to transform the prior expression into two IA-64 statements: a subtraction that places t1-r1
in a general register, followed by an and-type compare of that result against zero.
9 Future Work
When constructing a hyperblock schedule for a specific processor implementation, resource limits will
mandate how many operations can be performed in each cycle. Architectural characteristics such as
issue width, resource utilization, number of available predicate registers, and number of available rename
registers all need to be considered when creating an architecture-specific schedule. The goal of a hyperblock
scheduler is to reduce the execution-height while taking these architectural features into consideration.
In this paper, our goal was to show that PSSA provided an efficient form of renaming and precise path
information to allow all operations to be scheduled at their earliest schedulable cycle. We are currently
examining different PSSA representations to reduce code duplication and the number of full-path predicates
created. Since various control paths through a hyperblock may have different true data dependence
heights, it may provide no advantage to speculate operations that are not on the critical path through
the hyperblock. PSSA could concentrate on only the critical paths through the hyperblock, reducing code
duplication. For non-critical paths, it may be advantageous in PSSA to implement OE-functions combining
different variable names, instead of maintaining renamed variables for each full-path in the hyperblock. At
a point in the hyperblock where all paths join, copy operations could be used to return renamed definitions
to original names. Path definitions could then be restarted at this point. This would reduce the amount
of duplication required for a given operation to use correctly renamed variables. Our future research
concentrates on these issues and creating a more efficient implementation of PSSA.
10 Conclusions
This paper extended [11], where Predicated Static Single Assignment was first introduced. It motivated
the need for renaming and for predicate analysis that extends across all paths of the hyperblock. It
demonstrated how Predicated Static Single Assignment (PSSA), a predicate-sensitive implementation of
SSA that implements renaming using full-path predicates, can be used to eliminate false dependences
for predicated code. We showed the benefit of using PSSA to enable Predicated Speculation (PSpec)
and Control Height Reduction (CHR) during scheduling. Predicated Speculation allows operations to be
executed at their earliest schedulable cycle, even before their guarding predicates are determined. Control
Height Reduction allows guarding predicates to be defined as soon as possible, reducing the amount of
speculation needed.
By maintaining information about each of the original control paths in a hyperblock, PSSA can provide
information that allows precise placement of renamed and speculated code, and allows the correct,
renamed values to be propagated to subsequent operations. The renaming used by PSSA allows more aggressive
speculation, as overwriting live values is no longer a concern. In addition, PSSA supports Control
Height Reduction along every control path using full-path predicates, reducing control dependence depth
throughout the hyperblock.
Our experiments show that PSSA is an effective tool for optimizing predicated code. We presented extended
experiments showing that using PSSA with PSpec and CHR reduces executed cycles by
12% to 68% for a 16-issue machine.
Acknowledgments
We would like to thank the Compiler and Architecture Research Group at Hewlett Packard, University
of Illinois' IMPACT Group, and New York University's ReaCT-ILP Group for providing Trimaran. We
specifically appreciate the time and patience of Rodric Rabbah, Scott Mahlke, Vinod Kathail, and Richard
Johnson in answering many questions regarding the Trimaran system. In addition, we would like to thank
Scott Mahlke for providing useful comments on this paper. This work was supported in part by NSF
CAREER grant No. CCR-9733278, a National Defense Science and Engineering Graduate Fellowship, a
research grant from Intel Corporation, and equipment support from Hewlett Packard and Intel Corporation.
--R
Merced processor and IA-64 architecture
Conversion of control dependence to data dependence.
Improving data-flow analysis with path profiles
Integrated predicated and speculative execution in the IMPACT EPIC architecture.
The IMPACT EPIC 1.0 Architecture and Instruction Set reference manual.
A framework for balancing control flow and predication.
Efficient path profiling.
Predicated static single assignment.
An efficient method of computing static single assignment form.
Efficiently computing static single assignment form and the control dependence graph.
Compiling for the Cydra 5.
Global predicate analysis and its application to register allocation.
Path profile guided partial dead code elimination using predication.
HP make EPIC disclosure.
Analysis techniques for predicated code.
HPL PlayDoh architecture specification: Version 1.0.
The Multiflow Trace Scheduling compiler.
Characterizing the impact of predicated execution on branch prediction.
Effective compiler support for predicated execution using the hyperblock.
Parallelizing nonnumerical code with selective scheduling and software pipelining.
On Predicated Execution.
The Cydra 5 departmental supercomputer.
Critical path reduction for scalar programs.
Height reduction of control recurrences for ILP processors.
Control CPR: A branch height reduction optimization for EPIC architectures.
The effects of predicated execution on branch prediction.
Reverse if-conversion
High Performance Compilers for Parallel Computing.
--TR
--CTR
Fubo Zhang , Erik H. D'Hollander, Using Hammock Graphs to Structure Programs, IEEE Transactions on Software Engineering, v.30 n.4, p.231-245, April 2004
Mihai Budiu , Girish Venkataramani , Tiberiu Chelcea , Seth Copen Goldstein, Spatial computation, ACM SIGARCH Computer Architecture News, v.32 n.5, December 2004 | renaming;static single assignment;instruction scheduling;predicated execution |
608786 | Achieving Scalable Locality with Time Skewing. | Microprocessor speed has been growing exponentially faster than memory system speed in the recent past. This paper explores the long term implications of this trend. We define scalable locality, which measures our ability to apply ever faster processors to increasingly large problems (just as scalable parallelism measures our ability to apply more numerous processors to larger problems). We provide an algorithm called time skewing that derives an execution order and storage mapping to produce any desired degree of locality, for certain programs that can be made to exhibit scalable locality. Our approach is unusual in that it derives the transformation from the algorithm's dataflow (a fundamental characteristic of the algorithm) instead of searching a space of transformations of the execution order and array layout used by the programmer (artifacts of the expression of the algorithm). We provide empirical results for data sets using L2 cache, main memory, and virtual memory. | Introduction
The widening gap between processor speed and main memory speed has generated interest in compile-time
optimizations to improve memory locality (the degree to which values are reused while still in cache [WL91]).
A number of techniques have been developed to improve the locality of "scientific programs" (programs that
use loops to traverse large arrays of data) [GJ88, WL91, Wol92, MCT96, Ros98]. These techniques have
generally been successful in achieving good performance on modern architectures. However, the possibility
that processors will continue to outpace memory systems raises the question of whether these techniques can
be \scaled" to produce ever higher degrees of locality.
We say a calculation exhibits scalable locality if its locality can be made to grow at least linearly with the
problem size while using cache memory that grows less than linearly with problem size. In this article, we
show that some calculations cannot exhibit scalable locality, while others can (typically these require tiling).
We then discuss the use of compile-time optimizations to produce scalable locality. We identify a class of
calculations for which existing techniques do not generally produce scalable locality, and give an algorithm
for obtaining scalable locality for a subset of this class. Our techniques make use of value-based dependence
relations [PW93, Won95, PW98], which provide information about the flow of values in individual array
elements among the iterations of a calculation. We initially ignore issues of cache interference and of spatial
locality, and return to address these issues later.
We define the balance of a calculation (or compute balance) as the ratio of operations performed to the
total number of values involved in the calculation that are live at the start or end. This ratio measures the
This is supported by NSF grant CCR-9808694
Figure 1: Three-Point Stencil in Single-Assignment Form
Figure 2: Matrix Multiplication (initialize C to zero, then a triply nested loop accumulates C from A and B)
degree to which values are reused by a calculation, and thus plays a role in determining the locality of the
running code. It is similar to McCalpin's definition of machine balance as the ratio of a processor's sustained
floating point operation rate to a memory system's sustained rate of transferring floating point numbers
[McC95].
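As a concrete illustration of the kind of loop nest Figure 1 refers to, the following is a minimal sketch of a three-point stencil kept in single-assignment form; the array name A, the bounds, the boundary handling, and the three-operation update are assumptions consistent with the balance argument below.

#include <vector>

// Hypothetical sketch of Figure 1: a three-point stencil in single-assignment
// form. Every iteration (t, i) writes its own element A[t][i], so no value is
// overwritten; N values are live on entry (row 0) and N*T are live on exit.
std::vector<std::vector<double>> stencil_single_assignment(
    const std::vector<double>& input, int T)
{
    int N = static_cast<int>(input.size());
    std::vector<std::vector<double>> A(T + 1, std::vector<double>(N, 0.0));
    A[0] = input;
    for (int t = 1; t <= T; t++)
        for (int i = 1; i < N - 1; i++)
            A[t][i] = (A[t-1][i-1] + A[t-1][i] + A[t-1][i+1]) / 3.0;
    return A;
}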
In some cases, limits on the total numbers of operations performed and values produced place absolute
limits on compute balance, and therefore on the locality that could be achieved if that code were run in
isolation. For example, if the entire array A is live at the end of the loop nest shown in Figure 1, the
balance of this nest is approximately 3 (N values are live at entry, and N·T are live at exit, after 3·N·T
operations). If all the values produced had to be written to main memory, and all of the values that are live
come from main memory, then we must generate one unit of memory traffic (one floating point value read
or written) for every three calculations performed. For other codes, such as matrix multiplication (Figure
2), the balance grows with the problem size. Thus, for large matrices, we may in principle achieve very high
cache hit rates.
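A sketch of the matrix multiplication of Figure 2, assuming square n-by-n matrices stored as vectors of rows; the 2n^3 operations over roughly 3n^2 live values are what make its balance grow with the problem size.

#include <vector>

// Sketch of Figure 2: C = A * B, with C initialized to zero first.
void matmul(int n, const std::vector<std::vector<double>>& A,
            const std::vector<std::vector<double>>& B,
            std::vector<std::vector<double>>& C)
{
    for (int i = 0; i < n; i++)              // initialize C to zero
        for (int j = 0; j < n; j++)
            C[i][j] = 0.0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
            for (int k = 0; k < n; k++)
                C[i][j] += A[i][k] * B[k][j];
}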
Note that compute balance depends on information about what values are live. For example, if only
A[T][*] is live at the end of Figure 1, then the balance of this code is 3T . This raises the hope that we can
achieve scalable locality if we do not store the other values in main memory. Compute balance also depends
on the scope of the calculation we are considering. If all elements of A are killed in a second loop nest that
follows the code in Figure 1, and that nest produces only N values, the balance of these two nests could be
higher than the balance of Figure 1 alone. Once again, this raises the hope of improving locality, this time
by keeping values in cache between the two nests.
One way to achieve locality proportional to compute balance would be to require a fully associative cache
large enough to hold all the intermediate values generated during a calculation. Our definition of scalable
locality explicitly rules out this approach.
To achieve scalable locality, we must divide the calculation into an ordered sequence of \stripes" such
that (a) executing the calculation stripe-by-stripe produces the same result, (b) each stripe has balance
proportional to the problem size, (c) the calculations in each stripe can be executed in an order such that
the number of temporary values that are simultaneously live is \small". A value is considered temporary if
its lifetime is contained within the stripe, or if it is live on entry to the entire calculation but all uses are
within the stripe. By \small", we mean to capture the idea of data that will t in cache, without referring to
Figure 3: Tiled Matrix Multiplication (from [WL91]) (C is initialized to zero; outer tile loops jb and kb of size s surround the i, k, j loops)
Figure 4: Time-Step Three Point Stencil (an outer time loop; one inner loop computes each new element from its three neighbors in the previous time step, and a second assignment simply moves values for the next step)
any particular architecture. In particular, we wish to avoid cache requirements that grow linearly (or worse)
with the size of the problem. In some cases, such as the simple stencil calculations discussed in [MW98] we
can limit these sizes to functions of the machine balance; in other cases, such as the TOMCATV benchmark
discussed in Section 4, the cache requirement grows sublinearly with the problem size (for TOMCATV it grows
with the square root of the size of the input, as well as with the balance).
Existing techniques can produce scalable locality on some codes. For example, tiling matrix multiplication
produces scalable locality. Figure 3 shows the resulting code, for tile size s. In our terminology, the
jb and kb loops enumerate the stripes, each of which executes n tiles of size s², and has a balance proportional to s.
Within each stripe, the total number of temporaries that are live simultaneously does not exceed s² + 2s
(for one tile of B and one column of a tile of A and C). Thus, by increasing s to match the machine balance,
we could achieve the appropriate locality using a cache of size O(s²) (ignoring cache interference). However,
there are calculations for which current techniques cannot produce scalable locality, as we will see in the
next section.
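A sketch of the tiled matrix multiplication that Figure 3 refers to, with tile loops jb and kb of size s around the j and k dimensions; the exact loop order and bounds used in [WL91] may differ from this reconstruction.

#include <algorithm>
#include <vector>

// Sketch of the tiled matrix multiplication of Figure 3. For each (jb, kb)
// tile origin, one s-by-s tile of B is reused across all n iterations of i,
// so roughly s*s + 2*s temporaries are live at a time within a stripe.
void matmul_tiled(int n, int s,
                  const std::vector<std::vector<double>>& A,
                  const std::vector<std::vector<double>>& B,
                  std::vector<std::vector<double>>& C)
{
    // C is assumed to have been initialized to zero beforehand.
    for (int jb = 0; jb < n; jb += s)
        for (int kb = 0; kb < n; kb += s)
            for (int i = 0; i < n; i++)
                for (int k = kb; k < std::min(kb + s, n); k++)
                    for (int j = jb; j < std::min(jb + s, n); j++)
                        C[i][j] += A[i][k] * B[k][j];
}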
The remainder of this paper is devoted to a discussion of achieving scalable locality for a class of calculations
we call time-step calculations. Section 2 defines this class of calculations, and shows how to produce
scalable locality for a simple example via time skewing [MW98]. Section 3 generalizes time skewing beyond
the limited class of problems discussed in [MW98]. Section 4 presents empirical studies on benchmark codes.
Section 5 discusses other techniques for improving locality, and Section 6 gives conclusions.
2 Time-Step Calculations and Time Skewing
We say that a calculation is a time-step calculation if it consists entirely of assignment statements surrounded
by structured if's and loops (possibly while loops or loops with break statements), and all loop-carried value
based
flow dependences come from the previous iteration of the outer loop (which we call the time loop). For
example, the three point stencil calculation in Figure 4 is a time-step calculation. It computes a new value of
Figure 5: Three Point In-Place Stencil (from [WL91]) (a time loop around a single loop that updates each element in place from its neighbors)
cur[i] from the values of cur[i-1..i+1] in the previous iteration of t. This flow of values is essentially the
same as that shown in Figure 1. In contrast, the value computed in iteration [t; i] of the "in-place" stencil
shown in Figure 5 is used in iteration [t; i+1], so we do not call this loop nest a time-step
calculation.
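A minimal sketch of the time-step stencil of Figure 4, using the array names cur and old from the text; the averaging formula and bounds are assumptions.

#include <vector>

// Sketch of Figure 4: the first inner loop computes cur[i] from the previous
// time step's values (held in old), and the second assignment merely moves
// values so that the next time step can read them.
void stencil_time_step(int T, std::vector<double>& cur, std::vector<double>& old)
{
    int N = static_cast<int>(cur.size());
    for (int t = 0; t < T; t++) {
        for (int i = 1; i < N - 1; i++)
            cur[i] = (old[i-1] + old[i] + old[i+1]) / 3.0;
        for (int i = 1; i < N - 1; i++)
            old[i] = cur[i];
    }
}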
If only the values from the last time step are live after the end of the time loop, and all values that
are live on entry to the calculation are read in the first time step, the balance of a time-step calculation
is proportional to the number of time steps. Thus, we may be able to achieve scalable locality for such
calculations (by producing stripes that combine several time steps).
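For contrast, a sketch of the in-place stencil of Figure 5; because cur is updated in place, the value written at iteration [t; i] is read again at [t; i+1], which is why this nest is not a time-step calculation. The formula and bounds are again assumptions.

#include <vector>

// Sketch of Figure 5: an in-place three-point stencil. cur[i-1] has already
// been updated in this time step, so values flow within a single t iteration.
void stencil_in_place(int T, std::vector<double>& cur)
{
    int N = static_cast<int>(cur.size());
    for (int t = 0; t < T; t++)
        for (int i = 1; i < N - 1; i++)
            cur[i] = (cur[i-1] + cur[i] + cur[i+1]) / 3.0;
}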
The techniques presented by Wolf and Lam [WL91, Wol92] can be used to achieve scalable locality for
Figure 5, but they cannot be applied to calculations with several loop nests, such as the time-step calculation
in Figure 4. In [MW98], we describe the time skewing transformation, which can be used to achieve scalable
locality for Figure 4. As originally formulated, this transformation could only be applied to time-step stencil
calculations (in which each array element is updated using a combination of the element's neighbors), and
only if the stencil has only one statement that performs a calculation (statements that simply move values,
such as the second assignment in Figure 4, are allowed). In the next section, we describe a more general
form of time skewing. The remainder of this section reviews the original formulation of time skewing, as it
applies to Figure 4.
The essential insight into understanding time skewing is that it applies a fairly conventional combination
of skewing and tiling to the set of dependences that represent the flow of values (rather than memory aliasing).
For example, in Figure 4, the value produced in iteration [t; i] of the calculation is used in iterations [t+1; i-1],
[t+1; i], and [t+1; i+1]. If we had a single loop nest with this dependence pattern, the algorithm of Wolf
and Lam would skew the inner loop with respect to the time loop, producing a fully permutable nest. It
would then tile this loop nest to achieve the appropriate degree of locality.
We can perform this skewing and tiling if we first expand the cur array and forward substitute the value
of old. In the resulting code, there are O(B) values produced in each tile for consumption in the next
tile (where B is the tile size). A stripe of N such tiles produces O(N) values while performing O(N B)
operations on O(N B) temporaries, O(B) of which are live simultaneously. Thus, we may hope to achieve
scalable locality. Unfortunately, expanding the cur array causes each temporary to be placed in a unique
memory location, which does not yield an improvement in memory locality.
To improve locality, we must either recompress the expanded array in a manner that is compatible with
the new order of execution, or perform the skewing and tiling on the original imperfect loop nest (this
requires a combination of unimodular and non-unimodular transformations). Both of these approaches are
discussed in detail in [MW98]. In the code that results from the first, all iterations except those on the
borders between tiles do all their work with a single array that is small enough to fit in cache. This array
should reside entirely in cache during these iterations, allowing us to ignore issues of spatial locality and
cache interference. Furthermore, this technique lets us optimize code that is originally presented in single-assignment
form, like Figure 1. However, this approach taxes the code generation system
that we use to its limits, even for single-statement stencils. This causes extremely long compile times and
produces code with a great deal of additional integer math overhead due to the loop structure [SW98].
3 A General Algorithm for Time Skewing
In this section, we present a more general algorithm for time skewing time-step calculations, using as an
example the code from the TOMCATV program of the SPEC95 benchmark set (shown in Figure 6). We begin
by giving the domain of our algorithm and our techniques to coerce aberrant programs into this domain,
and then present the algorithm itself.
3.1 The Domain of Our Algorithm
As with any time-step calculation, all loop-carried data flow must come from the previous iteration of the time
loop (in TOMCATV, the t loop). Note that the j and i loops of the second nest (the "determine maximum"
nest) carry reduction dependences [Won95], and do not inhibit time skewing.
The j loops in the fourth and sixth nests (the two dimensional loops under "solve tridiagonal") do carry
data flow, so these at first appear to prevent application of time skewing. However, we can proceed with
the algorithm if we treat each column of each array as a single vector value, and do not attempt to block
this dimension of the iteration space (this will have consequences for our cache requirements, as we shall see
below). It may be possible to extend our algorithm to handle cases in which there is a loop that carries a
dependence in only one direction along such a vector, but we have not investigated this possibility. In this
case, the fourth nest carries information forward through the j dimension, and the sixth carries it backward
along this dimension, which rules out skewing in this dimension.
Our algorithm is restricted to the subset of time-step calculations that meet the following criteria.
3.1.1 Affine control flow
All loop steps must be known, and all loop bounds and all conditions tested in if statements must be affine
functions of the outer loop indices and a set of symbolic constants. This makes it possible to describe the
iteration spaces with a set of a-ne constraints on integer variables, which is necessary because we use the
Omega Library [KMP + 95] to represent and transform these spaces. We do allow one exception to this rule,
however: conditions controlling the execution of the outer loop need not be affine. Such conditions may
occur due to breaks, while loops, or simply very complicated loop bounds. These are all handled in the same
way, though we present our discussion in terms of break statements, since this is what occurs in TOMCATV.
3.1.2 Uniform loop depth and restricted intra-iteration data flow
Every statement within the time loop must be nested within the same number of loops, and the flow of
information within an iteration of the time loop must connect identical indices of all loops surrounding the
definition and use. For example, consider the value produced by the last statement in the first nest ("find
residuals"). The value produced in iteration [t; j; i] (and stored in ry(i,j)) is used in iteration [t; j; i] of the
fourth nest. Note that the reference to ry(i,j-1) in this statement does not cause trouble because we have
already given up on skewing in the j dimension.
In some cases we may be able to convert programs into the proper form by simply reindexing the iteration
space. For example, if the first i loop ran from 1 to n-2 and produced ry(i+1,j), we could simply bump
the i loop by 1. If the calculation involves nests of different depths, we add single-iteration loops around
the shallower statements. The third loop nest of TOMCATV (the first in the "solve tridiagonal" set) has only
two dimensions, so we add an additional j loop from 2 to 2 around the i loop.
We perform this reindexing by working backwards from the values that are live at the end of an iteration
of the time loop (in this example, rxm(t) and rym(t), which are used later, and x(*) and y(*), which
Figure 6: The TOMCATV benchmark from SPEC95 (the loop nests inside the time loop: find residuals of iteration t; determine maximum values rxm, rym of the residuals; solve tridiagonal systems (aa,dd,aa) in parallel by LU decomposition; add corrections of iteration t; conditional exit of the time loop)
are used in iteration t + 1). We tag the loops containing the writes that produce these values as "fixed",
and follow the data
flow dependences back to their source iterations (for example, the data
flow to iteration
[t; j; i] of the write to y in the last nest comes from iteration [t; j; i] of the sixth nest when j < n-1, and
[t; i] of the fifth nest when j = n-1). We then adjust the iteration spaces of the loops we reach in this way,
and \x" them. The sixth nest is simply xed, and the fth has a j loop from n-1 to n-1 wrapped around
it (since iteration [t; n-1; i] reads this value). If we ever need to adjust a fixed loop, the algorithm fails (at
least in one dimension). We then follow the data
flow from the statements we reached in this nest, and so
on, until we have explored all data
flow arcs that do not cross iterations of the time loop. Any loops that are
not reached by this process are dead and may be omitted.
3.1.3 Finite inter-iteration data
flow dependence distances
For our code generation system to work, we must know the factor by which we will skew. This means that
we must be able to put known (non-symbolic) upper and lower bounds on the difference between inner loop
indices for each time-loop-carried
flow of values. For example, the first loop nest of TOMCATV reads x(i+1,j),
which was produced in iteration [t-1; j; i+1], so our upper bound on the difference in the i dimension
must be at least 1. We cannot apply our current algorithm to code with coupled dependences (e.g., if the
first nest read x(i+j,j)).
3.2 Time Skewing
Consider a calculation from the domain described above, or a calculation such as TOMCATV for which certain
dimensions are within this domain. All values used in an iteration of the time loop come from within a fixed
distance δl of the same iteration of loop l in the previous time step. Therefore, the flow of information
does not interfere with tiling if we first skew each loop by a factor of δl. This is the same observation we
made for Figure 4 in Section 2. In fact, if we think of all the j dimensions of all seven arrays as a single 7 by
N matrix value, the data flow for TOMCATV is identical to that of Figure 4.
We therefore proceed to skew and tile the loops as described above. This requires that we fuse various
loop nests that may not have the same size. Fortunately, this is relatively straightforward with the code
generation system [KPR95] of the Omega Library. We simply need to provide a linear mapping from the
old iteration spaces to the new. The library will then generate code to traverse these iteration spaces in
lexicographical order. Given g loops l1, ..., lg that are within our domain, and e loops e1, ..., ee that are
not, we produce the iteration space
[ t, nn, l1, ..., lg, e1, ..., ee ] -> [ floor((l1 + δ1 t)/B), ..., floor((lg + δg t)/B), t, nn, l1 + δ1 t, ..., lg + δg t, e1, ..., ee ]
where nn is the number of the nest in the original ordering, and B is the size of the tiles we wish to produce.
This is the same as the formulation given in [MW98], with the addition of the constant levels and levels not
within the domain.
For TOMCATV, we transform the original set of iteration spaces from [t; nn; j; i] to the corresponding skewed and tiled space,
where nn ranges from 1 to 8 (the initialization of rxm and rym counts as a nest of 0 loops).
The resulting g outer loops traverse a set of stripes, each of which contains T·B^g iterations that run
each statement through all the e loops. The g + e inner loops that constitute a tile perform O(B^g · E)
operations on O(B^g · E) floating point values, where E is the total size of the "matrix" that constitutes the
value produced by the e loops executing all of the statements. All but O(B^(g-1) · E) values are consumed by the
next tile, providing stripes with O(B) balance, and, as long as E grows less than linearly with the size of the
problem, the hope of scalable locality. (In TOMCATV, E is a set of seven arrays of size N, while the problem
involves arrays of size N^2, so our cache requirement will grow with the square root of the problem size).
As at the end of Section 2, we are left with the question of how to store these values without either
corrupting the result of the calculation (e.g. if we use the original storage layout) or writing temporaries
to main memory in sufficient quantities to inhibit scalable locality (e.g. by fully expanding all arrays).
In principle, if we know E, we could apply the layout algorithm given in [MW98], producing an array of
temporaries that will fit entirely in cache. It may even be possible to develop an algorithm to perform this
operation with B and E as symbolic parameters. However, in the absence of major improvements to the
implementation of the code generation systems of [KPR95] and [SW98], this method is impractical.
Instead, we simply expand each array by a factor of two, and use t%2 as the subscript in this new
dimension. This causes some temporaries to be written out to main memory, but the number is proportional
to the number of non-temporary values created, not the number of operations performed, so this does not
inhibit scalable locality. This method also forces us to contend with cache interference (which we simply
ignore at this point, though we could presumably apply algorithms for reducing interference to the code we
generate). Finally, our code may or may not traverse memory with unit stride. If possible, the arrays should
be transposed so that a dimension corresponding to the innermost loop scans consecutive memory locations.
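To make the discussion concrete, the following is a hedged sketch of time skewing applied to the one-dimensional stencil of Figure 4, with a skew factor of 1, tile size B, and the two-row t%2 storage mapping described above; the loop bookkeeping is simplified relative to what the Omega Library's code generator would emit.

#include <vector>

// Hedged sketch: tiles run over the skewed index ii = i + t; within a tile,
// all T time steps are executed before moving on. The array is expanded by a
// factor of two, and t%2 selects which of the two rows is written.
void stencil_time_skewed(int T, int B, std::vector<std::vector<double>>& A)
{
    int N = static_cast<int>(A[0].size());   // A has two rows of length N
    A[1][0] = A[0][0];                        // keep the fixed boundary values
    A[1][N-1] = A[0][N-1];                    // visible in both storage rows
    for (int bb = 1; bb <= N - 2 + T; bb += B)        // tile of the skewed index
        for (int t = 1; t <= T; t++)
            for (int ii = bb; ii < bb + B && ii <= N - 2 + T; ii++) {
                int i = ii - t;                        // unskew
                if (i < 1 || i > N - 2) continue;      // outside the data
                A[t % 2][i] = (A[(t-1) % 2][i-1] + A[(t-1) % 2][i]
                               + A[(t-1) % 2][i+1]) / 3.0;
            }
    // On exit, row T%2 of A holds the values of the final time step.
}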
3.2.1 Break statements
We apply our algorithm to a calculation involving a break that is guarded with a non-affine condition as
follows: we create an array of boolean values representing the value of the condition in each iteration, and
convert the statement into an expression that simply computes and saves this value. At the end of each
iteration of the outer loop (which steps through blocks of iterations in the original time loop), we scan this
array to determine if a break occurred in any time step in the time block we have just completed. If it has,
we record the number of the iteration in which the break occurred, roll back the calculation to the beginning
of the time block, and restart with the upper bound on the time loop set to the iteration of the break.
We can preserve the data that are present at the end of each time block by using two arrays for the values
that are live at the ends of time blocks (one for even blocks, and one for odd blocks). This can double the
total memory usage, but will not affect the balance or locality of the calculation (except to the degree that
it changes interference effects).
If it is possible to determine that a break does not affect the correctness of the result, we can avoid the
overhead of the above scheme by simply stopping the calculation at the end of the time block in which the
break occurred. Unfortunately, we know of no way to determine the purpose of a break statement without
input from the programmer (possibly in the form of machine-readable comments within the program itself).
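A hedged sketch of the rollback scheme for a break guarded by a non-affine condition; the function signature, the checkpoint copy, and the per-step flag array are assumptions about one possible software rendering of the idea.

#include <algorithm>
#include <functional>
#include <vector>

// Hypothetical sketch: time steps are run in blocks of B. The break condition
// is only recorded (run_block is expected to set break_seen[t], e.g. via a
// capture); if it fired, the block is rolled back from a checkpoint and
// re-run with the time bound clipped to the break iteration.
void run_with_rollback(
    int T, int B, std::vector<double>& state, std::vector<bool>& break_seen,
    const std::function<void(int, int, std::vector<double>&)>& run_block)
{
    break_seen.assign(T + 1, false);
    for (int t0 = 1; t0 <= T; t0 += B) {
        int t1 = std::min(t0 + B, T + 1);
        std::vector<double> checkpoint = state;    // live data at block entry
        run_block(t0, t1, state);                  // records breaks, never exits early
        for (int t = t0; t < t1; t++)
            if (break_seen[t]) {                   // a break occurred at step t
                state = checkpoint;                // roll back the whole block
                run_block(t0, t + 1, state);       // re-run up to the break step
                return;
            }
    }
}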
4 Empirical Results
REVIEWERS: At this time, I only have results for the TOMCATV benchmark running with
virtual memory. For the final version, I also expect to have results on one or more workstations, such as a
Sun Ultra/60 and several SGI machines, and include a larger set of benchmarks. Based on experiences with
stencil calculations, I expect that these machines will show either a small gain or a small loss in performance,
but that getting peak performance will require manually hoisting some loop-invariant expressions (or getting
a better compiler).
To verify the value of time skewing in compensating for extremely high machine balance, we tested it
using the virtual memory of a Dell 200MHz Pentium system running Linux. This system has 64 M of main
memory, 128K of L2 cache, and 300M of virtual memory paged to a swap partition on a SCSI disk. This
test was designed to test the value of time skewing on a system with extremely high balance.
We transformed the TOMCATV benchmark according to the algorithm given in the previous section, except
for the break statement (this break is not taken during execution with the sample data). We also increased
the array sizes from 513 by 513 to 1340 by 1340, to ensure that all seven arrays could not fit into main
memory (the seven arrays together use about 96 Megabytes).
The original code required over 9 minutes per time step (completing a run with T=8 in 4500 seconds,
and a run with T=12 in 6900 seconds). For the time skewed code, we increased T to 192 to allow for a
sufficiently large block size. This code required under 20 seconds per time step (completing all 192 iterations
in 3500 seconds). Thus, for long runs, performance was improved by more than a factor of 30.
The loop nests produced by the time skewing transformation may be more complicated than the original
loops, so the transformed code may be slower than the original for small problems. For example, when the
original TOMCATV data set is used, the entire data set fits in main memory, but any tile size greater than 2
exceeds the size of the L2 cache. In this case, the time skewed code is slower by a factor of two.
5 Related Work
Most current techniques for improving locality [GJ88, WL91, Wol92, MCT96] are based on the search for
groups of references that may refer to the same cache line, assuming that each value is stored in the address
used in the original (unoptimized) program. They then apply a sequence of transformations to try to bring
together references to the same address. However, their transformation systems are not powerful enough
to perform the time skewing transformation: the limits of the system used by Wolf and Lam are given in
Section 2.7 of [Wol92]; McKinley, Carr, and Tseng did not apply loop skewing, on the grounds that Wolf
and Lam did not find it to be useful in practice. Thus, these transformation systems may all be limited
by the bandwidth of the loops they are able to transform. For example, without the time loop, none of
the inner loops in TOMCATV exhibits scalable locality. Thus, there are limits to the locality produced by any
transformation of the body of the time loop.
Recent work by Pugh and Rosser [Ros98] uses iteration space slicing to find the set of calculations that
are used in the production of a given element of an array. By ordering these calculations in terms of the final
array element produced, they achieve an effect that is similar to a combination of loop alignment and fusion.
For example, they can produce a version of TOMCATV in which each time step performs a single scan through
each array, rather than the five different scans in the original code. However, their system transforms the
body of the time loop, without reordering the iterations of the time loop itself, and is thus limited by the
finite balance of the calculation in the loop body.
Work on tolerating memory latency, such as that by [MLG92], complements work on bandwidth issues.
Optimizations to hide latency cannot compensate for inadequate memory bandwidth, and bandwidth optimizations
do not eliminate problems of latency. However, we see no reason why latency hiding optimizations
cannot be used successfully in combination with time skewing.
6 Conclusions
For some calculations, such as matrix multiplication, we can achieve scalable locality via well-understood
transformations such as loop tiling. This means that we should be able to obtain good performance for
these calculations on computers with extremely high machine balance, as long as we can increase the tile
size to provide matching compute balance. However, current techniques for locality optimization cannot, in
general, provide scalable locality for time-step calculations.
The time skewing transformation described here can be used to produce scalable locality for many such
calculations, though it increases the complexity of the loop bounds and subscript expressions. For systems
with extremely high balance, locality issues dominate, and time skewing can provide significant performance
improvements. For example, we obtained a speedup of more than a factor of 30 when running the TOMCATV benchmark
with arrays that required virtual memory.
--R
Strategies for cache and local memory management by global program transformation.
The Omega Library interface guide.
Code generation for multiple mappings.
Memory bandwidth and machine balance in current high performance computers.
Improving data locality with loop transformations.
Design and evaluation of a compiler algorithm for prefetching.
Time skewing: A value-based approach to optimizing for memory locality
An exact method for analysis of value-based array data dependences
Code generation for memory mappings.
A data locality optimizing algorithm.
Improving Locality and Parallelism in Nested Loops.
--TR
--CTR
Guohua Jin , John Mellor-Crummey, Experiences tuning SMG98: a semicoarsening multigrid benchmark based on the hypre library, Proceedings of the 16th international conference on Supercomputing, June 22-26, 2002, New York, New York, USA
Armando Solar-Lezama , Gilad Arnold , Liviu Tancau , Rastislav Bodik , Vijay Saraswat , Sanjit Seshia, Sketching stencils, ACM SIGPLAN Notices, v.42 n.6, June 2007
Kristof Beyls , Erik H. D'Hollander, Intermediately executed code is the key to find refactorings that improve temporal data locality, Proceedings of the 3rd conference on Computing frontiers, May 03-05, 2006, Ischia, Italy
Michelle Mills Strout , Larry Carter , Jeanne Ferrante , Barbara Kreaseck, Sparse Tiling for Stationary Iterative Methods, International Journal of High Performance Computing Applications, v.18 n.1, p.95-113, February 2004
Chen Ding , Maksim Orlovich, The Potential of Computation Regrouping for Improving Locality, Proceedings of the 2004 ACM/IEEE conference on Supercomputing, p.13, November 06-12, 2004
Zhiyuan Li , Yonghong Song, Automatic tiling of iterative stencil loops, ACM Transactions on Programming Languages and Systems (TOPLAS), v.26 n.6, p.975-1028, November 2004
Chen Ding , Ken Kennedy, Improving effective bandwidth through compiler enhancement of global cache reuse, Journal of Parallel and Distributed Computing, v.64 n.1, p.108-134, January 2004 | machine balance;compute balance;memory locality;storage transformation;scalable locality |
608853 | Deterministic Built-in Pattern Generation for Sequential Circuits. | We present a new pattern generation approach for deterministic built-in self testing (BIST) of sequential circuits. Our approach is based on precomputed test sequences, and is especially suited to sequential circuits that contain a large number of flip-flops but relatively few controllable primary inputs. Such circuits, often encountered as embedded cores and as filters for digital signal processing, are difficult to test and require long test sequences. We show that statistical encoding of precomputed test sequences can be combined with low-cost pattern decoding to provide deterministic BIST with practical levels of overhead. Optimal Huffman codes and near-optimal Comma codes are especially useful for test set encoding. This approach exploits recent advances in automatic test pattern generation for sequential circuits and, unlike other BIST schemes, does not require access to a gate-level model of the circuit under test. It can be easily automated and integrated with design automation tools. Experimental results for the ISCAS 89 benchmark circuits show that the proposed method provides higher fault coverage than pseudorandom testing with shorter test application time and low to moderate hardware overhead. | Table
1 illustrates the Huffman code for an example
test set TD with four unique patterns out of a total of
eighty. Column 1 of Table 1 lists the four patterns,
column 2 lists the corresponding number of occurrences
fi of each pattern Xi , and column 3 lists the
corresponding probability of occurrence pi , given by
fi / |TD|. Finally, column 4 gives the corresponding
Huffman code for each unique pattern. Note that the
most common pattern X1 is encoded with a single 0
bit; that is, e(X1) = 0, where e(X1) is the codeword for
X1. Since no codeword appears as a prefix of a longer
codeword (the prefix-free property), if a sequence of
encoded test vectors is treated as a serial bit-stream, decoding
can be done as soon as the last bit of a codeword
is read. This property is essential since variable-length
codewords cannot be read from memory as words in
the usual fashion.
The Huffman code illustrated in Table 1 can be con-
structed by generating a binary tree (Huffman tree) with
Table 1. Test set encoding for a simple example test sequence of test patterns.
Unique patterns   Occurrences   Probability of occurrence   Huffman codeword   Comma codeword
X1                45            0.5625                      0                  0
X2                15            0.1875                      10                 10
X3                15            0.1875                      110                110
X4                5             0.0625                      111                1110
Fig. 2. An example illustrating the construction of the Huffman
code.
edges labeled either 0 or 1 as illustrated in Fig. 2. Each
unique pattern Xi of Table 1 is associated with a (leaf)
node of the tree, which initially consists only of these
unmarked nodes. The Huffman coding procedure
iteratively selects two nodes vi and vj with the lowest
probabilities of occurrence, marks them, and generates
a parent node vij for vi and vj . If these two nodes are
not unique, then the procedure arbitrarily chooses two
nodes with the lowest probabilities. The edges (vij;vi)
and (vij;vj) are labeled 0 and 1. The 0 and 1 labels
are chosen arbitrarily, and do not affect the amount of
compression [18]. The node vij is assigned a probability
of occurrence pij = pi + pj. This process is
continued until there is only one unmarked node left in
the tree.
Each codeword e(Xi) is obtained by traversing the
path from the root of the Huffman tree to the corresponding
leaf node vi . The sequence of 0-1 values
on the edges of this path provides the codeword e(Xi). The
Huffman coding procedure has a worst case complexity
of O(m^2 log m), thus the encoding can be done in reasonable time. The average number of bits per
pattern lH (the average length of a codeword) is given by
lH = Σ_{i=1}^{m} wi pi, where wi is the length of the codeword
corresponding to test pattern Xi. The average
length of a codeword in our example is therefore given
by lH = 1(0.5625) + 2(0.1875) + 3(0.1875) + 3(0.0625) = 1.68 bits.
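A compact software sketch of the Huffman construction just described, using a priority queue to repeatedly merge the two lowest-probability nodes and reading codewords off root-to-leaf paths; the data layout is an assumption, and the paper's flow would synthesize an FSM decoder rather than use software tables.

#include <functional>
#include <queue>
#include <string>
#include <utility>
#include <vector>

struct Node {
    double p;                   // probability of occurrence
    int left = -1, right = -1;  // children (node indices), -1 for a leaf
};

// Build Huffman codewords for the unique patterns with probabilities prob[i].
std::vector<std::string> huffman_codewords(const std::vector<double>& prob)
{
    std::vector<Node> nodes;
    std::priority_queue<std::pair<double, int>,
                        std::vector<std::pair<double, int>>,
                        std::greater<>> heap;       // min-heap on probability
    for (double p : prob) {
        heap.push({p, (int)nodes.size()});
        nodes.push_back({p});
    }
    while (heap.size() > 1) {                        // merge two lowest nodes
        auto [pa, a] = heap.top(); heap.pop();
        auto [pb, b] = heap.top(); heap.pop();
        heap.push({pa + pb, (int)nodes.size()});
        nodes.push_back({pa + pb, a, b});
    }
    std::vector<std::string> code(prob.size());
    // walk the tree from the root, labelling one edge 0 and the other 1
    std::vector<std::pair<int, std::string>> stack = {{heap.top().second, ""}};
    while (!stack.empty()) {
        auto [n, prefix] = stack.back(); stack.pop_back();
        if (nodes[n].left < 0) code[n] = prefix;     // leaf: n < prob.size()
        else {
            stack.push_back({nodes[n].left, prefix + "0"});
            stack.push_back({nodes[n].right, prefix + "1"});
        }
    }
    return code;
}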
We next compare Huffman coding with equal-length
coding. Let lH (lE) be the average length of a codeword
for Huffman coding (equal-length coding). Since
Huffman coding is optimal, it is clear that lH ≤ lE. We
next show that lH D lE under certain conditions.
Theorem 1. If all unique patterns have the same
probability of occurrence and the number of unique
patterns m is a power of 2, then lH = lE.
Proof: If all the unique patterns in TD have the
same probability of occurrence p = 1/m, the entropy
H(TD) = -Σ_{i=1}^{m} pi log2 pi = log2 m bits. For equal-length
encoding, lE = ⌈log2 m⌉, and if m is a power of
2, then lE = log2 m, which equals the entropy bound.
Therefore, lH = lE for this case.
The above theorem can be restated in a more general
form in terms of the structure of the Huffman tree.
Theorem 2. If the Huffman tree is a full binary tree,
then lH = lE.
Proof: A full binary tree with k levels has 2^k - 1
vertices, out of which 2^(k-1) are leaf vertices. Therefore
if the Huffman tree is a full binary tree with m leaf
vertices, then m must be a power of 2 and the number
of levels must be log m + 1. It follows that every path from the root to a leaf vertex is then of length log m.
If all unique patterns in TD have the same probability
of occurrence and m is a power of 2, then the Huffman
tree is indeed a full binary tree; Theorem 2 therefore
implies Theorem 1. Note that Theorem 2 is sufficient
but not necessary for lH to equal lE. Figure 3 shows a
Huffman tree for which lH = lE = 2, even though it is
not a full binary tree.
The practical implication of Theorem 1 is that
Huffman encoding will be less useful when the probabilities
of all of the unique test patterns are similar.
This tends to happen, for instance, when the ratio of the
number of flip-flops to the number of primary inputs
in the CUT (a ratio defined in Section 4), is low. Theorem 2
suggests that even when this ratio is high, the probability distri-
bution of the unique test patterns should be analyzed to
Fig. 3. An example of a non-full binary tree with lH = lE.
determine if statistical encoding is worthwhile. How-
ever, in all cases we analyzed where this ratio was high, statistical
encoding was indeed effective.
Comma Codes
Although Huffman codes provide optimal test set com-
pression, they do not always yield the lowest-cost decoder
circuit. Therefore, we also employ a non-optimal
code, namely the Comma code, which often leads to
more efficient decoder circuits. The Comma code, also
prefix-free, derives its name from the fact that it contains
a terminating symbol, e.g. 0, at the end of each
codeword.
The Comma encoding procedure first sorts the
unique patterns in decreasing order of probability of
occurrence, and encodes the first pattern (i.e., the most
probable pattern) with a 0, the second with a 10, the
third with a 110, and so on. The procedure encodes
each pattern by adding a 1 to the beginning of the previous
codeword. The codeword for the ith unique pattern
Xi is thus given by a sequence of (i - 1) 1s followed by
a 0. Comma codewords for the unique patterns in the
example test set of Section 2.1 are listed in Column 5 of
Table 1. This procedure has complexity O(m log m) and is simpler than the Huffman encoding procedure.
The Comma code also requires a substantially simpler
decoder DC than the Huffman code. Since each
Comma codeword is essentially a sequence of 1s followed
by a zero, the decoder only needs to maintain
a count of the number of 1s received before a 0 signi-
fies the end of a codeword. The 1s count can then be
mapped to the corresponding test pattern.
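A minimal software model of the Comma decoder just described: it counts 1s until the terminating 0 and maps the count to a pattern index; the hardware decoder DC realizes the same behavior with a binary counter.

#include <vector>

// Decode a Comma-coded bit stream: the codeword for the i-th most probable
// pattern is (i-1) ones followed by a terminating zero, so the decoder only
// counts 1s until a 0 arrives and then emits the corresponding pattern index.
std::vector<int> comma_decode(const std::vector<int>& bits)
{
    std::vector<int> pattern_indices;
    int ones = 0;
    for (int b : bits) {
        if (b == 1) ones++;           // advance the counter
        else {                        // terminating comma: emit a pattern
            pattern_indices.push_back(ones);
            ones = 0;                 // reset the counter
        }
    }
    return pattern_indices;
}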
For a given test sequence TD with m unique patterns
having probabilities of occurrence p1 ≥ p2 ≥ ... ≥
pm, the average length of a Comma codeword is
given by lC = Σ_{i=1}^{m} i·pi. Since the code is non-optimal,
lC ≥ lH. However, the Comma code provides near-optimal
compression, i.e., lim_{m→∞}(lC - lH) = 0, if
TD satisfies certain properties. These hold for typical
test sequences that have a large number of repeated
patterns. We first present the condition under which
Comma codes are near-optimal and then the property
of TD required to satisfy the condition.
A binary tree with leaf nodes X1; X2;:::;Xm is
skewed if the distance di of Xi from the root is given
by
di = i for 1 ≤ i ≤ m-1, and dm = m-1.   (1)
For instance, the Huffman tree of Fig. 2 is a skewed
binary tree with four leaf nodes.
Theorem 3. Let p1 ≥ p2 ≥ ... ≥ pm be the probabilities
of occurrence of the m unique patterns in TD.
Let lH and lC be the average length of the codewords
for Huffman and Comma codes, respectively. If the
Huffman tree for TD is skewed then lC - lH = pm and
lim_{m→∞}(lC - lH) = 0.
Proof: If the Huffman tree for TD is skewed then
lH = Σ_{i=1}^{m-1} i·pi + (m-1)·pm and lC = Σ_{i=1}^{m} i·pi.
Therefore, lC - lH = pm. We also know that p1 ≥
p2 ≥ ... ≥ pm. Therefore, 0 ≤ pm ≤ 1/m, which implies
that lim_{m→∞} pm = 0. Hence lC - lH is vanishingly
small for a skewed Huffman tree and the Comma
code is near-optimal.
Next we derive a necessary and sufficient condition
that TD must satisfy in order for its Huffman tree to be
skewed.
Theorem 4. Let p1 ≥ p2 ≥ ... ≥ pm be the probabilities
of occurrence of the unique patterns in TD.
The Huffman tree for TD is skewed if and only if, for
1 ≤ i ≤ m-2, the probabilities of occurrence satisfy
the condition
pi ≥ Σ_{k=i+2}^{m} pk.   (2)
Proof: We prove sufficiency of the theorem. The necessity
can be proven similarly. Generate the Huffman
tree for the m patterns X1, X2, ..., Xm in TD whose
probabilities of occurrence satisfy (2). Let the leaf node
corresponding to the ith pattern be vi. The two leaf
nodes vm and v_{m-1} corresponding to the patterns Xm
and X_{m-1} with the lowest probabilities pm and p_{m-1}
are first selected, and a parent node v_{m(m-1)}, with the
probability (pm + p_{m-1}), is generated for them. Now,
p_{m-3} ≥ p_{m-2}, and from (2), p_{m-3} ≥ Σ_{k=m-1}^{m} pk. Thus,
p_{m-3} ≥ p_{m-1} + pm. Therefore, the leaf node v_{m-2}
and v_{m(m-1)} are now the two nodes with the lowest
probabilities, and a parent v_{m(m-1)(m-2)} with probability
(pm + p_{m-1} + p_{m-2}) is generated for them.
Similarly, a parent v_{m(m-1)...i} is generated for nodes
vi and v_{m(m-1)...(i+1)}, i ∈ {(m-3), ..., 1}. The process
terminates when the root v_{m(m-1)...1} is generated for leaf
node v1 and v_{m(m-1)...2}. The distance d1 of v1 to the root
is therefore 1. Similarly di = i, i ∈ {2, 3, ..., m-1}.
Leaf nodes vm and v_{m-1} are equidistant from the root,
since they share a common parent v_{m(m-1)}, thus dm =
m-1. Therefore, di satisfies (1) for i ∈ {1, 2, ..., m},
and the Huffman tree is skewed.
We next determine the relationship between jTDj and
m, the number of unique patterns in the test set when
the Huffman tree is skewed. We show that jTDj must
be exponential in m for the Huffman tree to be skewed.
This property is often satisfied by deterministic test sets
for sequential circuits with a large number of flip-flops
but few primary inputs.
Theorem 5. Let |TD| and m be the total number of
patterns and the number of unique patterns in TD, respectively.
If the Huffman tree for TD is skewed then
|TD| = ω(1.62^m), where ω(·) denotes an asymptotic
lower bound in the sense that f(m) = ω(g(m)) implies
lim_{m→∞} f(m)/g(m) = ∞.
Proof: Let f1 ≥ f2 ≥ ... ≥ fm be the numbers
of occurrence of the unique patterns in TD. Then
|TD| = Σ_{i=1}^{m} fi. We know that f_{m-1} ≥ fm ≥ 1, and
f_{m-2} ≥ fm ≥ 1. From (2), f_{m-3} ≥ f_{m-1} + fm ≥ 2. Similarly,
f_{m-4} ≥ 3 and f_{m-5} ≥ 5. The lower bounds on
fm, f_{m-1}, f_{m-2}, ... thus form the Fibonacci series 1, 1, 2,
3, 5, ...; therefore, |TD| ≥ 1 + Σ_{i=1}^{m} si, where si is
the ith Fibonacci term, given by si = (1/√5)(φ^{i+1} - φ̂^{i+1}),
where φ = (1/2)(1 + √5) and φ̂ = (1/2)(1 - √5) [21].
Since |φ̂| < 1, the sum grows as φ^m. For even m,
it follows that |TD| = ω(1.62^m). The proof
for odd m is similar.
Comma codes, being non-optimal, do not always
yield better compression than equal-length codes. The
following theorem establishes a sufficient condition
under which Comma codes perform worse than equal-length
codes.
Theorem 6. Let p1 ≥ p2 ≥ ... ≥ pm be the probabilities
of occurrence of the unique patterns in TD.
If pm > 2⌈log2 m⌉ / (m(m+1)), then lC > lE, where lC (lE) is the
average codeword length for Comma (equal-length)
coding.
Proof: We know that lE = ⌈log2 m⌉ and lC = p1 +
2p2 + ... + m·pm. Since p1 ≥ p2 ≥ ... ≥ pm, lC ≥ (1/2)·pm·m(m+1). Hence lC > lE
if (1/2)·pm·m(m+1) > ⌈log2 m⌉, from which the theorem
follows.
For example, in the test set for s35932 obtained
using Gentest, m = 86 and pm = 0.012, while
2⌈log2 86⌉/(86 · 87) = 0.0019. Hence, Comma coding performs worse than equal-length coding for this
test set.
Note that Theorem 6 does not provide a necessary condition
for which lC > lE. In fact, it is easy to construct
data sets for which lC > lE even though pm ≤
2⌈log2 m⌉/(m(m+1)). The following theorem provides a tighter condition
under which Comma codes perform worse than
equal-length codes.
Theorem 7. Let p1 ≥ p2 ≥ ... ≥ pm be the probabilities
of occurrence of the unique patterns in TD, and
let lC (lE) be the average codeword length for Comma
coding (equal-length coding). Let α = min_i {pi / p_{i+1}}.
If pm exceeds the threshold given by condition (3), stated in terms of α, m, and ⌈log2 m⌉, then lC > lE.
Proof: We first note that pi ≥ α·p_{i+1} ≥ α²·p_{i+2} ≥
... ≥ α^{m-i}·pm, 1 ≤ i ≤ m. Since lC = Σ_{i=1}^{m} i·pi, it
follows that lC ≥ α^{m-1}·pm + 2α^{m-2}·pm + 3α^{m-3}·pm +
... + (m-1)·α·pm + m·pm. Let E be defined as
such that lC ≥ E. From (4), we get
From (4) and (5), we obtain
me
fim1fi.mC21/Cm and the theorem follows.
If m 1 then (3) can be simplified to pm
fi.fim1/2mdlo.fig2m1e/ . In addition, if mini f ppiCi 1 g exceeds 2, i.e.
Table
2. Huffman and Comma code words for the patterns in
the test set of s444.
Test Occur- Probability of Huffman Comma
pattern rences occurrence codeword codeword
every data pattern occurs twice as often the next most-frequent
data pattern, then we can replace fi by2in
(3) to obtain the following simpler sufficient condition
under which Comma codes perform worse than equal-length
codes.
Corollary 1. Let p1 p2 pm be the probabilities
of occurence of the unique patterns in TD; and
let lC .lE / be the average codeword length for Comma
coding .equal-length coding/.IffiDmini f ppiCi 1 g > 2
dlog me
and pm > 2m12m2 ; then lC > lE .
However, the skewing probability distribution property
of Theorem 4 appears to be easy to satisfy in most
cases. The probabilities of occurrence of patterns for a
typical case (the s444 test set) are shown in Table 2 in
Section 3. The decrease in compression resulting from
the use of Comma codes, instead of optimal (Huffman)
codes to compress such test sets, which is given by
Deterministic Built-in Pattern Generation 103
lC lH D pm from Theorem 3, is extremely small in
practice. For the s444 test set, pm D 0:0005;lH D
1:2121, and therefore lC D 1:2126, and the compression
loss is only one bit. Therefore, both Huffman and
Comma codes can efficiently encode sequential circuit
test sets.
3. TGC Design
In this section, we illustrate our methods for constructing
TGCs employing statistical encoding of precomputed
test sequences. We illustrate the steps involved
in encoding and decoding with the test set for the s444
benchmark circuit as an example.
Huffman Coding
The first step in the encoding process is to identify the
unique patterns in the test set. A codeword is then
developed for each unique pattern using the Huffman
code construction method outlined in Section 2. The
Huffman tree used to construct codewords for the patterns
of s444 is shown in Fig. 4. The unique test patterns
and the corresponding codewords for s444 are
listed in Table 2. The original (unencoded) test set TD,
which contains 1881 test patterns of 3 bits each, requires
1881 × 3 = 5643 bits of memory for storage. On
the other hand, the encoded test set has only 1.2121
bits per codeword, and hence requires only 2280 bits
of memory. Therefore, Huffman encoding of TD leads
to 59.59% saving in storage, while both the order as
well as the contiguity of test patterns are preserved.
Once the encoded test set TE is determined by applying
the Huffman encoding procedure to TD, it is
Fig. 5. Illustration of the proposed test application technique.
stored on-chip and read out one bit at a time during
test application. The sequence generator SG of Fig. 1
is therefore a ROM that stores TE. The test patterns
in TD can be obtained by decoding using a simple
finite-state machine (FSM) [20]. Table-lookup based
methods that are typically used for software implementations
of Huffman decoding are inefficient for on-chip,
hardware-implemented decoding.
The decoder DC is therefore a sequential circuit, unlike
for combinational and full-scan circuits where a
combinational decoder can be used [12, 22]. Figure 5
outlines the proposed test application scheme. We exploit
the prefix-free property of the Huffman code; thus
patterns can be decoded immediately as the bits in the
compressed data stream are encountered. We next describe
the state diagram of the FSM decoder DC using
the s444 example.
Figure 6 shows the state transition diagram of DC.
The number of states is equal to the number of non-leaf
nodes in the corresponding Huffman tree. For
example, the Huffman tree of Fig. 4 has seven non-leaf
nodes, hence the corresponding FSM of Fig. 6
has seven states, S1, S2, ..., S7. The FSM receives
a single-bit input from SG, and produces n-bit-wide
test patterns, as well as a single-bit control output
TEST VEC. The control output is enabled only when
a valid test pattern for the CUT is generated by the
decoder-this happens whenever a transition is made
to state S1. The use of the TEST VEC signal ensures
that the test sequence TD is preserved and no additional
test patterns are applied to the CUT. Hence Huffman
codes provide an efficient encoding of the test patterns
Fig. 6. State transition diagram for the FSM decoder of s444.
and a straightforward decoding procedure can then be
used during test application. The trade-off involved
is the increase in test application time t since the decoder
examines only one bit of TE in each clock cycle.
Fortunately, the increase in t is directly related to the
amount of test set compression achieved-the higher
the degree of compression, the lesser is the impact on t.
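The FSM decoder DC can be modeled as a small table-driven machine; the following software sketch (state numbering and table layout are assumptions) shows how one bit is consumed per clock and how the TEST VEC pulse corresponds to the transitions that emit a pattern and return to the initial state.

#include <array>
#include <vector>

// Software model of a bit-serial Huffman FSM decoder. Each state has a
// 0-successor and a 1-successor; a transition that completes a codeword
// emits the corresponding pattern index (the TEST_VEC pulse) and returns
// to the initial state.
struct HuffmanFsm {
    // next[s][b]: next state from state s on input bit b.
    // emit[s][b]: pattern index produced on that transition, or -1 if none.
    std::vector<std::array<int, 2>> next;
    std::vector<std::array<int, 2>> emit;

    std::vector<int> decode(const std::vector<int>& bits) const {
        std::vector<int> patterns;
        int s = 0;                               // initial state (S1)
        for (int b : bits) {
            if (emit[s][b] >= 0) {               // TEST_VEC asserted
                patterns.push_back(emit[s][b]);
                s = 0;                           // back to the initial state
            } else {
                s = next[s][b];
            }
        }
        return patterns;
    }
};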
Theorem 8. The test application time t increases by a
factor of lH, where lH is the average length of a Huffman
codeword.
Proof: The state transition diagram of Fig. 6 shows
that wi clock cycles are required to apply a test pattern
Xi which is mapped to a codeword of wi bits. Hence
the test application time (number of clock cycles) is
given by t = Σ_{i=1}^{|TD|} wi, where |TD| is the total number
of patterns in the test set TD. The test application time
therefore increases by a factor Σ_{i=1}^{|TD|} wi / |TD| = lH, the
average length of a codeword.
Experimental results on test set compression in
Section 4 show that the average length of a Huffman
codeword for typical test sets is less than 2. This implies
that the increase in test application time rarely exceeds
100%. Since test patterns are applied in a BIST
environment, this increase in testing time is acceptable,
and it has little impact on testing cost or test quality.
Figure 7 shows the netlist of the decoder circuit for
s444. This circuit was generated for a test set obtained
using Gentest. The design is simplified considerably
by the presence of a large number of don't-cares in the
decoder specification, which a design automation tool
can exploit for optimization.
The cost of the on-chip decoder DC can be reduced
by noting that it is possible to share the same decoder
on a chip among multiple CUTs. The encoding problem
is now reformulated to encode the test sets of the
CUTs together. We do this by combining these test
sets to obtain a composite test set T' and applying the
encoding procedure to T' to obtain an encoded test set TE'.
Figure 8 illustrates a single sequence generator
SG' and pattern decoder DC' used to apply test sets to
multiple CUTs that have the same number of primary
inputs. Note that such sharing of the pattern decoder
is also possible if the CUTs have an unequal number
of primary inputs. The sharing is, however, more effi-
cient if the difference in the number of primary inputs
is small. The slight increase in the size of T' (com-
pared to TE) is offset by the hardware saving obtained
Fig. 7. Gate-level netlist of the FSM decoder for s444.
by decoder sharing. We next present upper and lower
bounds on the Huffman codeword length for two test
sets that are encoded jointly.
Theorem 9. Let TD1 and TD2 be test sets for two
CUTs with the same number of primary inputs and
let TD' be obtained by combining TD1 and TD2. Let
m1, N1, l1, and m2, N2, l2 be the number of unique patterns,
total number of patterns, and average Huffman
codeword length for TD1 and TD2, respectively. Let l'
be the average Huffman codeword length for TD' and
let m' and N' be defined as: m' = max{m1, m2} and
N' = N1 + N2. In addition, let pi (qi), 1 ≤ i ≤ m', be
the probability of occurrence of the ith unique pattern
Fig. 8. A BIST sequence generator and decoder circuit used to test multiple CUTs.
in TD1 (TD2). Then
log flmin 1
l N0 log2 fimax C
where .i/fimax is the largest value of fi such that N1 pi C
is the smallest value of fl such that N1 pi C N2qi
N0flpi and N1 pi C N2qi
Proof: We use the fact that H(TD1) ≤ l1 ≤ H(TD1) + 1, where H(TD1)
and H(TD2) are the entropies of TD1 and TD2, respec-
tively.
tively. The probability of occurrence of the ith unique
pattern in TD' is (N1·pi + N2·qi)/N'. The entropy of TD'
is therefore given by
H.T / D log
It follows from the theorem statement that
Therefore,
N0 .l1 log2 fimax/ C N0 .l2 log2 fimax/
D log fimax
Therefore, l0 [.N1l1 C N2l2/=
Next we prove the lower bound. Once again from
the theorem statement,
flmin
Note that the lower bound is meaningful only if pi D
requires that TD1 and TD2 have the same set of unique patterns
and therefore m' = m1 = m2.
Therefore,
Now, H(TD1) ≥ l1 - 1 and H(TD2) ≥ l2 - 1. Therefore,
l' ≥ H(TD') ≥ [(N1·l1 + N2·l2)/N'] - log2 βmin - 1.
A tighter lower bound on l0 is given by the following
corollary to Theorem 9.
Corollary 2. Let l1 D H.TD1/ C -1 and l2 D H.TD2/
C -2; and let flmin be defined as in Theorem 9; where
As a special case, if N1 D N2 then 12 .l1 Cl2/ log2
For example, let TD1 and TD2 be test sets for two
different CUTs with five primary inputs each. Suppose
they contain the unique patterns shown in Fig. 9, with
N1 = N2. The probabilities of occurrence of patterns
in TD1 and TD2 satisfy (2); therefore l1 = Σ_{i=1}^{4} i·pi +
4·p5 = 1.25. Similarly, l2 = 1.31. From Theorem 9,
αmax = 0.58, and βmin = 3.5. Since N1 = N2, the
bounds on l0 are given by 12 .l1 C l2/ log2 flmin 1
Therefore, 1:03
Fig. 9. Unique patterns and their probabilities of occurrence
for the example illustrating Theorem 9.
l' ≤ 3.55. Now, the patterns in T' also satisfy (2), and
therefore l' = Σ_{i=1}^{4} i·p'i + 4·p'5 = 1.33, which clearly
lies between the calculated bounds, where p'i is the
probability of occurrence of pattern Xi in TD'.
Experimental results on test set encoding and decoder
overhead in Section 4 show that it is indeed possible
to achieve high levels of compression while reducing
decoder overhead significantly if test sets for
two different CUTs with the same number of primary
inputs are jointly encoded and a single decoder DC' is
shared among them.
Comma Coding
We next describe test set compression and test application
using Comma encoding. Once again, we illustrate
the encoding and decoding scheme using the s444
example.
The unique patterns in the test set are first identified,
and sorted in decreasing order of probability of occur-
rence. Codewords are then generated for the patterns
according to the Comma code construction procedure
described in Section 2. Comma codewords generated
for the unique patterns in the s444 test set are listed in
Table 2. The probabilities of occurrence of test patterns,
shown in Table 2 clearly satisfy (2) in Theorem 3, and
therefore the encoding is near-optimal. The Comma
encoded test set has 1.2126 bits per codeword, and requires
2281 bits for storage, an increase of only one
bit from the optimally (Huffman) encoded test set described
in Section 3. Hence the reduction in test set
compression arising from the use of Comma codes instead
of Huffman codes for this example is only 0.02%.
The slight decrease in test set compression due to
the use of the Comma code is offset by the reduced
complexity of the pattern decoder DC. Figure 10 illustrates
the pattern decoder for the s444 circuit test set.
The decoder is constructed using a binary counter and
combinational logic that maps the counter states to the
test patterns. The test application scheme is the same
as that in Fig. 5 for the Huffman decoder.
The inverted input bit is used to generate the
TEST VEC signal which ensures that the CUT is
clocked only when a 0 is received. TEST VEC is also
gated with the clock to the CUT and used to reset the counter on the falling edge of the clock. Bits with value 1 received from SG therefore result in the flip-flops of the counter being clocked to the next state, while 0s (the terminating commas present at the end of each codeword) reset the counter to its initial state after half a clock cycle. The test pattern can thus be latched by the CUT before the counter is reset.

Fig. 10. The Comma pattern decoder for the s444 test set.

Comma decoders are simpler to implement than Huffman de-
coders, and binary counters already present for normal
operation can be used to reduce overhead. As in the
case of Huffman coding (Theorem 8), the increase in
testing time due to Comma coding equals the average
length of a codeword.
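A behavioral model of this decoding scheme is sketched below (a sketch only, not the synthesized decoder of Fig. 10; how the final all-ones codeword is recognized is an assumption of this illustration).

    def comma_decode(bitstream, patterns):
        # patterns: unique test patterns sorted by decreasing probability of occurrence
        out, counter = [], 0
        for bit in bitstream:
            if bit == "1":
                counter += 1                      # counter clocked to the next state
                if counter == len(patterns) - 1:  # assumed handling of the all-ones codeword
                    out.append(patterns[counter])
                    counter = 0
            else:                                 # a 0 (comma): apply the selected test vector
                out.append(patterns[counter])
                counter = 0                       # counter is reset
        return out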
Run-Length Encoding
Finally, we describe run-length encoding of the statistically
encoded test sequence TE to achieve further
compression. We exploit the fact that sequences of
identical test patterns (runs) are common in test sets
for sequential circuits having a high ratio of flip-flops
to primary inputs. For example in the test set for s444,
runs of the pattern 000 occur with lengths of up to
70. Huffman and Comma encoding exploit the large
number of repetitions of patterns in the test sequence
without directly making use of the fact that there are
many contiguous, identical patterns. Run-length encoding
exploits this property of the test sequences; it therefore complements statistical encoding. Huffman and Comma encoding transform the sequence of test patterns into a compressed serial bit stream, and in the test set, each occurrence of the test pattern 000 is replaced by a 0 (Table 2). Therefore, long runs of 0s
are present in the statistically compressed bit stream, which can be further compressed using run-length coding.

Table 3. Distribution of runs in the Huffman encoded test set for the s444 circuit (run-length vs. number of runs of 0s and 1s).
Run-length coding is a data compression technique
that replaces a sequence of identical symbols with a
copy of the repeating symbol and the length of the se-
quence. For example, a run of 5 0s (00000) can be
encoded as (0,5) or (0,101). Run-length encoding has
been used recently to reduce the time to download test
sets to ATE across a network [23, 24]. We improve
upon the basic run-length encoding scheme by considering
only those runs that have a substantial probability
of occurrence in the statistically encoded bit stream. A
unique symbol representing a run of a particular length
(and the corresponding bit) is then stored. The value of
the repeating bit is generated from the bits representing
the length of the run during decoding. We therefore
obviate the need to store a copy of the repeating
bit.
We describe our run-length encoding process using
the s444 example. An analysis of its Huffman encoded
test set yields the distribution of runs shown in
Table 3. Encoding all runs would obviously be expensive
(4 bits would be required for each run) since
very few instances of (0,4), (0,5) and (1,3), and no
instances of (1,5), (1,6), (1,7) or (1,8) exist. We therefore
use combinations of 3 bits (000, 001, ..., 111) to encode the 8 most frequently occurring runs: (0,1), (0,2), (0,3), (0,7), (0,8), (1,1), (1,2) and (1,4). The less-
frequently occurring runs (0,4), (0,5), (0,6) and (1,3)
are divided into smaller consecutive runs for encoding.
For example (0,5) is encoded as (0,3) followed by (0,2).
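A minimal sketch of this selective run-length encoder is shown below. It assumes a greedy split of unencodable runs into the largest encodable pieces, which reproduces the (0,5) = (0,3) + (0,2) example; the 3-bit symbol assignment is illustrative except for 011 = (0,7), which is the mapping mentioned below for Fig. 11(a).

    def run_length_encode(bits, table):
        # table maps (bit_value, run_length) -> 3-bit symbol; runs with no symbol are split
        # greedily into the largest encodable pieces, e.g. (0,5) -> (0,3) + (0,2)
        lengths = {b: sorted({l for (bb, l) in table if bb == b}, reverse=True) for b in "01"}
        out, i = [], 0
        while i < len(bits):
            b = bits[i]
            run = 1
            while i + run < len(bits) and bits[i + run] == b:
                run += 1
            i += run
            while run > 0:
                piece = next(l for l in lengths[b] if l <= run)  # assumes a length-1 symbol exists
                out.append(table[(b, piece)])
                run -= piece
        return "".join(out)

    # 3-bit symbols for the 8 most frequent runs (only 011 = (0,7) is fixed by the text)
    SYMBOLS = {("0", 1): "000", ("0", 2): "001", ("0", 3): "010", ("0", 7): "011",
               ("0", 8): "100", ("1", 1): "101", ("1", 2): "110", ("1", 4): "111"}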
Figure 11 shows run-length encoding applied to a portion of the Huffman encoded s444 test set. The
encoded runs are stored in a ROM and output to a run-length
decoder. The run-length decoder provides a single
bit in every clock cycle to the Huffman (or Comma)
decoder for test application.
The run-length decoder consists of a binary down
counter, and a small amount of combinational logic.
Figure 12 illustrates the run-length decoder for the s444 test set. The bits used to encode a run (e.g.,
011 for (0,7), Fig. 11(a)) are first mapped to the run-length in binary. The run-length is loaded into
the counter which outputs the first bit of the run. The
counter then counts down from the preset value 110 to
000, sending a bit (0, for this example) to the Huffman
decoder in every clock cycle. When the counter reaches
000, the NOR gate output becomes 1, enabling the
ROM to output the bits representing the next run. Since
one bit is received by the Huffman decoder in every
clock cycle, run-length decoding does not add to testing
time.
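A behavioral model of the run-length decoder described above is sketched below (a sketch of the counter's behavior only, not the hardware of Fig. 12). Emitting one bit per clock cycle is what keeps run-length decoding from adding to the testing time.

    def run_length_decode(symbols, table):
        # Inverse of the encoding table: 3-bit symbol -> (bit_value, run_length)
        inverse = {code: bit_and_len for bit_and_len, code in table.items()}
        out = []
        for sym in symbols:
            bit, length = inverse[sym]
            counter = length           # down counter preset to the run length
            while counter > 0:
                out.append(bit)        # one bit to the Huffman/Comma decoder per clock cycle
                counter -= 1           # when the counter reaches 0, the ROM supplies the next run
        return "".join(out)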
4. Experimental Results
In this section, we present experimental results on test
set encoding for several ISCAS 89 benchmark circuits
to demonstrate the saving in on-chip storage achieved
using Huffman, Comma and run-length encoding. We
Fig. 11. Run-length encoding applied to a portion of the Huffman encoded s444 test set: (a) 3-bit encoding
for 8 types of runs, (b) bit stream to be encoded, and (c) run-length encoded data.
Fig. 12. Run-length decoder for the s444
test set.
consider circuits in which the number of flip-flops f is
considerably greater than the number of primary inputs
n; we denote this ratio f/n by γ. Table 4 lists the values of γ for the ISCAS 89 circuits, with circuits having a high value of γ shown in bold. Such circuits are especially
hard to test because of the relatively large number
of internal states and few primary inputs.

Table 4. The ratio γ of the number of flip-flops to the number of primary inputs, and |TD|, the length of the HITEC test sequences for the ISCAS 89 circuits.

From Table 4 we see that these circuits typically require longer sequences
of test patterns. On the other hand, they are
excellent candidates for our encoding approach.
Several other ISCAS 89 benchmark circuits do not
have a high value of γ, and are therefore more suitable for scan-based testing than for the proposed approach of encoding non-scan test sets. We do not present results for these circuits; however, statistical encoding of
full-scan test sets for these circuits, on the lines of the
proposed approach, has recently been shown to be effective
in reducing the amount of memory required for
test storage [25].
We performed experiments on test sets for single-
stuck line (SSL) faults obtained from the Gentest
ATPG program, as well as the HITEC, GATEST and STRATEGATE test sets from the University of Illinois
[26]. We measured the fault coverage of these test sets
using the PROOFS fault simulator [27] and ensured
that the coverage is comparable to the best-known fault
coverage for these circuits. We next present results on
the compression achieved using Huffman and Comma
coding for all four test sets. Table 5 compares the number
of bits required to store the encoded test set TE with
that required to store the corresponding unencoded test
set TD. The number of bits required by our scheme is
moderate, substantially less than that required to store
unencoded test sets, and reduces significantly when the
same test set can be shared among multiple CUTs of
the same type included on a chip, as in core-based DSP
circuits [14]. The saving in SG memory presented in
Table 5 is substantial, and in most cases, the difference
in compression due to the use of Comma coding instead
of Huffman coding is very small. In Table 6, we
show that further compression is achieved by applying
run-length coding to TE. We present results on run-length
encoding for the s382 and s444 circuits using
the Gentest test set.
The test application time required is considerably
less than that required for pseudorandom testing, even
though the number of clock cycles C is greater than
the number of patterns in TD (C = lH·|TD| for Huffman coding and C = lC·|TD| for Comma coding). Table 7
compares the number of test patterns applied, the number
of clock cycles required, and the fault coverage
obtained for our method, with the corresponding
figures reported recently for two pseudorandom testing
schemes [5, 6]. The test application time required by
our method is much less than for the pseudorandom
testing method of [5]. We also achieve higher fault
coverage for all circuits.
Table 5. Experimental results on test set compression for ISCAS 89 circuits with a high value of γ. (n: no. of primary inputs; m: no. of unique test patterns; Tbits: total no. of bits in TD; lH, lC: average Huffman and Comma codeword lengths; Hbits: no. of bits in TE after Huffman encoding; Cbits: no. of bits in TE after Comma encoding; HE, CE: percentage compression. Comma coding is not applicable for the test set of s35932, because the probabilities of occurrence of the test patterns do not satisfy (2) given in Theorem 4.)
Table 6. Percentage compression achieved by run-length coding after applying Huffman and Comma encoding to TD. (HRbits: no. of bits in the encoded test set after Huffman and run-length encoding; CRbits: no. of bits in the encoded test set after Comma and run-length encoding.)
Table 7. Number of clock cycles C required and fault coverage obtained using pseudorandom testing, compared with the corresponding figures using precomputed deterministic test sets. ([5, 6]: recently proposed pseudorandom BIST methods; Det: deterministic testing using precomputed test sets; the best fault coverage achieved by precomputed deterministic testing is reported; results for some circuits were not reported in [5, 6].)
Table 8. Literal counts of the Huffman and Comma decoders for the four test sets. (Gen: Gentest; HIT: HITEC; GAT: GATEST; STRAT: STRATEGATE.)
We next present experimental results on the Huffman
and Comma decoder implementations. We designed
and synthesized the FSM decoders using the Epoch
CAD tool from Cascade Design Automation [28]. The
low to moderate decoder costs in Table 8 show that
the decoding algorithm can be easily implemented as a
BIST scheme. Note that the largest benchmark circuit
s35932 requires an extremely small overhead (synthe-
sized ROM area is 0.53% of CUT area, and decoder
area is 6.18% of CUT area) to store the encoded test
set and decoder, thus demonstrating that the proposed
approach is scalable and it is feasible to incorporate the
encoded test set on-chip for larger circuits.
Note that, while Huffman and Comma encoding reduce
the number of bits to be stored, the serialization
of the ROM may increase the hardware requirements
for ROM address generation. In a conventional
fixed-length encoding scheme, the size of the counter
required for ROM address generation is ⌈log2 |TD|⌉ bits, while an encoded ROM requires a ⌈log2(|TD|·l)⌉-bit
counter for address generation, where l is the average
codeword length. However, since l is small, this
logarithmic increase in counter size is also small, e.g.,
the size of the counter does not change for s444, while
it increases from 7 to 10 for s35932. The hardware
overhead figures in Table 8 do not include this small
increase in counter size.
It may be argued that a special-purpose, minimal-
state FSM may be used to produce a precomputed
sequence. However, we have seen that the overhead
of such FSMs is prohibitive, especially for long test
sequences. In addition, such a special-purpose FSM
would be specific to a single CUT; on the other hand, the
decoder DC for the proposed scheme is shared among
multiple CUTs, thereby reducing overall TGC overhead.

Table 9. Literal counts for the proposed technique compared with pseudorandom testing (decoder cost, total deterministic TGC cost, pseudorandom TGC cost [6], and number of test points [6]).

Table 9 compares the overhead of the proposed deterministic BIST scheme with the overhead of a pseudorandom BIST method [6] for several circuits. The overhead for the pseudorandom method was obtained by mapping the gate count figures from [6] to the literal counts of standard cells in the Epoch library. While the deterministic TGC requires greater area than the pseudorandom
TGCs, the difference is quite small, and thus
may be acceptable if higher fault coverage and shorter
test times are required. Note also that the pseudorandom
method requires the addition of a large number
of observability test points. These require a gate-level
model of the CUT, as well as additional primary outputs
and routing. Moreover, they may also increase the
size of the response monitor at the CUT outputs. The
proposed TGCs require no circuit modification, thus
making them more applicable to testing core-based designs
using precomputed test sequences.
Finally, we present experimental results for test set
compression and decoder overhead, using a single
decoder to test several CUTs on a chip. Table 10
shows that the levels of compression obtained for combined
test sets are comparable to those obtained for the
individual test sets. In fact, in several cases the overall
compression is higher than that obtained for one of the
individual test sets. The percentage area overhead required
for the decoder reduces significantly, because a
single decoder can now be shared among several CUTs.
Note that in the case of the Comma decoders, a major
part of the overhead is contributed by the binary coun-
ters. For example, in the Comma decoder for the {s382, s444} pair, the counter accounts for most of the overhead, while the combinational logic represents only a small fraction of the overhead. If the counter is also used for normal
operation of the system, then the BIST overhead
will reduce further. The test application technique is
therefore clearly scalable with increasing circuit com-
plexity. The decoder overhead also tends to decrease
with an increase in γ. This clearly demonstrates that the proposed test technique is well suited to circuits for which γ is high.
5. Conclusion
We have presented a novel technique for deterministic built-in pattern generation for sequential circuits. This approach is especially suited to sequential circuits that have a large number of flip-flops and relatively few primary inputs, and for circuits such as embedded cores, for which gate-level models are not available. We have shown that statistical encoding of precomputed test sequences leads to effective compression, thereby allowing on-chip storage of encoded test sequences. We have also shown that the average codeword length for the non-optimal Comma code is nearly equal to the average codeword length for the optimal Huffman code if the test sequence satisfies certain properties. These are generally satisfied by test sequences for typical sequential circuits, therefore Comma coding is near-optimal in practice.

Our results show that Huffman and Comma encoding of test sequences, followed by run-length encoding, can greatly reduce the memory required for test storage. The small increase in testing time is offset by the high degree of test set compression achieved. Furthermore, testing time is considerably less than that for pseudorandom methods. We have developed efficient low-overhead pattern decoding methods for applying the test patterns to the CUT. We have also shown that the overhead can be reduced further by using a single decoder to test multiple CUTs on the same chip. The proposed technique thus offers a promising BIST methodology for complex non-scan and partial-scan circuits for which precomputed test sets are readily available.

Table 10. Percentage compression for test sets encoded jointly (Huffman and Comma, for the Gentest, HITEC, GATEST and STRATEGATE test sets).

Table 11. Decoder cost in literals and percentage decoder overhead for a single decoder shared among several CUTs. For the {s382, s444} pair, the decoder costs (Gen, HIT, GAT, STRAT) are 48, 52, 44 and 44 literals, and the corresponding decoder overheads are 6.72%, 7.29%, 6.18% and 6.21%.

9. M.S. Hsiao, E.M. Rudnick, and J.H. Patel, Alternating Strategies for Sequential Circuit ATPG, Proc. European Design and Test Conf., 1996, pp. 368-374.
10. T.M. Niermann and J.H. Patel, HITEC: A Test Generation Package for Sequential Circuits, Proc. European Design Automation Conf., 1991, pp. 214-218.
11. D.G. Saab, Y.G. Saab, and J.A. Abraham, Automatic Test Vector Cultivation for Sequential VLSI Circuits Using Genetic Algorithms, IEEE Trans. on Computer-Aided Design, Vol. 15, pp. 1278-1285, Oct. 1996.
12. K. Chakrabarty, B.T. Murray, J. Liu, and M. Zhu, Test Compression for Built-in Self Testing, Proc. Int. Test Conf., 1997, pp. 328-337.
13. F. Brglez, D. Bryan, and K. Kozminski, Combinational Profiles of Sequential Benchmark Circuits, Proc. Int. Symp. on Circuits and Systems, 1989, pp. 1929-1934.
14. M.S.B. Romdhane, V.K. Madisetti, and J.W. Hines, Quick-Turnaround ASIC Design in VHDL: Core-Based Behavioral Synthesis, Kluwer Academic Publishers, Boston, MA, 1996.
15. J.P. Hayes, Computer Architecture and Organization, 3rd ed., McGraw-Hill, New York, NY, 1998.
16. G. Held, Data Compression Techniques and Applications: Hardware and Software Considerations, John Wiley, Chichester, West Sussex, 1991.
17. M. Jakobssen, Huffman Coding in Bit-Vector Compression, Information Processing Letters, Vol. 7, No. 6, pp. 304-307, Oct. 1978.
18. T.M. Cover and J.A. Thomas, Elements of Information Theory, John Wiley, New York, NY, 1991.
19. M. Mansuripur, Introduction to Information Theory, Prentice-Hall, Englewood Cliffs, NJ, 1987.
20. V. Iyengar and K. Chakrabarty, An Efficient Finite-State Machine Implementation of Huffman Decoders, Information Processing Letters, Vol. 64, No. 6, pp. 271-275, Jan. 1998.
21. D.H. Greene and D.E. Knuth, Mathematics for the Analysis of Algorithms.
--TR
--CTR
A. Chandra , K. Chakrabarty, Efficient test data compression and decompression for system-on-a-chip using internal scan chains and Golomb coding, Proceedings of the conference on Design, automation and test in Europe, p.145-149, March 2001, Munich, Germany
Anshuman Chandra , Krishnendu Chakrabarty, Test Data Compression and Test Resource Partitioning for System-on-a-Chip Using Frequency-Directed Run-Length (FDR) Codes, IEEE Transactions on Computers, v.52 n.8, p.1076-1088, August
Anshuman Chandra , Krishnendu Chakrabarty, Test Resource Partitioning for SOCs, IEEE Design & Test, v.18 n.5, p.80-91, September 2001
Michael J. Knieser , Francis G. Wolff , Chris A. Papachristou , Daniel J. Weyer , David R. McIntyre, A Technique for High Ratio LZW Compression, Proceedings of the conference on Design, Automation and Test in Europe, p.10116, March 03-07,
Ismet Bayraktaroglu , Alex Orailoglu, Concurrent Application of Compaction and Compression for Test Time and Data Volume Reduction in Scan Designs, IEEE Transactions on Computers, v.52 n.11, p.1480-1489, November
A. Touba, Test data compression using dictionaries with selective entries and fixed-length indices, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.8 n.4, p.470-490, October
Anshuman Chandra , Krishnendu Chakrabarty, Analysis of Test Application Time for Test Data Compression Methods Based on Compression Codes, Journal of Electronic Testing: Theory and Applications, v.20 n.2, p.199-212, April 2004 | sequential circuit testing;statistical encoding;run-length encoding;embedded-core testing;BIST;pattern decoding;huffman coding;comma coding |
608855 | Structural Fault Testing of Embedded Cores Using Pipelining. | The purpose of this paper is to develop a global design for test methodology for testing a core-based system in its entirety. This is achieved by introducing a bypass mode for each core by which the data can be transferred from a core input port to the output port without interfering the core circuitry itself. The interconnections are thoroughly tested because they are used to propagate test data (patterns or signatures) in the system. The system is modeled as a directed weighted graph in which the accessibility (of the core input and output ports) is solved as a shortest path problem. Finally, a pipelined test schedule is made to overlap accessing input ports (to send test patterns) and output ports (to observe the signatures). The experimental results show higher fault coverage and shorter test time. | Introduction
Progress in deep submicron VLSI technology enables the integration of large predefined macros or cores
together with user defined logic (UDL) into a single chip. This leads to a design paradigm shift from
single ASIC design to system-on-chip design with full scale leverage of third-party intellectual property
(IP)[BhGV96][De97]. There are several advantages of core based design such as reduction in the overall system
design time, productivity increase, accelerating time-to-market, and increasing competitive superiority.
Design and test of core-based systems is a very important and challenging problem facing the semiconductor
industry for the next several years. A major difficulty concerns accessibility of embedded cores from the I/O
terminals of the system. This entails mapping of the stand-alone test requirements, provided by the core
vendor, into the embedded cores. The basic technique suggested by many R&D groups is to access each
embedded core for testing in isolation from the others [KoWa97, Vsi97, BhGV96]. However, there are a
number of disadvantages with the isolation approach. Isolation testing does not test the system-on-chip as a
whole, for example it does not address the testing of interconnects and interfaces between cores. It does not
consider the interaction of cores for testing such as the effect of testing one core on surrounding cores, or vice
versa (the surrounding cores also may affect the core under test). Implementing protection safeguards will be
very costly increasing the test overhead of the isolation method even further [ToPo97].
1.1 Background
Loosely speaking, a core is a highly complex logic block which is fully defined in terms of its behavior, and which is predictable and reusable [ChPa96]. Cores are distinguished into several categories in terms of design flexibility,
IP protection, test development, programmability and other characteristics. Soft cores are reusable blocks in
terms of synthesizable RTL description. Firm Cores are reusable blocks supplied in netlist of library cells
and for a range of technologies. Finally, Hard cores are reusable blocks optimized in terms of power and
performance, and supplied in layout form for a specific technology. Clearly, soft cores are the most flexible
type but hard cores provide most IP protection. From test development viewpoint, cores can be mergeable
or non-mergeable. Mergeable cores use an expandable test scheme and thus they can be merged with other
mergeable cores so that the composite structure is tested as a whole. In terms of programmability, cores can
be characterized in fully programmable (e.g. microprocessor cores), partially programmable (e.g. Application-Specific
Integrated Processors - ASIP), and little or non-programmable (e.g. ASIC cores).
There are a number of test methods that apply to ASIC cores based on Design for Test (DFT) techniques,
e.g. Scan, BIST, ScanBIST, Test points, or without DFT using precomputed testing. To facilitate test
integration there are a number of preliminary proposals from the industry, e.g using test sockets, Macro test
or core-level boundary scan [KoWa97]. We remark that the above issues are not decoupled of each other, in
fact they are related and can have significant effect on system testing. Also, the above issues apply to both
soft and hard cores. However, hard cores limit the flexibility of the designer significantly for system-level test
solutions.
The basic testing strategy suggested by industrial groups for system-on-chip is to test each embedded core
one by one rather than the system as a whole. This strategy requires accessibility, i.e. controllability and
observability of each core I/Os from the system I/Os. There are number of core isolation techniques proposed
[ImRa90] to ensure accessibility, and some of them may provide a good match to a particular core's internal test method:

• MUX-based isolation of each core, mapping precomputed tests;
• A boundary scan type of approach accessing the cores within the system-on-chip (this can apply to cores with embedded Scan or BIST DFT);
• A test wrapper or collar DFT hardware inserted for isolation.
In isolation method, a global BIST controller is usually employed to test schedule cores with embedded
BIST structures to shorten the test time. A test bus has also been proposed to affect accessibility. All of these
techniques proposed have a number of disadvantages. They may incur significant overhead of isolation-related
test structures. Some performance and possibly power consumption penalty is also incurred due to these
structures. Moreover, the isolation techniques do not address testing the system as a whole. Specifically, the
faults in the interfaces (interconnects, and user logic) between cores remain undetected.
The above shortcomings of isolation techniques motivate a coordinated approach to testing system-on-chip.
The basic goal is to test the system as a whole, this means testing the cores themselves as well as testing their
interface. Any type of cores in the system (e.g. soft, hard, etc.) for which a test data set is available (predefined
or deterministically/empirically computable) can be handled by our method. The main contribution of our
work is twofold. First we define the "bypass" for each core by which the data can be transferred from a core
input port to the output port without interfering with the core circuitry itself. Since the test data travel
in the existing interconnections, the core interface are thoroughly tested. Second, we model the core-port
accessibility problem as finding the shortest path in a directed weighted graph and minimize the testing time
by overlapping the time consumed to access paths. Conceptually, our method is a generalization of the scan
approach at the system level by allowing the use of system interconnects, with various bit widths, for test data
distribution and signature collection.
This paper is organized as follows. Section 2 presents the bypass mode and test overhead cost. Section
3 models the accessibility problem as finding the shortest path in a directed graph. Section 4 discusses the
algorithm to overlap execution of those paths to minimize the test time and details a design example step by
step. The experimental results are in Section 5. Finally, the concluding remarks are summarized in section 6.
2 Using Bypass Mode
2.1 Core Environment
We distinguish faults with respect to the core environment. Given Core A, by definition, the environment of
Core A is all the input/output connections from/to primary inputs/outputs and other cores in the system.
Using thicker lines, we have shown the environment of Core 1 in the system pictured in Figure 1.
Figure 1: Environment of Core 1.
We also differentiate between isolated core and isolated core environment. Isolated core refers to the core
itself (shaded core in Figure 1 for example), without any other components of the system, which is isolated
from the system by a mechanism, such as multiplexors or tri-state buffers and so on. On the other hand, as
Figure
1 shows for Core 1, the isolated core environment includes the core and all connections to/from it,
all isolated by an appropriate mechanism. This distinction plays an important role in our discussion since
for example a fault in the interconnect of a core can not be captured in isolated core testing while it may be
caught when testing the isolated core environment.
2.2 Input/Output Test
The overall objective is to use the existing wires and topology of the system to establish a path to carry test
data between two test points in the system, called source and sink. In core testing, there are two types of test
paths, that is input test path to access core input ports from system primary inputs and output test path to
access core output ports from the system primary outputs. Figure 2(a) shows a core under test, these two paths, and their corresponding source and sink. It also shows a general view to establish a route
between two test points using the existing wires. In Figure 2(b) the blowup picture of a core in input/output
test path (Core k) is shown. We symbolically showed that the inputs are bypassed to the output without
interfering the core circuitry which are used in the normal mode. We will shortly elaborate on matching
the bit widths (packetization) and the real implementation of bypass circuitry. The basic idea of having the
bypass mode for each core is to have an independent route around a core to carry test data (e.g. predefined
test patterns or core signatures) between port i (m i k bit wide) and port j (n j k bit wide) of that core (Core k).
Our goal is to establish the shortest path (fastest route) to carry the packets of test data between source
and sink. Note that by this formulation, accessibility of the core inputs from system primary inputs and of the
core outputs from system primary outputs, are similar problems, i.e. identifying the shortest path between
source and sink.
Figure 2: Test paths: (a) Input/output test paths; (b) A typical core in test paths.
The important benefit of the bypass is to take advantage of the existing connections among the cores and
the existing wires to transfer multi-bit data from source to sink. This, in the worst case, will be equivalent to
the conventional scan which transfers the patterns serially using a separate routing in the system. Considering
the multi-bit interconnections among the cores and the fact that we do not use a separate scan chain, we
expect the average case to be quite superior in terms of time and comparable in terms of hardware overhead.
The "bypass" mode that we use here is different from identity mode (I-mode) introduced by Abadir and
Breuer [AbBr85] in many ways. In [AbBr85] the authors define the I-mode and I-path for RTL components
(e.g. ALU's, MUXes and Registers) to transfer data unaltered from one port to another. For example, for
an adder having one data value as zero creates the I-mode. In our approach, we have physical bypass routes
through which the data is transferred from one point to another. Additionally, the I-mode and many other
mode definitions, including transfer mode (T-mode) and sensitized mode (S-mode), all control components
to efficiently realize one form of partial scan test. These modes and I-paths are used for transmitting data
from scan registers to the input ports of a block under test and transmitting signatures to scan registers to
be shifted out. Our approach does not use scan registers at all. It offers totally different test methodology
by allowing the use of system interconnects, with various bit widths, for test data distribution and signature
collection.
We would like also to point out that many core providers, such as Philips [MABD98][Mari98], have already
devised the bypass mode as a part of P1500 core test standardization. Such standard is well justified by
providing considerable flexibility to the designer with a reasonable cost. Additionally, the bypass mode can
be easily incorporated within soft and hard cores by core providers [MABD98]. If a core does not come with
bypass mode, this mode and the required bit-match circuitry (explained in this section) should be added
externally by a designer for the benefit of testing.
2.3 Packetizing Technique for Data Transfer
Figure 2(a) clearly shows that the bit widths of the inputs/outputs of cores change along a path between two test points. This requires some form of bit matching. Let us assume that we need to transfer a b-bit pattern between source and sink.
In general, to transfer b-bit test data from port i (m_{i_k} bits) to port j (n_{j_k} bits) of Core k we need to packetize the data (to match the available bit width) and send it in several iterations. For example, a core whose input port is one quarter as wide as the test data receives that data in 4 iterations. We assume that data packetization and transfer is synchronized with the system clock. Note also that from Core k's point of view it does not matter how the b-bit data gets to its inputs. Whether it all comes at once or through many packets, the number of iterations (cycles) that Core k consumes to bypass data will not change. However, it will affect the scheduling of bypass activities. Before we discuss this, we present the cost (hardware and time overhead) due to the bypass and bit-match circuits. For simplicity, we have not shown the i_k and j_k subscripts in the bit width variables.

Figure 3: Serial-to-Parallel (S/P) bit match circuit.
2.3.1 Case 1: Input ≤ Output (m ≤ n) - Serial-to-Parallel
Figure 3 shows the first type of the bit-match circuit, required to assemble a larger pattern from different packets of data. The circuit consists of ⌈n/m⌉ stages of cascaded m-bit registers (e.g. D flip-flops) controlled by the same clock. The circuit, which is like a shift register bank of m bits, has a serial-to-parallel (S/P) behavior whose worst case (in terms of time) corresponds to m = 1, equivalent to the traditional scan-in discipline. Note that ⌈n/m⌉ packets of m-bit data will be parallelized to n bits in ⌈n/m⌉ cycles. For b-bit data we need to iterate ⌈b/n⌉ times. Briefly, based on this implementation, to bypass b-bit data from an m-bit input port to an n-bit output port of a core we have the following cost values:

Time Cost: ⌈b/m⌉ cycles;  Area Cost: ⌈n/m⌉ · m DFFs.   (1)

Here DFF denotes a D-type flip-flop, which is one possible implementation of a 1-bit register. These equations clearly show that depending on b the participating core may spend less or more time in bypassing data and
some of the output wires (out of n) may not be needed. For example, suppose m = 2 and n = 8. If b = 2, the data is transferred in 1 cycle using two out of the eight available output wires. If b = 16, the data is transferred in 8 cycles using all eight wires.
2.3.2 Case 2: Input > Output (m > n) - Parallel-to-Serial
Figure 4 shows the second type of the bit-match circuit, required to disassemble a large pattern into different packets of data. The circuit consists of n stages of parallel ⌈m/n⌉-to-1 multiplexors and 1-bit registers (e.g. D flip-flops) controlled by the same clock. The circuit has a parallel-to-serial (P/S) behavior whose worst case (in terms of time) corresponds to n = 1, equivalent to the traditional scan-out discipline. The multiplexors need ⌈log2(⌈m/n⌉)⌉ bits for their select lines. Since the actual data to be transferred depends on b, we assume that a self-starting counter (controlled by the test controller) controls the number of iterations actually needed. Note that ⌈m/n⌉ packets of n-bit data will be produced in ⌈m/n⌉ clock cycles. For b-bit data we need to iterate ⌈b/m⌉ times. Briefly, based on this implementation, to bypass b-bit data from an m-bit input port to an n-bit output port of a core we have the following cost values:
Figure 4: Parallel-to-Serial (P/S) bit match circuit.
Time Cost: ⌈b/n⌉ cycles;  Area Cost: n ⌈m/n⌉-to-1 multiplexors, n DFFs, and a ⌈log2(⌈m/n⌉)⌉-bit self-starting counter.   (2)
Again, depending on b, the participating core may spend less or more time in bypassing data, and some of the input wires or MUX inputs (out of m) may not be needed. For example, suppose m = 8 and n = 2. If b = 2, the data is transferred in 1 cycle using two out of the eight available input wires. If b = 16, the data is transferred in 8 cycles using all eight wires.
Equations 2 and 1 can be joined together as follows:
Time Cost: ⌈b / min{m, n}⌉ cycles;  Area Cost: as given by Equation (1) when m ≤ n and by Equation (2) when m > n.   (3)
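A one-line model of this time cost is sketched below (the function name is a choice of this illustration; the assertions simply exercise the formula).

    from math import ceil

    def bypass_cycles(b, m, n):
        # Equation (3): cycles for a core to bypass b-bit test data from an m-bit input port
        # to an n-bit output port
        return ceil(b / min(m, n))

    assert bypass_cycles(16, 4, 8) == 4    # 16-bit data, narrower port is 4 bits wide
    assert bypass_cycles(16, 16, 16) == 1  # both ports as wide as the data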
2.3.3 Complete Bypass Circuitry
In addition to the S/P or P/S bit match circuit, the complete bypass circuitry includes tri-state buffers and
few additional logic gates to control the core activities as shown in Figure 5. Tri-state buffers are needed to
protect the core when it bypasses data. This is a safeguard mechanism to ensure that in bypass mode the core
does not receive any new data and so does not change the state.
Note that many input ports can be bypassed to a single output port, but only one at a time; logically speaking, at most one bypass_{ij_k} signal may be active for a given output port j at any time. Although in Figure 5 we show the bypass circuitry for bypassing port i to port j only, the "· · ·" indicates that we may have additional bypass routes. These will be decided by the shortest path algorithm that we will explain in Section 3. In summary, any new route to bypass an input port to port j adds one AND gate, one buffer and one entry to the NOR gate. The introduction of the glue logic
among cores in order to bypass test data may slightly degrade the timing characteristics of the core due to its
additional delay. In our experiments, using 0.8-micron CMOS library in the COMPASS Design Automation
tool [Comp93], this additional delay was less than 3.5 nano-second.
Figure 5: Core bypass circuit (only port i to port j bypass is shown).
Table 1: Different modes of each modified core (columns: Mode, stop_k, bypass_k, Function of Core k; the modes are Run and Bypass).
When bypass_k = 0, the core accepts the inputs and forwards its output to the output port. The core performs its functionality in this mode, thus we call it "normal mode". When a core is under test, it still has to perform its normal functionality. When bypass_k = 1, on the other hand, the circuit disconnects the input to the core and bypasses data to the output port based on the selected route determined by bypass_{ij_k}. The test controller has to make sure (by activating at most one bypass_{ij_k} per output port) that there will not be any conflict of bypassing different input ports to one output port simultaneously. Signal stop_k gates the clock: if stop_k = 0 the global "clock" is allowed to reach the core, and if stop_k = 1 the clock is masked, so the core does not leave its present state. This signal can be used in interactive testing to freeze the system temporarily to read out the core outputs. This issue is not pursued in this paper. Note that, for the purpose of the normal or bypassing operations, we could have used a single control signal in place of bypass_k and stop_k. However, we intentionally separated them
for each core to provide the capability of bypassing data even when the core is under test. By doing so, we
will be able to bypass data through Core k to test other cores even when Core k itself is under test. This will
reduce the overall test time specifically when the application test time for a core is too long. We will clarify
this later in section 4. Table 1 summarizes the operation modes.
2.4 Cost Calculation for Source-Sink Paths
In the previous section we presented two cost functions for time and area (see Equation 3) of the bypass
circuitry pertaining to single cores. In this work we decided to focus only on the test time optimization. Other
heuristics can be proposed based on the area cost to look at the system testing from another angle.
Based on Equation 3, when Core k participates in a test path (by bypassing data between its ports i_k and j_k) to transfer b-bit test data between source and sink, its time overhead will be proportional to t_{ij_k} = ⌈b / min{m_{i_k}, n_{j_k}}⌉ cycles.

Figure 6: The pipeline-like structure for each source-sink path (b is the bit length of the test data, i.e. an input pattern or an output signature).

Figure 7: An example of bypass scheduling: three alternative schedules of cores C1, C2 and C3, with high, medium and low interface cost, together with their characteristic functions.

However, the distribution of its t_{ij_k}-cycle activities is another issue. The
whole path, as shown in Figure 6 is similar to a pipeline system with N stages in which each stage requires
t_{ij_k} cycles. The difference, however, from conventional pipelines is in the scheduling method. In conventional
pipelining [Ston90], we define the pipeline clock period to be equal to the slowest stage delay and then schedule
the activities accordingly. In our problem, we don't want to devise too many registers in the interface between
cores to pile up all data packets. Instead, we have to implement an innovative mechanism by which the
bypassing is performed as soon as a packet of the appropriate size is ready.
To make this point clear let's consider the example of Figure 7 in which the sink is an input port of a core
under test and requires 16-bit test data. There are three cores in the source-sink path with time costs
of 4, 4 and 2, respectively. These cost values correspond to the time overhead required to packetize (serial
to parallel or parallel to serial) the test data. For example, 16-bit test data would be dis-assembled into four
packets (of 4-bit each) in 4 cycles to transfer through Core 1. Three bypass scheduling choices are shown.
We used space-time table similar to the reservation table [Ston90] in pipelining. Each row corresponds to
a core and each column corresponds to a time step. An entry (C1, C2 or C3) in the table shows that the
corresponding core is bypassing a packet of data in that cycle. For example, in all three schedules shown in
this figure Core 1 bypasses a packet of 4-bit data in the first four cycles. In Figure 7(a), we are not taking
advantage of the pipeline-like structure at all. Activity of one core starts when the previous one is finished.
More importantly, we need an expensive hardware interface between cores to pile up all packets (at most four
packets of 4-bit data) of data before sending it to the next core in the path, obviously a bad choice. Figure 7(b)
pictures a case that we need cheaper interface (at most two packets of 4-bit data) and it consumes 7 cycles.
Figure
7(c) pictures the superior choice which minimizes the cost (e.g. registers used anyway in bit match
circuits) and the data transfer time into 6 cycles. We will shortly show that finding such optimized schedule is
possible by constructing the characteristic function of the path and factorizing it as much as possible starting
from the outside without changing the order of core variables. These functions are shown also in Figure 7.
From above argument and example, it's clear that for a given shortest path the bounds for total bypass
scheduling time for a path consists of N cores are:
Upper Bound: Σ(k=1..N) t_{ij_k} cycles (no overlap of bypass activities);
Lower Bound: max(1≤k≤N) t_{ij_k} cycles (complete overlap),   (4)
where i and j are assumed to be single specific input and output ports of Core k, respectively, through which
data is bypassed.
2.4.1 Path Characteristic Function
The bounds presented in equation 4 can be used as data transfer time evaluation heuristics to identify the
shortest path between two test points. The upper and lower bounds may, however, suggest different solutions.
For example, suppose there are two paths between a source and a sink. The first path has four cores with
t_{ij_k}'s equal to 4, 4, 3 and 3. The second path has three cores with t_{ij_k}'s equal to 6, 1 and 1. The upper bound heuristic selects the second path as the shortest (a smaller total, 8 cycles versus 14), while the lower bound heuristic chooses the first path as the shortest (a cost of 4 compared to 6).
Briefly speaking, the overall time cost of bypass scheduling depends on the time distribution of activities
among cores. This example shows that we need a mechanism to evaluate the actual time needed for bypass
scheduling when the data is packetized based on the available bit widths and transferred from one core to
another. This is needed not only for overall time evaluation but also to have the complete bypass schedule for
the test controller in test session to tell cores how to behave. To do this, we defined the path characteristic
function (pcf).
The pcf is written by starting from the sum t_{ij_1}·C1 + t_{ij_2}·C2 + · · · + t_{ij_N}·CN, and then factorizing the coefficients starting from the outermost possible factor and continuing inward. Any other type of factorizing would lead to a different, sub-optimal schedule. In the example of Figure 7 we start from 4C1 + 4C2 + 2C3. This corresponds to
the schedule of Figure 7 (a). We interpret each "+" in the pcf as sign of sequential activity. So, the function
here means, first Core 1 has to bypass data for 4 cycles, then Core 2 bypasses data for 4 cycles, and finally
Core 3 bypasses data for 2 cycles. If we factorize 2 out to become 2(2C1 + 2C2 + C3), the pcf corresponds to the
schedule of Figure 7 (b). Note that the factor out of the parenthesis reflects the number of repetitions of that
sequencing, each starts p cycles after another where p is the largest coefficient inside the parentheses (2 for
this pcf). Finally, if we continue factorizing the terms to: 2(2(C1 +C2)+C3), the pcf corresponds to Figure
7 (c), which consumes less time (6 cycles) and requires a cheaper interface. The pcf 2(2(C1 + C2) + C3) suggests a schedule in which Core 2 bypasses data after Core 1. This sequence is repeated twice, followed by
bypassing data by Core 3. Finally, the whole thing is repeated twice. Briefly, the general formula for a pcf
function is:

f^r = R^r ( f^{r-1}_1 + f^{r-1}_2 + · · · + f^{r-1}_I ).

Note that in this function the superscript r denotes the level factorized in the function. R^r is the coefficient outside of the parenthesis after factoring it out, and f^{r-1}_1, ..., f^{r-1}_I are the factorized terms inside the rth level of parenthesis. The form of the inner-most terms is f^0_i = c·Ck, similar to the 2C1 or C3 terms that appeared in the example of Figure 7. The following recursive formula is a simple way of computing the overall scheduling time T(f^r) based on the pcf function:

T(f^r) = (R^r − 1)·p^r + Σ(i=1..I) T(f^{r-1}_i),

where p^r is the largest coefficient inside the rth level of parentheses and T(f^0_i) = c for an inner-most term c·Ck.

Table 2: Example of pcf recursive formula.
As an example, consider the pcf corresponding to the example of Figure 7 (c), which is 2(2(C1+C2)+C3).
Table 2 shows how we get T(f^r) = 6 as the overall scheduling time.
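The recursion can be exercised directly. The sketch below (the nested-tuple representation and the function name are choices of this illustration, not of the paper) reproduces the 10-, 7- and 6-cycle totals of the three schedules in Figure 7.

    def pcf_time(f):
        # A pcf is either an inner-most term (coeff, "Ck") or a node (R, [sub_pcfs]).
        R, inner = f
        if isinstance(inner, str):          # c * Ck takes c cycles
            return R
        p = max(sub[0] for sub in inner)    # largest coefficient inside the parentheses
        return (R - 1) * p + sum(pcf_time(sub) for sub in inner)

    a = (1, [(4, "C1"), (4, "C2"), (2, "C3")])          # 4C1 + 4C2 + 2C3
    b = (2, [(2, "C1"), (2, "C2"), (1, "C3")])          # 2(2C1 + 2C2 + C3)
    c = (2, [(2, [(1, "C1"), (1, "C2")]), (1, "C3")])   # 2(2(C1 + C2) + C3)
    print(pcf_time(a), pcf_time(b), pcf_time(c))        # 10 7 6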
3 Graph Modeling and Shortest Path Problem
Our objective is to model the port accessibility of cores within a system as a directed weighted graph in which
the shortest path between any two points (called source and sink) reflects the fastest route to transfer packets
of data between those two points. From testing perspective, with such model we can find the fastest route to
transfer test data (predefined or random pattern) from the system primary inputs to any of the core input
ports. Similarly, we can find the fastest route to transfer test data (signature) from any of the core output
ports to the system primary output.
Equation 3 defines the cost associated with bypassing data from Port i to Port j of a core. So, in our graph
modeling, a node corresponds to a port and an edge corresponds to the interconnects between ports or the
bypass possibilities. Those edges reflecting the bypass choices form a bipartite subgraph for each core whose
cost (weight) will be determined based on Equation 3. The cost of an edge corresponding to the bypass delay
shows the time required to transfer the packetized data from one point to another. The time cost of the
existing interconnections between cores is assumed to be zero since no additional circuit/delay for packetizing
or transfer-control is needed.
Figure
8(a) shows a system, made of four cores with different ports and bit widths, under test. For
consistency, we showed the test pattern generator (TPGR) and signature analyzer (MISR) also as cores. The
system has four cores, two primary inputs (going to Core 1 and Core 4) and three primary outputs (from Core
2, Core 3 and Core 4).
Figure 8(b) shows the corresponding Core Bypass Graph (CBG). Recall from Equation 3 that the time cost depends on the bit width b of the test data that we desire to transfer. So, depending on the bit width of the test data, different cost values should be used in finding the shortest paths. As an example, in Figure 8(b) near each edge we have shown two cost values. The cost values outside the parentheses are t_{ij_k} = ⌈8 / min{m_{i_k}, n_{j_k}}⌉, showing the time overhead to transfer 8-bit test data. Similarly, the cost values inside the parentheses are t_{ij_k} = ⌈16 / min{m_{i_k}, n_{j_k}}⌉, reflecting the time overhead to transfer 16-bit test data.

Figure 8: A core-based system: (a) Bit widths; (b) The CBG graph.

Figure 9: Dijkstra Algorithm.
    for each vertex v_q: s_q := weight of the edge from the source to v_q, or infinity if no such edge exists;
    repeat {
        Select an unmarked vertex v_q such that s_q is minimal;
        Mark v_q;
        foreach (unmarked vertex v_p): s_p := min(s_p, s_q + w(v_q, v_p));
    } until (all vertices are marked);

All edges without a cost value
correspond to the existing interconnect between cores and are assumed to have time cost of zero because if we
ignore resistance and capacitance of wires, any packet of data will be transferred almost immediately.
As for the shortest path algorithm itself, there are many well-known solutions proposed in the literature,
graph theory, and operations research texts. Some of them are the Bellman, Dijkstra, Bellman-Ford and Liao-Wong algorithms [DeMi94] [BoMu76], whose running times range from O(n^2) to O(n^3), where n is the number of nodes or edges, and whose capabilities range from finding a single shortest path to finding the shortest paths between all pairs of nodes. In our work, we have used the Dijkstra algorithm, which is a greedy algorithm providing an exact solution with computational complexity O(|E| + |V| log |V|), where |V| and |E| are the total numbers of nodes and edges in the graph, respectively. This is one of the fastest among such algorithms. Cycles are also considered
in the Dijkstra Algorithm.
The pseudocode of the algorithm is shown in Figure 9. The algorithm keeps a list of tentative shortest
paths, which are then iteratively refined. Initially, all vertices are unmarked and the tentative weights s_q are initialized from the source. Thus, the path weights to each vertex are either the weights on the edges from the source or infinity. Then,
the algorithm iterates the following steps until all vertices are marked. It selects and marks the vertex that is
head of a path from the source whose weight is minimal among those paths whose heads are unmarked. The
corresponding tentative path is declared final. It updates the other (tentative) path weights by computing the
minimum between the previous (tentative) path weights and the sum of the (final) path weight to the newly
marked vertex plus the weights on the edges from that vertex. Details can be found in many graph theory
and synthesis books [BoMu76] [DeMi94].
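For illustration, a heap-based variant achieving the stated complexity is sketched below; the adjacency-list format, node names and the toy CBG fragment are assumptions of this example, and the original uses the array-based formulation of Figure 9.

    import heapq

    def dijkstra(graph, source):
        # graph: node -> list of (neighbor, weight); weights are the bypass time costs
        # t_ij_k on bypass edges and 0 on existing interconnect edges
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                              # stale heap entry
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    cbg = {"SRC": [("core1.in", 0)],
           "core1.in": [("core1.out", 2)],   # bypass edge: ceil(b / min(m, n)) = 2
           "core1.out": [("core2.in", 0)],   # existing interconnect: cost 0
           "core2.in": [("core2.out", 4)],
           "core2.out": [("SINK", 0)]}
    print(dijkstra(cbg, "SRC")["SINK"])      # 6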
Our shortest test path algorithm is summarized in Figure 10. The algorithm first constructs the CBG
graph and finds the time cost values corresponding to the edges in CBG. Then, it uses the Dijkstra algorithm
to find all the input/output test shortest paths.

Figure 10: The shortest path algorithm applied to the CBG.
    Construct the Core Bypass Graph (CBG);
    Compute the time cost values, i.e. t_{ij_k} = ⌈b / min{m_{i_k}, n_{j_k}}⌉, for the bypass edges;
    Apply the DIJKSTRA algorithm to find all input test shortest paths (1 ≤ i ≤ Nin_k);
    Apply the DIJKSTRA algorithm to find all output test shortest paths (for each core output port j).

Figure 11: Four input/output shortest paths for Core 1 using the Dijkstra Algorithm (the input test and output test shortest paths between the global source/sink and the ports of Core 1).
To show the process, we continue our running example by applying the Dijkstra algorithm to the CBG
graph of Figure 8(b) for Core 1 only. Note carefully that Core 1 has two 8-bit and two 16-bit ports. So, we
consider the appropriate cost of edges accordingly. That is the cost values outside parenthesis when a test
point is the core 8-bit port and the cost values inside parenthesis when a test point is the core 16-bit port.
The result is summarized in Figure 11 that shows the shortest paths between global source and core input
ports, with thick solid and broken lines. Figure 11 also shows the shortest paths between two output ports of
1 and the global sink using the two different dotted lines.
4 The Structural Testing
After we find all the shortest paths for all core input/output ports, we have to apply the bypass scheduling
method explained in section 2 for each core. Finally, using some path scheduling method we have to combine
these path schedules to overlap the test activities and minimize the test time as much as possible.
The different steps of structural test process are summarized in the pseudocode of Figure 12. Briefly, the
algorithm finds the best schedule of each input test path and schedules it according to the factorized form
of the characteristic function. For the highest concurrency (test time minimization) the scheduling of each
test path starts from the first possible time step, that is "1" for the input test paths transferring the test
patterns to the core under test and "the time step that the core output signature becomes ready" for the test
output paths. In this work, we have employed a simple As Soon As Possible scheduling method and consider
scheduling the cores in order. Each path is scheduled in a pipeline fashion as explained in section 4 while the
ASAP strategy is used to overlap the path execution times.
STRUCTURAL_TEST (Input/Output Test Shortest Paths)
{
    Construct and factorize the characteristic functions for the input test paths;
    Construct and factorize the characteristic functions for the output test paths;
    for (all Cores k) {
        schedule ASAP the input test paths (1 ≤ i ≤ Nin_k) based on the available bypass/connections;
        T_k = max{completion times of Core k's input test paths} + T_test-application(k);
    }
    for (all Cores k)
        schedule ASAP the output test paths (starting no earlier than T_k) based on the available bypass/connections;
}

Figure 12: Structural Test Algorithm
4.1 Possible Extensions
We have considered only unidirectional interconnects and buses in this work. In our graph modeling, an n-bit bidirectional bus is modeled as two unidirectional buses, p bits and q bits wide (p + q = n).
Although such extension of our graph model is straight forward, the shortest-path formulation needs to be
changed such that it can also decide for the best value of p and q in order to minimize the transfer time and
bit-match overhead. At the moment, we can take the bidirectional buses into account only if the values for p
and q are fixed (e.g. p = q = n/2). We don't address the general case in this paper but intend to consider it in the near
future.
Not pursued in this paper, more sophisticated scheduling methods can be implemented for higher perfor-
mance. For example, we can relax the above two assumptions, i.e. ASAP and fixed ordering. Moreover, by
selecting a set of k disjoint (not necessarily the shortest) paths to bypass test data for k different pairs of points concurrently, the test time can be reduced even further. For systems using few busses (e.g.
VSI alliance bus wrapper design in which mainly three busses are responsible for connecting all cores in the
system) our test scheme shows lower efficiency but can be still used to find the best order of bus access to
optimize the test time.
4.2 Completing the Running Example
Figure
13 shows all the input and output test shortest paths to all four cores in our example. The time cost
is shown within the shaded circles attached to each core in the path.
Figures 14(a) through (d) show the complete schedules for the shortest paths of Core 1 through Core 4,
respectively. We assumed that Cores 1, 2, 3 and 4 require 9, 9, 11 and 25 cycles, respectively, as test application
time (the shaded area in Figure 14). Note also that, to clarify the role of the signals introduced before in section 2,
we listed all signals. A "0" in this table means that the signal is not active (low logic level). All other entries ("1",
"2", "3" and "4") show that an active signal (high logic level) is applied in the corresponding time step. We used
different symbols (1 through 4) to refer to the core under test. For example, entries "4" show the signals that are
active when Core 4 is under test. All empty locations in this table correspond to don't-care conditions
and can be used for logic optimization in the implementation of this test controller. Note that some of the bypass_ij^k
signals are defined as "0" to avoid data conflicts on port j.
Figure 13: All the input/output shortest paths.
Figure 15 shows the final schedule according to the scheduling method explained before. This scheduling is
basically obtained by overlapping the four schedules of Figure 14, with some time adjustment when a port
or bypassing route is not available.
This schedule shows that in the structural test session the input data packets (test patterns) and the
output data packets (signatures) are interleaved to overlap the activities and the test data transfers in
the test session. The importance of a core's capability to bypass data while it executes a test pattern is clear in this
figure. More specifically, when a core is under test, the bypass routes can transfer test data for other cores.
This kind of overlapping reduces the test time dramatically. Note also that in Figure 15 bypass_12^1, bypass_22^2 and
bypass_12^3 are omitted because the corresponding bypass routes are never needed. These removals reduce
the hardware overhead of the bypass circuitry.
5 Experimental Results
In this section, we demonstrate our approach using a system made of four cores. The circuits have been
synthesized from high level descriptions using the SYNTEST synthesis system [HPCN92]. Logic level synthesis
is done using the ASIC Synthesizer from the COMPASS Design Automation suite of tools with a 0.8-micron
CMOS library [Comp93]. Fault coverage curves are found for the resulting logic level circuits using AT&T's
GENTEST fault simulator [Gent93]. The probability of aliasing within the MISRs is neglected, as are faults
within the TPGRs and other test circuitry.
Since standard core benchmarks have not been introduced yet, we decided to design four example
circuits ourselves as cores, all with eight-bit-wide datapaths. The first core (Core 2) evaluates a third-degree polynomial.
Our other examples implement three high-level synthesis benchmarks: a differential equation solver (Core 3), the Facet example (Core 1) [GDWL92], and a fifth-order elliptic filter (Core 4) [KuWK85].
Figure 14: Bypass schedules for the four cores: (a) Core 1, (b) Core 2, (c) Core 3, (d) Core 4. Each schedule lists the control signals (TPGR, MISR, stop_1-stop_4, send_port_1, send_port_2, read_port_1-read_port_3) over the time steps for the core under test.
Figure 15: Complete bypass schedule (pipelined fashion) for all four cores.
Core circuit name   Normal schedule   Transistor count   Total faults   Faults detected   Fault coverage [%]   Test time [cycles]
Poly                9                 8684               899            804               89.43                932
Table 3: Statistics for the four cores when tested separately.
Parameter                                         Full Scan Test   Structural Test
Test overhead [transistors]: output / bit match   4484             3548
Test overhead [transistors]: test controller      1045             6062
Test time per pattern: Core 2                     101              15 (overlapped)
Test time per pattern: one iteration              501              34
Fault coverage                                    88.44%           91.43%
Test time [cycles]                                57363            9180
Table 4: Comparison between scan and our proposed structural test.
Note that each core consists of a datapath and a controller, both of which are made testable using BIST applied by SYNTEST
in an integrated fashion [HPCN92][NoCP97]. Also, to fit the cores into the port specification shown in Figure 8(a),
we grouped or replicated the actual inputs/outputs of the circuit.
Table
3 summarizes the transistor count, fault coverage and the test time for each core when tested as a
non-embedded circuit separately. Note that the test time has been expressed in terms of clock cycles. We
execute the corresponding schedule of the core until the fault coverage curve is saturated. For example, for
the Diffeq core, the schedule consists of 11 cycles and the fault simulation process requires 70 iterations (each for
a set of random patterns) until the curve is saturated.
We then put these four cores together and obtained the fault simulation of the whole system based on "scan"
and our "structural" test. Table 4 summarizes the fault coverage statistics for these two methods.
In scan testing, the order of cores in the scan-in and scan-out chains affects the test time. In our experiments
we assumed a single scan-in chain with the core order ending at Core 4. In the single scan-out
chain the order is assumed to end at Core 1. The lower fault coverage of scan is
expected because that method cannot capture the interconnect faults. Our proposed structural test approach
requires slightly more test circuitry (about 5.4%) than full scan, mainly due to its test controller. Overall, the
structural test overhead is 23.18% of the system cost in this example.
Note that the test methods for the individual cores (datapath and controller) achieve fault coverage in the range
of 87.97% to 92.81% (see Table 3). Thus, regardless of the method employed for system testing, achieving very
high fault coverage in this system without redesigning the cores would not be possible.
Parameter                       Multiple-chain Scan   Structural Test
Test controller [transistors]   4810                  6062
Test time [cycles]              15390                 9180
Table 5: Comparison between multiple-chain scan and our method.
Using our structural test approach, the fault coverage has increased by almost 3%, mainly due to detecting the interconnect faults.
The real advantage of our structural method is the test time: by bypass scheduling we overlap the transfer of
test data between test points using the existing interconnections. This results in an almost 84% test time
reduction compared to scan.
Note that in Table 4 we compared our method with single-chain scan. Using multiple-chain scan will obviously
improve the test time, at the expense of more overhead for control and wiring. To be more specific,
the result of using the maximum number of scan chains, four in this example - one chain for each core - is
summarized in Table 5. The basic idea of using multiple chains is to apply scan to different partitions in
parallel. Independent multiple chains require dedicated control, which increases the test control overhead by a
factor of almost 4. The overall test overhead circuitry remains quite high. More importantly, independent chains
require separate scan-in, scan-out, core-select and test/normal pins, which increases the number of tester pins to four
per chain for the four chains, compared to one (test/normal) in our method. The test time reduction depends on the bottleneck
core, i.e., the one with the largest test time, which itself depends on the input/output bit width, the core execution time
and the number of test patterns required. In fact, in our example the bottleneck core is Core 1, which has 48-bit
inputs/outputs and requires about 270 test patterns. This results in a test time of 15390 cycles, which is still
67.5% higher than that of our proposed structural core testing method.
6 Summary
We have proposed a test methodology for testing a core-based system in its entirety. A "bypass" mode circuit
is added to each core and is used to transfer test data from a source (data generation point) to a sink (data
consumption point) through the existing interconnections. The system is modeled as a directed weighted
graph in which the accessibility of the core input and output ports is formulated as a shortest-path problem.
The test data distribution and the collection of signatures are overlapped in a pipelined fashion to minimize test
time. The experimental results are promising in terms of test time and the quality of testing of the interconnections
among cores.
--R
"A Knowledge Based System for Designing Testable VLSI Chips,"
"Finding Defects with Fault Models,"
Testability Concepts for Digital ICs
"A Unifying Methodology for Intellectual Property and Custom Logic Testing,"
Graph Theory With Applications
"Test methodology for embedded cores which protects intellectual property,"
"Testability Analysis and Insertion for RTL Circuits Based on Pseudorandom BIST,"
"Testing Systems on a Chip,"
"User Manuals for COMPASS VLSI V8R4.4,"
Synthesis and optimization of digital circuits
"User Manuals for GENTEST S 2.0,"
Introduction to Chip and System Design
"SYNTEST: An Environment for System-Level Design for Test,"
"Direct Access Test Scheme - Design of Block and Core Cells for Embedded ASICs,"
"TestSockets: A Framework for System-On-Chip Design,"
VLSI and Modern Signal Processing
"A Structured and Scalable Mechanism for Test Access to Embedded Reusable Cores,"
"A Structured and Scalable Mechanism for Test Access to Embedded Reusable Cores,"
"A Scheme for Integrated Controller-Datapath Fault Testing,"
"Test Synthesis in the Behavioral Domain,"
"Test Responses Compaction in Accumulators with Rotate Carry Adders,"
Application Specific Integrated Circuits
High Performance Computer Architecture
"Testing Embedded Cores Using Partial Isolation Rings,"
"0.8-Micron CMOS VSC450 Portable Library,"
Version 1.0
--TR
--CTR
Tomokazu Yoneda , Hideo Fujiwara, Design for Consecutive Testability of System-on-a-Chip with Built-In Self Testable Cores, Journal of Electronic Testing: Theory and Applications, v.18 n.4-5, p.487-501, August-October 2002
Tomokazu Yoneda , Masahiro Imanishi , Hideo Fujiwara, Interactive presentation: An SoC test scheduling algorithm using reconfigurable union wrappers, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
Tomokazu Yoneda , Kimihiko Masuda , Hideo Fujiwara, Power-constrained test scheduling for multi-clock domain SoCs, Proceedings of the conference on Design, automation and test in Europe: Proceedings, March 06-10, 2006, Munich, Germany
Mohammad Hosseinabady , Abbas Banaiyan , Mahdi Nazm Bojnordi , Zainalabedin Navabi, A concurrent testing method for NoC switches, Proceedings of the conference on Design, automation and test in Europe: Proceedings, March 06-10, 2006, Munich, Germany
Rainer Dorsch , Hans-Joachim Wunderlich, Reusing Scan Chains for Test Pattern Decompression, Journal of Electronic Testing: Theory and Applications, v.18 n.2, p.231-240, April 2002
rika Cota , Luigi Carro , Marcelo Lubaszewski, Reusing an on-chip network for the test of core-based systems, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.9 n.4, p.471-499, October 2004
rika Cota , Luigi Carro , Marcelo Lubaszewski , Alex Orailolu, Searching for Global Test Costs Optimization in Core-Based Systems, Journal of Electronic Testing: Theory and Applications, v.20 n.4, p.357-373, August 2004
Qiang Xu , Nicola Nicolici, Modular and rapid testing of SOCs with unwrapped logic blocks, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.13 n.11, p.1275-1285, November 2005 | shortest path;structural testing;pipelined test schedule;at-speed testing;bypass mode |
608920 | Test Wrapper and Test Access Mechanism Co-Optimization for System-on-Chip. | Test access mechanisms (TAMs) and test wrappers are integral parts of a system-on-chip (SOC) test architecture. Prior research has concentrated on only one aspect of the TAM/wrapper design problem at a time, i.e., either optimizing the TAMs for a set of pre-designed wrappers, or optimizing the wrapper for a given TAM width. In this paper, we address a more general problem, that of carrying out TAM design and wrapper optimization in conjunction. We present an efficient algorithm to construct wrappers that reduce the testing time for cores. Our wrapper design algorithm improves on earlier approaches by also reducing the TAM width required to achieve these lower testing times. We present new mathematical models for TAM optimization that use the core testing time values calculated by our wrapper design algorithm. We further present a new enumerative method for TAM optimization that reduces execution time significantly when the number of TAMs being designed is small. Experimental results are presented for an academic SOC as well as an industrial SOC. | Introduction
System-on-chip (SOC) integrated circuits composed of proces-
sors, memories, and peripheral interface devices in the form of embedded
cores, are now commonplace. Nevertheless, there remain
several roadblocks to rapid and efficient system integration. Test development
is now seen as a major bottleneck in SOC design, and
test challenges are a major contributor to the widening gap between
design and manufacturing capability [23].
The 1999 International Technology Roadmap for Semiconductors
[12] clearly identifies test access for SOC cores as one of the
challenges for the near future. Test access mechanisms (TAMs)
and test wrappers have been proposed as important components of
an SOC test access architecture [23]. TAMs deliver pre-computed
test sequences to cores on the SOC, while test wrappers translate
these test sequences into patterns that can be applied directly to the
cores.
Test wrapper and TAM design is of critical importance in SOC
system integration since it directly impacts the vector memory depth
required on the ATE, as well as testing time, and thereby affects
test cost. A TAM and wrapper design that minimizes the idle time
spent by TAMs and wrappers during test directly reduces the number
of don't-care bits in vectors stored on the tester, thereby reducing
vector memory depth. The design of efficient test access architectures
has become an important focus of research in core test integration
[1, 3, 4, 5, 6, 14, 17, 19]. This is especially timely and relevant,
since the proposed IEEE P1500 standard provides a lot of freedom in
optimizing its standardized, but scalable wrapper, and leaves TAM
optimization entirely to the system integrator [10, 18]. (This research
was supported in part by the National Science Foundation under grant
number CCR-9875324.)
The general problem of SOC test integration includes the design
of TAM architectures, optimization of the core wrappers, and
test scheduling. The goal is to minimize the testing time, area
costs, and power consumption during testing. The wrapper/TAM
co-optimization problem that we address in this paper is as follows.
Given the test set parameters for the cores on the SOC, as well as
the total TAM width, determine an optimal number of TAMs for the
SOC, an optimal partition of the total TAM width among the TAMs,
an optimal assignment of cores to each TAM, and an optimal wrapper
design for each core, such that the overall system testing time is
minimized. In order to solve this problem, we examine a progression
of three incremental problems structured in order of increasing com-
plexity, such that they serve as stepping-stones to the more general
problem of wrapper/TAM design. The first problem PW is related
to test wrapper design. The next two problems PAW and PPAW are
related to wrapper/TAM co-optimization.
1. PW : Design a wrapper for a given core, such that (a) the core
testing time is minimized, and (b) the TAM width required for the
core is minimized.
2. PAW : Determine (i) an assignment of cores to TAMs of given
widths and (ii) a wrapper design for each core, such that SOC testing
time is minimized. (Item (ii) equals PW .)
3. PPAW : Determine (i) a partition of the total TAM width among
the given number of TAMs, (ii) an assignment of cores to the TAMs,
and (iii) a wrapper design for each core, such that SOC testing time
is minimized. (Items (ii) and (iii) together equal PAW .)
These three problems lead up to PNPAW , the more general problem
of wrapper/TAM co-optimization described as follows.
PNPAW : Determine (i) the number of TAMs for the SOC, (ii) a partition
of the total TAM width among the TAMs, (iii) an assignment
of cores to TAMs, and (iv) a wrapper design for each core, such that
SOC testing time is minimized. (Items (ii), (iii) and (iv) together
equal PPAW .)
In the remainder of this paper, we formally define and analyze these
problems, and propose solutions for each.
In this paper, we assume the "test bus" model for TAMs. We
assume that the TAMs on the SOC operate independently of each
other; however, the cores on a single TAM are tested sequentially.
This can be implemented either by (i) multiplexing all the cores assigned
to a TAM, or (ii) by testing one of the cores on the TAM,
while the other cores on the TAM are bypassed. Furthermore, in this
paper, we are addressing the problem of core test only; hence, we
do not discuss issues related to test wrapper bypass and interconnect
test.
The organization of this paper is as follows. In Section 2, we
discuss prior work in the area of TAM and test wrapper design. In
Section 3, we present two SOCs that we use as running examples
throughout the paper. In Section 4, we address Problem PW . In
Section 5, we develop improved integer linear programming (ILP)
models for optimizing core assignment to TAMs (Problem PAW ).
In Section 6, we present new ILP models for TAM width partitioning
(Problem PPAW ). In Section 7, we present an enumerative method
that can often reduce the execution time required to solve PPAW
when the number of TAMs is small. Finally, in Section 8, we examine
PNPAW , the general problem of wrapper/TAM co-optimization.
Section 9 concludes the paper.
2 Prior work
Test wrappers provide a variety of operation modes, including
normal operation, core test, interconnect test, and (optional) by-pass
[16]. In addition, test wrappers need to be able to perform test
width adaptation if the width of the TAM is not equal to the number
of core terminals. The IEEE P1500 standard addresses the design of
a flexible, scalable wrapper to allow modular testing [10, 18]. This
wrapper is flexible and can be optimized to suit the type of TAM and
test requirements for the core.
A "test collar" was proposed in [21] to be used as a test wrapper
for cores. However, test width adaptation and interconnect test were
not addressed. The issue of efficient de-serialization of test data by
the use of balanced wrapper scan chains was discussed in [6]. Balanced
wrapper scan chains, consisting of chains of core I/Os and
internal scan chains, are desirable because they minimize the time
required to scan in test patterns from the TAM. However, no mention
was made of the method to be used to arrive at a balanced assignment
of core I/Os and internal scan chains to TAM lines. The
TESTSHELL proposed in [16] has provisions for the IEEE P1500
required modes of operation. Furthermore, heuristics for designing
balanced wrapper scan chains, based on approximation algorithms
for the well-known Bin Design problem [7], were presented in [17].
However the issue of reducing the TAM width required for a wrapper
was not addressed.
A number of TAM designs have been proposed in the literature.
These include multiplexed access [11], partial isolation rings [20],
core transparency [8], dedicated test bus [21], reuse of the existing
system bus [9], and a scalable bus-based architecture called
RAIL [16]. Bus-based TAMs, being flexible and scalable, appear
to be the most promising. However, their design has largely been
ad hoc and previous methods have seldom addressed the problem
of minimizing testing time under TAM width constraints. While [1]
presents several novel TAM architectures (i.e., multiplexing, daisy
chaining and distribution), it does not directly address the problem
of optimal sizing of TAMs in the SOC. In particular, only internal
scan chains are considered in [1], while wrappers and functional
I/Os are ignored. Moreover, the lengths of the internal scan chains
are not considered fixed, and therefore [1] does not directly address
the problem of designing test architectures for hard cores.
More recently, integrated TAM design and test scheduling has
been attempted in [14, 19]. However, in [14, 19], the problem of
optimizing test bus widths and arbitrating contention among cores
for test width was not addressed. In [19], the cost estimation for
TAMs was based on the number of bridges and multiplexers used;
the number of TAM wires was not taken into consideration. Fur-
thermore, in [14] the impact of TAM widths on testing time was not
included in the cost function. The relationship between testing time
and TAM widths using ILP was examined in [3, 5], and TAM width
optimization under power and routing constraints was studied in [4].
However, the problem of effective test width adaptation in the wrapper
was not addressed. This led to an overestimation of testing time
and TAM width. Improved wrapper designs and new ILP models for
TAM design are therefore necessary.
In this paper, we present a new wrapper/TAM co-optimization
methodology that overcomes the limitations of previous TAM design
approaches that have addressed TAM optimization and wrapper
design as independent problems. The new wrapper design algorithm
that we present improves upon previous approaches by minimizing
the core testing time, as well as reducing the TAM width required for
the core. We propose an approach based on ILP to solve the problems
of determining an optimal partition of the total TAM width
and determining an optimal core assignment to the TAMs. We also
address a new problem, that of determining the optimal number of
TAMs for an SOC. This problem gains importance with increasing
SOC size. This paper, to the best of our knowledge, is the first in
which a wrapper/TAM co-optimization methodology has been applied
to an industrial SOC.
3 Example SOCs
In order to illustrate the proposed wrapper/TAM co-optimization
methods presented in this paper, and to demonstrate their effective-
ness, we use two representative SOCs as running examples through-out
this paper. The first one is an academic SOC named d695 (de-
scribed as system S2 in [5]), and the second one is an industrial SOC
from Philips, named p93791. The number (e.g., 93791) in each SOC
name is a measure of its test complexity. This number is calculated
by considering the numbers of functional inputs n_i, functional outputs m_i, scan chains sc_i, internal scan chain
lengths l_{i,1}, ..., l_{i,sc_i}, and test patterns p_i for each Core i, as
well as the total number of cores N in the SOC, all of which contribute
to the complexity of the wrapper/TAM co-optimization problem.
We calculate the SOC test complexity number using the formula
$\sum_{i=1}^{N} p_i \cdot (n_i + m_i + \sum_{r=1}^{sc_i} l_{i,r})$. The letters "d"
and "p" in d695 and p93791 refer to Duke University and Philips,
respectively.
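As a small illustration of this naming convention, the following Python sketch computes the complexity number from the formula above; the two cores in the example are made up and do not correspond to any benchmark.

# Sketch of the SOC test-complexity number, per the formula given above.
def soc_complexity(cores):
    """cores: list of (p_i, n_i, m_i, [l_i1, ..., l_i_sci]) tuples."""
    return sum(p * (n + m + sum(lengths)) for p, n, m, lengths in cores)

# Example with two invented cores:
print(soc_complexity([(12, 32, 32, [50, 50, 48]),
                      (75, 10, 8, [100, 99])]))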
SOC d695 consists of two combinational ISCAS'85 and eight sequential
ISCAS'89 benchmark circuits. Table 1 presents the test data
for each core in d695. We assume that the ISCAS'89 circuits contain
well-balanced internal scan chains. The proposed wrapper/TAM
co-optimization methodology is also applicable to SOCs containing
non-scanned sequential cores, since these cores can be treated as
combinational (having zero-length internal scan chains) for the purpose
of testing time calculation. SOC p93791 contains 32 cores.
Of these, 18 are memory cores embedded within hierarchical logic
cores.
Table 2 presents the data for the 14 logic cores and 18 embedded
memories in SOC p93791. We do not describe each core
in p93791 individually due to insufficient space. Test data for each
individual core in p93791 is presented in [13].
The experimental results presented in this paper were obtained
using a Sun Ultra 80 with a 450 MHz processor and 4096 MB memory.
Table 1. Test data for the cores in d695 [5] (circuit, number of test patterns, functional I/Os, scan chains, and minimum/maximum scan chain lengths).
Circuit        Test patterns   Functional I/Os   Scan chains   Scan chain lengths (Min-Max)
Logic cores    11-6127         109-813           11-46         1-521
Memory cores   42-3085         21-396            -             -
Table 2. Ranges in test data for the 32 cores in p93791.
4 Test wrapper design
A standardized, but scalable test wrapper is an integral part of
the IEEE P1500 working group proposal [10]. A test wrapper is a
layer of DfT logic that connects a TAM to a core for the purpose of
test [23]. Test wrappers have four main modes of operation. These
are (i) Normal operation, (ii) Intest: core-internal testing, (iii) Extest:
core-external testing, i.e., interconnect test, and (iv) Bypass mode.
Wrappers may need to perform test width adaptation when the TAM
width is not equal to the number of core terminals. This will often
be required in practice, since large cores typically have hundreds of
core terminals, while the total TAM width is limited by the number
of SOC pins. In this paper, we address the problem of TAM design
for Intest, and therefore we do not discuss issues related to Bypass
and Extest.
The problem of designing an effective width adaptation mechanism
for Intest can be broken down into three problems [17]: (i)
partitioning the set of wrapper scan chain elements (internal scan
chains and wrapper cells) into several wrapper scan chains, which
are equal in number to the number of TAM lines, (ii) ordering the
scan elements on each wrapper chain, and (iii) providing optional
bypass paths across the core. The problems of ordering scan elements
on wrapper scan chains and providing bypass paths were
shown to be simple in [17], while that of partitioning wrapper scan
chain elements was shown to be NP-hard. Therefore, in this sec-
tion, we address only the problem of effectively partitioning wrapper
scan chain elements into wrapper scan chains.
Recent research on wrapper design has stressed the need for balanced
wrapper scan chains [6, 17]. Balanced wrapper scan chains
are those that are as equal in length to each other as possible. Balanced
wrapper scan chains are important because the number of
clock cycles to scan in (out) a test pattern to (from) a core is a function
of the length of the longest wrapper scan-in (scan-out) chain.
Let s_i (s_o) be the length of the longest wrapper scan-in (scan-out)
chain for a core. The time required to apply the entire test set to the
core is then given by $T = (1 + \max\{s_i, s_o\}) \cdot p + \min\{s_i, s_o\}$,
where p is the number of test patterns. This time T decreases as
both s i and so are reduced, i.e., as the wrapper scan-in (and scan-
out) chains become more equal in length.
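A small Python helper makes the formula concrete; the pattern count p = 10 used below is an assumed value, and the per-pattern scan depths of 14 and 8 cycles are taken from the Figure 1 example discussed next.

# Test application time for one core, per the formula above.
def core_test_time(s_i, s_o, p):
    return (1 + max(s_i, s_o)) * p + min(s_i, s_o)

# Unbalanced vs. balanced wrapper chains (per-pattern scan depth 14 vs. 8), p = 10 assumed:
print(core_test_time(14, 14, 10), core_test_time(8, 8, 10))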
Figure
1 illustrates the difference between balanced and unbalanced
wrapper scan chains; Bypass and Extest mechanisms are not
shown. In Figure 1(a), wrapper scan chain 1 consists of two input
cells and two output cells, while wrapper scan chain 2 consists of
three internal scan chains that contain 14 flip-flops in total. This
results in unbalanced wrapper scan-in/out chains and a scan-in and
scan-out time per test pattern of 14 clock cycles each. On the other
hand, with the same elements and TAM width, the wrapper scan
chains in Figure 1(b) are balanced. The scan-in and scan-out time
per test pattern is now 8 clock cycles.
Figure 1. Wrapper chains: (a) unbalanced, (b) balanced.
The problem of partitioning wrapper scan chain elements into
balanced wrapper scan chains was shown to be equivalent to the
well-known Multiprocessor Scheduling and Bin Design problems
in [17]. In this paper, the authors presented two heuristic algorithms
for the Bin Design problem to solve the wrapper scan chain element
partitioning problem. Given k TAM lines and sc internal scan
chains, the authors assigned the scan elements to k wrapper scan
chains, such that max{s_i, s_o} was minimized. This approach is effective
if the goal is to minimize only max{s_i, s_o}. However, we are
addressing the wrapper design problem as part of the more general
problem of wrapper/TAM co-optimization, and therefore we would
also like to minimize the number of wrapper chains created. This
can be explained as follows. Consider a core that has four internal
scan chains of lengths 32, 8, 8, and 8, respectively, 4 functional
inputs, and 2 functional outputs. Let the number of TAM lines provided
be 4. The algorithms in [17] will partition the scan elements
among four wrapper scan chains as shown in Figure 2(a), giving max{s_i, s_o} = 32.
However, the scan elements may also be assigned
to only 2 wrapper scan chains as shown in Figure 2(b), which
also gives max{s_i, s_o} = 32. The second assignment, however, is
clearly more efficient in terms of TAM width utilization, and therefore
would be more useful for a wrapper/TAM co-optimization strategy
Figure 2. Wrapper design example using (a) four wrapper scan chains, and (b) two wrapper scan chains.
Consider Core 6, the largest logic core from p93791. Core 6
has 417 functional inputs, 324 functional outputs, 72 bidirectional
I/Os, and 46 internal scan chains of lengths: 7 scan chains of 500
bits, 30 scan chains of 520 bits, and 9 scan chains of 521 bits, respectively.
The Combine algorithm [17] was used to create wrapper
configurations for Core 6, for values of k between 1 and 64 bits.
Since the functional inputs in Core 6 outnumber the functional outputs,
s_i >= s_o. The value of s_i obtained for each value
of k is illustrated in Figure 3. From the graph, we observe that as
k increases, s_i decreases in a series of distinct steps. This is be-
cause as k increases, the core internal scan chains are redistributed
among a larger number of wrapper scan-in chains; thus s i decreases
only when the increase in k is sufficient to remove an internal scan
chain from the longest wrapper scan-in chain. For example, when
the internal scan chains in Core 6 are distributed among 24 wrapper
scan-in chains, s_i is 1040 bits long. The value of s_i remains at 1040
until k reaches 39, when s_i drops to 1020. Hence, for 24 <= k <= 38,
only 24 wrapper scan-in chains need be designed. Our wrapper design
strategy is therefore to (i) minimize testing time by minimizing
max{s_i, s_o}, and (ii) identify the maximum number k0 of wrapper
scan chains that actually need to be created to minimize testing time,
when k TAM lines are provided to the wrapper. The set of values of
s_i corresponding to the values of k is known as the set of
pareto-optimal points for the graph.
Figure 3. Decrease in s_i with increasing k for Core 6 (longest wrapper scan-in chain, in bits, versus TAM width in bits).
PW , the two-priority wrapper optimization problem that this section
addresses can now be formally stated as follows.
Given a core with n functional inputs, m functional outputs,
sc internal scan chains of lengths l_1, ..., l_sc, respectively,
and TAM width k, assign the n + m + sc wrapper scan
chain elements to k0 <= k wrapper scan chains such that (i)
max{s_i, s_o} is minimized, where s_i (s_o) is the length of the
longest wrapper scan-in (scan-out) chain, and (ii) k0 is minimum,
subject to priority (i).
Priority (ii) of PW is based on the earlier observation that
max{s_i, s_o} can be minimized even when the number of wrapper
scan chains designed is less than k. This reduces the width of the
TAM required to connect to the wrapper. Problem PW is therefore
analogous to the problem of Bin Design (minimizing the size
of the bins), with the secondary priority of Bin Packing (minimiz-
ing the number of bins). If the value of kmax in Problem PW is
always fixed at k, then Problem PW reduces to the Partitioning of
Scan Chains (PSC) Problem [17], and is therefore clearly NP-hard,
since Problem PSC was shown to be NP-hard in [17].
We have developed an approximation algorithm based on the
Best Fit Decreasing (BFD) heuristic [7] to solve PW efficiently. The
algorithm has three main parts, similar to [17]: (i) partition the internal
scan chains among a minimal number of wrapper scan chains
to minimize the longest wrapper scan chain length, (ii) assign the
functional inputs to the wrapper scan chains created in part (i), and
(iii) assign the functional outputs to the wrapper scan chains created
in part (i). To solve part (i), the internal scan chains are sorted
in descending order. Each internal scan chain is then successively
assigned to the wrapper scan chain, whose length after this assignment
is closest to, but not exceeding the length of the current longest
wrapper scan chain. Intuitively, each internal scan chain is assigned
to the wrapper scan chain in which it achieves the best fit. If there is
no such wrapper scan chain available, then the internal scan chain is
assigned to the current shortest wrapper scan chain. Next the process
is repeated for part (ii) and part (iii), considering the functional inputs
and outputs as internal scan chains of length 1. The pseudocode
for our algorithm Design wrapper is illustrated in Figure 4.
Procedure Design wrapper
Part (i)
1. Sort the internal scan chains in descending order of length
2. For each internal scan chain l
3. Find wrapper scan chain Smax with current maximum length
4. Find wrapper scan chain Smin with current minimum length
5. Assign l to wrapper scan chain S, such that length(S) + length(l) <= length(Smax) and length(S) + length(l) is maximum
6. If there is no such wrapper scan chain S then
7. Assign l to Smin
Part (ii)
8. Add the functional inputs to the wrapper chains created in part (i)
Part (iii)
9. Add the functional outputs to the wrapper chains created in part (i)
Figure
4. Algorithm for wrapper design that minimizes testing
time and number of wrapper scan chains.
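The following Python sketch is one possible rendering of the Design wrapper heuristic of Figure 4. It handles the scan-in side only (functional outputs would be distributed in the same way to form the scan-out chains), and the helper names are ours, not the paper's.

# A runnable sketch of the BFD-style Design_wrapper heuristic of Figure 4.
def design_wrapper(scan_chains, n_inputs, k):
    """Partition internal scan chains and input cells into at most k wrapper chains."""
    chains = []  # each wrapper chain is a list of element lengths

    def best_fit(length):
        cur_max = max((sum(c) for c in chains), default=0)
        # Wrapper chain whose new length is closest to, but not above, the current maximum.
        best, best_len = None, -1
        for c in chains:
            new_len = sum(c) + length
            if new_len <= cur_max and new_len > best_len:
                best, best_len = c, new_len
        if best is not None:
            best.append(length)
        elif len(chains) < k:
            chains.append([length])              # open a new wrapper chain
        else:
            min(chains, key=sum).append(length)  # fall back to the shortest chain

    # Part (i): internal scan chains, longest first.
    for l in sorted(scan_chains, reverse=True):
        best_fit(l)
    # Part (ii): functional inputs as length-1 elements (outputs handled symmetrically).
    for _ in range(n_inputs):
        best_fit(1)
    return chains

chains = design_wrapper([32, 8, 8, 8], n_inputs=4, k=4)
print(len(chains), [sum(c) for c in chains])  # expect: 2 [32, 28], cf. Figure 2(b)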
We base our algorithm on the BFD heuristic mainly because BFD
utilizes a more sophisticated partitioning rule than First Fit Decreasing
since each scan element is assigned to the wrapper scan
chain in which it achieves the best fit [7]. FFD was used as a subroutine
to the wrapper design algorithm in [17]. In our algorithm,
a new wrapper scan chain is created only when it is not possible
to fit an internal scan chain into one of the existing wrapper scan
chains without exceeding the length of the current longest wrapper
scan chain. Thus, while the algorithms presented in [17] always
use k wrapper scan chains, Design wrapper uses as few wrapper
scan chains as possible, without compromising test application
time. The worst-case complexity of the Design wrapper algorithm
is O(sc log sc + sc k), where sc is the number of internal scan
chains and k is the limit on the number of wrapper scan chains.
From
Figure
3, we further observe that as k is increased beyond
47, there is no further decrease in testing time, since the longest
internal scan chain of the core has been assigned to a dedicated
wrapper scan chain, and there is no other wrapper scan chain longer
than the longest internal scan chain. We next derive an expression
for this maximum value of TAM width kmax required to minimize
testing time for a core.
Theorem 1 If a core has n functional inputs, m functional outputs,
and sc internal scan chains of lengths l_1, ..., l_sc, respectively, an
upper bound kmax on the TAM width required to minimize testing
time is given by $k_{max} = \lceil (\max\{n,m\} + \sum_{i=1}^{sc} l_i) / \max_i\{l_i\} \rceil$.
Proof: The test application time for a core is given by
$T = (1 + \max\{s_i, s_o\}) \cdot p + \min\{s_i, s_o\}$, where s_i (s_o) is the length
of the longest wrapper scan-in (scan-out) chain, and p is the number
of test patterns in the test set. A lower bound on T, for any
TAM width k, is therefore given by $(1 + \min\{\max\{s_i, s_o\}\}) \cdot p + \min\{\min\{s_i, s_o\}\}$.
The lowest value that max{s_i, s_o} and min{s_i, s_o}
can attain is given by the length of the core's longest
internal scan chain, max_i{l_i}. Therefore,
$\min\{T\} = (1 + \max_i\{l_i\}) \cdot p + \max_i\{l_i\}$. Let the upper bound on k at which
min{T} is reached be denoted as kmax. At this value of k, the number
of flip-flops assigned to each wrapper scan chain (either scan-in
or scan-out, whichever has more flip-flops) is at most max_i{l_i}.
Therefore kmax is the smallest integer such that $k_{max} \cdot \max_i\{l_i\}$
is at least the sum of all the flip-flops on the wrapper scan chains,
$\max\{n,m\} + \sum_{i=1}^{sc} l_i$. Thus, kmax is the
smallest integer such that $k_{max} \cdot \max_i\{l_i\} \ge \max\{n,m\} + \sum_{i=1}^{sc} l_i$.
Therefore, $k_{max} = \lceil (\max\{n,m\} + \sum_{i=1}^{sc} l_i) / \max_i\{l_i\} \rceil$.
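As a sanity check of Theorem 1, the following sketch evaluates kmax for Core 6. The split of its 46 internal scan chains into 7 chains of 500 bits, 30 of 520 bits and 9 of 521 bits is partly inferred (the count 30 follows from 46 - 7 - 9), and folding the 72 bidirectional I/Os into both the input and output counts is our assumption.

# Evaluating the Theorem 1 bound for Core 6 of p93791 (data partly inferred, see above).
from math import ceil

n = 417 + 72            # functional inputs + bidirectional I/Os (assumption)
m = 324 + 72            # functional outputs + bidirectional I/Os (assumption)
lengths = [500] * 7 + [520] * 30 + [521] * 9   # the count 30 is inferred from 46 - 7 - 9

k_max = ceil((max(n, m) + sum(lengths)) / max(lengths))
print(k_max)  # 47, consistent with the text: no further decrease in testing time beyond k = 47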
The value of kmax for each core can further be used to determine
an upper bound on the TAM width for any TAM on the SOC. In Section
6, we show how kmax can be used to bound the TAM widths,
when obtaining an optimal partition of total test width among TAMs
on the SOC.
Table
3 presents results on the savings in TAM width obtained
using Design wrapper for Core 6. For larger values of k, the number
of TAM lines actually used is far less than the number of available
TAM lines; thus, with respect to TAM width utilization, Design
wrapper is considerably more efficient than the wrapper design
algorithms proposed in [17].
Available TAM width   TAM width utilized   Longest wrapper scan chain
9                     9                    3081
22                    22                   1521
43-45                 43                   1000
Table 3. Design wrapper results for Core 6.
In the next section, we address PAW , the second problem of our
TAM/wrapper co-optimization framework-determining an assignment
of cores to TAMs of given widths and optimizing the wrapper
design for each core.
5 Optimal core assignment to TAMs
In this paper, we assume the "test bus" model for TAM design.
We assume that each of the B TAMs on the SOC is independent;
however, the cores on each TAM are tested in sequential order. This
can be implemented either by (i) multiplexing all the cores assigned
to a TAM as in Figure 5(a), or (ii) by testing one of the cores on the
TAM, while the other cores on the TAM are in Bypass mode as in
Figure
5(b). Furthermore, the core bypass may either be an internal
bypass within the wrapper or an external bypass. This paper does
not directly address the design of hierarchical TAMs. The SOC
hierarchy is flattened for the purpose of TAM design and hierarchical
cores are considered as being at the same level in test mode.
Figure 5. Test bus model of TAM design: (a) multiplexed cores, (b) cores with bypass (internal or external) on a test bus.
The problem that we examine in this section, that of minimizing
the system testing time by assigning cores to TAMs when TAM
widths are known, can be stated as follows.
Given N cores and B TAMs of test widths w_1, w_2, ..., w_B,
respectively, determine an assignment of cores to the
TAMs and a wrapper design for each core, such that the testing
time is minimized.
This problem can be shown to be NP-hard using the techniques
presented in [5]. However, for realistic SOCs the sizes of the problem
instances were found to be small and could be solved exactly
using an ILP formulation in execution times less than a second.
To model this problem, consider an SOC consisting of N cores
and B TAMs of widths w_1, w_2, ..., w_B, respectively. If Core i is assigned
to TAM j, let the time taken to test Core i be given by T_i(w_j)
clock cycles. The testing time T_i(w_j) is calculated as
$T_i(w_j) = (1 + \max\{s_i, s_o\}) \cdot p_i + \min\{s_i, s_o\}$, where p_i is the number of test
patterns for Core i and s_i (s_o) is the length of the longest wrapper
scan-in (scan-out) chain obtained from Design wrapper. We introduce
binary variables x_ij (where 1 <= i <= N and 1 <= j <= B),
which are used to determine the assignment of cores to TAMs in the
SOC. Let x_ij be a 0-1 variable defined as follows:
x_ij = 1 if Core i is assigned to TAM j, and x_ij = 0 otherwise.
The time needed to test all cores on TAM j is given
by $\sum_{i=1}^{N} x_{ij} \cdot T_i(w_j)$. Since all the TAMs can be used
simultaneously for testing, the system testing time equals
$\max_{1 \le j \le B} \{\sum_{i=1}^{N} x_{ij} \cdot T_i(w_j)\}$.
A mathematical programming model for this problem can be formulated
as follows.
Objective: Minimize $\max_{1 \le j \le B} \{\sum_{i=1}^{N} x_{ij} \cdot T_i(w_j)\}$,
subject to $\sum_{j=1}^{B} x_{ij} = 1$, 1 <= i <= N, i.e., every core is connected to exactly
one TAM.
Before we describe how a solution to PAW can be obtained, we
briefly describe ILP, and then present the ILP formulation based on
the above mathematical programming model to solve PAW . The
goal of ILP is to minimize a linear objective function on a set of
integer variables, while satisfying a set of linear constraints [22]. A
typical ILP model can be described as follows:
minimize: Ax
subject to: Bx <= C, such that x >= 0,
where A is an objective vector, B is a constraint matrix, C is a
column vector of constants, and x is a vector of integer variables.
Efficient ILP solvers are now readily available, both commercially
and in the public domain [2].
The minmax objective function of the mathematical programming
model for PAW can be easily linearized to obtain the following
ILP model.
Objective: Minimize T , subject to
1. $T \ge \sum_{i=1}^{N} x_{ij} \cdot T_i(w_j)$, 1 <= j <= B, i.e., T is the maximum
testing time on any TAM
2. $\sum_{j=1}^{B} x_{ij} = 1$, 1 <= i <= N, i.e., every core is assigned to exactly
one TAM
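The ILP model above can be sketched, for example, with the open-source PuLP/CBC toolchain (an assumption on our part; the paper itself refers to lpsolve). T_table below is the lookup table of core testing times T_i(w_j) precomputed with Design wrapper, and the toy instance is invented.

# Sketch of the PAW ILP using PuLP (not the authors' solver).
import pulp

def solve_paw(T_table):
    N, B = len(T_table), len(T_table[0])
    prob = pulp.LpProblem("PAW", pulp.LpMinimize)
    x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(B)] for i in range(N)]
    T = pulp.LpVariable("T", lowBound=0)
    prob += T                                       # objective: system testing time
    for j in range(B):                              # T bounds the time on every TAM
        prob += pulp.lpSum(T_table[i][j] * x[i][j] for i in range(N)) <= T
    for i in range(N):                              # each core on exactly one TAM
        prob += pulp.lpSum(x[i][j] for j in range(B)) == 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    assign = [next(j for j in range(B) if x[i][j].value() > 0.5) for i in range(N)]
    return pulp.value(T), assign

# Toy instance with 4 cores and 2 TAMs (testing times are invented):
print(solve_paw([[100, 60], [80, 50], [120, 70], [40, 30]]))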
We solved this simple ILP model to determine optimal assignments
of cores to TAMs for the SOCs introduced in Section 3. The
number of variables and constraints for this model (a measure of the
complexity of the problem) is given by NB and N + B,
respectively. The user time was less than a second in all cases. The
optimal assignments of cores to TAMs of given widths for SOC d695
are shown in Table 4. Note that the testing times shown are optimal
only for the given TAM widths; lower testing times can be achieved
if an optimal TAM width partition is chosen. For example, Table 5
in Section 6 shows that a testing time lower than 29451 cycles can be
achieved using two TAMs, if an optimal TAM width partition is cho-
sen. In Sections 6 and 7, we will address the problem of determining
optimal width partitions.
Table 4. Core assignment to TAMs for SOC d695 (TAM widths, TAM assignment, and testing time in clock cycles).
Lower bounds on system testing time. For an SOC with N
cores and B TAMs of widths w_1, ..., w_B, respectively, a lower
bound on the total testing time T is given by
$\max_i \min_j \{T_i(w_j)\}$. The testing time for Core i depends on the width
of the test bus to which it is assigned. Clearly, the testing time for
Core i is at least $\min_j \{T_i(w_j)\}$. Since the overall system testing
time is constrained by the core that has the largest test time, therefore
$T \ge \max_i \min_j \{T_i(w_j)\}$. Intuitively, this value
is the time needed to test the core that has the largest testing time
when assigned to the widest TAM. For example, for SOC d695 with two TAMs,
one of which is 16 bits wide, the lower bound on the testing
time is 6215 cycles; this corresponds to the testing time needed for
Core 5 if it is assigned to TAM 1.
A lower bound on system testing time that does not depend on the
given TAM widths can further be determined. This bound is related
to the length of the longest internal scan chain of each core. The
lower bound becomes tighter as we increase the number of TAMs.
From Theorem 1, we know that for a Core i, where Core i has sc_i
internal scan chains of lengths l_{i,1}, ..., l_{i,sc_i}, respectively, and
the test set for Core i has p_i test patterns, a lower bound on the testing
time is given by $(1 + \max_r\{l_{i,r}\}) \cdot p_i + \max_r\{l_{i,r}\}$. Therefore,
for an SOC with N cores, a lower bound on the system testing time
is given by $\max_i \{(1 + \max_r\{l_{i,r}\}) \cdot p_i + \max_r\{l_{i,r}\}\}$. Intuitively,
this means that the system testing time is lower bounded by the time
required to test the core with the largest testing time.
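The two lower bounds can be computed directly, as in the following sketch; the core data in the example is illustrative only.

# Lower bounds on system testing time discussed above (illustrative data).
def lb_given_widths(T_table):
    # max over cores of the testing time on that core's best (widest useful) TAM
    return max(min(row) for row in T_table)

def lb_width_independent(cores):
    # cores: list of (p_i, [l_i1, ...]); bound of the core with the largest testing time
    return max((1 + max(l)) * p + max(l) for p, l in cores)

print(lb_given_widths([[100, 60], [80, 50], [120, 70]]))
print(lb_width_independent([(10, [32, 8, 8]), (20, [16, 16])]))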
6 Optimal partitioning of TAM width
In this section, we address PPAW : the problem of determining
(i) optimal widths of TAMs, and (ii) optimal assignments of cores to
TAMs, in conjunction with wrapper design. This is a generalization
of the core assignment problem PAW . We describe how testing
times computed using the Design wrapper algorithm in Section 4
are used to design the TAM architecture.
We assume that the total system TAM width can be at most W .
From Theorem 1 in Section 4, we know that the width of each TAM
need not exceed the maximum value of upper bound kmax for any
core on the SOC. We denote this upper bound on the width of an
individual TAM as wmax . For TAMs wider than wmax , there is
no further decrease in testing time. Problem PPAW of minimizing
testing time by optimal allocation of width among the TAMs can
now be stated as follows.
Given an SOC having N cores and B TAMs of total
width W , determine a partition of W among the B TAMs, an
assignment of cores to TAMs, and a wrapper design for each
core, such that the total testing time is minimized.
This problem can be shown to be NP-hard using the techniques presented
in [5]. However, for many realistic SOCs, including p93791,
the problem instance size is reasonable and can be solved exactly
using an ILP formulation. This is also because the complexity of
our solutions is related to the number of cores on the SOC and the
numbers of their I/O ports and scan chains, and not the number of
transistors or nets on the chip. A mathematical programming model
for PPAW is shown below.
Objective: Minimize $\max_{1 \le j \le B} \{\sum_{i=1}^{N} x_{ij} \cdot T_i(w_j)\}$,
subject to
1. $\sum_{j=1}^{B} x_{ij} = 1$, 1 <= i <= N, i.e., every core is connected to
exactly one TAM
2. $\sum_{j=1}^{B} w_j = W$, i.e., the sum of all TAM widths is W
3. $w_j \le w_{max}$, 1 <= j <= B, i.e., each TAM is at most wmax bits
wide
The objective function and constraints of this mathematical programming
model must now be linearized in order to express them in
the form of an ILP model that can be solved by an ILP solver. We
first express T_i(w_j) as a sum: $T_i(w_j) = \sum_{k=1}^{w_{max}} \delta_{jk} \cdot T_i(k)$,
adding new binary indicator variables $\delta_{jk}$ (where 1 <= j <= B, 1 <= k <= wmax)
to the mathematical programming model, such that:
$\delta_{jk}$ = 1, if TAM j is k bits wide,
and 0, otherwise
In addition, the following constraints are included in the model:
1. $\sum_{k=1}^{w_{max}} k \cdot \delta_{jk} = w_j$, 1 <= j <= B, i.e., a TAM can have values of width
between 1 and wmax
2. $\sum_{k=1}^{w_{max}} \delta_{jk} = 1$, 1 <= j <= B, i.e., a TAM can have only one width
Intuitively, for every TAM j there is exactly one value of k for which
$\delta_{jk} = 1$; therefore, the new indicator variables determine the
width w_j of each TAM. The objective function now becomes
Minimize $\max_{1 \le j \le B} \{\sum_{i=1}^{N} \sum_{k=1}^{w_{max}} \delta_{jk} \cdot x_{ij} \cdot T_i(k)\}$. The testing time T_i(k) for
various values of TAM width k can be efficiently calculated using
the Design Wrapper algorithm as shown in Section 4, and stored in
the form of a look-up table for reference by the ILP solver.
Finally, the non-linear term $\delta_{jk} \cdot x_{ij}$ in the objective function can
be linearized by replacing it with the variable $y_{ijk}$ and the following
two constraints:
1. $y_{ijk} + 1 \ge x_{ij} + \delta_{jk}$
2. $2 \cdot y_{ijk} \le x_{ij} + \delta_{jk}$
This is explained as follows. Consider first the case when $x_{ij} = \delta_{jk} = 1$.
From Constraints 1 and 2, we have $y_{ijk} + 1 \ge x_{ij} + \delta_{jk}$ and $2y_{ijk} \le x_{ij} + \delta_{jk}$;
since $x_{ij} + \delta_{jk} = 2$, therefore, $y_{ijk}$ must equal 1. If instead $x_{ij} = 0$ or $\delta_{jk} = 0$,
Constraint 2 forces $y_{ijk} = 0$, so that $y_{ijk} = \delta_{jk} \cdot x_{ij}$ in all cases.
The new variables and constraints yield the following ILP formulation.
Objective: Minimize T, subject to
1. $T \ge \sum_{i=1}^{N} \sum_{k=1}^{w_{max}} y_{ijk} \cdot T_i(k)$, 1 <= j <= B
2. $\sum_{j=1}^{B} x_{ij} = 1$, 1 <= i <= N
3. $\sum_{k=1}^{w_{max}} \delta_{jk} = 1$, 1 <= j <= B
4. $\sum_{j=1}^{B} \sum_{k=1}^{w_{max}} k \cdot \delta_{jk} = W$
5. $y_{ijk} + 1 \ge x_{ij} + \delta_{jk}$, for all i, j, k
6. $2 \cdot y_{ijk} \le x_{ij} + \delta_{jk}$, for all i, j, k
The number of variables and constraints for these ILP models
grows with N, B and wmax (the $y_{ijk}$ variables and their linearization
constraints dominate). We
solved the ILP model for PPAW for several values of W and B.
Table
5 and Figure 6 present the values of testing time for SOC d695
obtained with two TAMs. The total TAM width partition among the
two TAMs is shown and we also compare the testing times obtained
with the testing times obtained in [5]. The testing time using the
new wrapper design is at least an order of magnitude less than the
time required in [5] for all cases. This was to be expected since an
inefficient de-serialization model was used in [5]. The reductions in
testing time diminish with increasing W . A pragmatic choice of W
for the system might therefore be the point where the system testing
time begins to level off. In Figure 6, this occurs at
Total TAM width W   Results in [5]: Partition / Testing time   Current wrapper/TAM co-optimization: Partition / Testing time / Execution time (s)
36                  4+32 / 2174501                             16+20 / 22246 / 11.0
44                  12+32 / 2123437                            10+34 / 20094 / 13.0
Table 5. Testing time for d695 for B = 2 (testing times in the second column as reported in [5]).
Figure 6. Testing time (1000 clock cycles) versus total TAM width (bits) for d695 for B = 2.
Table
6 presents the values of testing time obtained with three
TAMs. The testing times for are lower than the values obtained
in general. However, for W 14, the testing time
more than that for included in Table 6).
This is because for small values of W; a larger number of TAMs
makes the widths of individual TAMs very small. Once again, the
testing time begins to level off, this time at hence this is a
good choice for trading off TAM width with testing time.
Total TAM width W   TAM partition   Core assignment           Testing time   Execution time (s)
28                  4+8+16          (2,1,2,1,3,2,3,1,1,3)     24021          52.3
36                  4+16+16         (2,2,2,1,2,3,2,1,2,3)     19573          85.0
44                  4+18+22         (1,1,1,1,2,3,2,1,2,3)     16975          *
48                  4+18+26         (1,1,1,1,2,3,2,1,2,3)     16975          *
* lpsolve was halted after 180 minutes.
Table 6. Testing time for d695 obtained for B = 3.
Table
7 presents the system testing times for SOC p93791 obtained
using two TAMs. We halted the ILP solver after 1 hour for
each value of W and tabulated the best results obtained. This was
done to determine whether an efficient partition of TAM width and
the corresponding testing time can be obtained using the ILP model
within a reasonable execution time. In the next section, we present
the optimal testing times obtained for p93791 using a new enumerative
methodology, and show that the testing times obtained in Table
6 and Table 7 using the ILP model for PPAW are indeed either
optimal or close to optimal.
Total TAM width W   TAM partition   Testing time
28                  9+19            1119160
36                  9+27            924909
44                  9+35            873276
48                  9+39            835526
52                  9+43            807909
Table 7. Testing time for p93791 obtained with B = 2.
7 Enumerative TAM sizing
In Section 6, we showed that TAM optimization can be carried
out using an ILP model for the PPAW problem. However, ILP is
in itself an NP-hard problem, and execution times can get high for
large SOCs. A faster algorithm for TAM optimization that produces
optimal results in short execution times is clearly needed. In Section
5, it was observed that the execution time of the model for Problem
PAW was less than 1 second in all cases. We next demonstrate
how the short execution time of this ILP model can be exploited
to construct a series of PAW models that are solved to address the
PPAW problem.
The pseudocode for an enumerative algorithm for PPAW that explicitly
enumerates the unique partitions of W among the individual
TAMs is presented in Figure 7.
Procedure PPAW enumerate()
1. Let W = total TAM width;
2. Let B = number of TAMs;
3. While all unique partitions of W have not been enumerated
4.   For each TAM j, 1 <= j <= B
5.     Select the TAM width w_j of the next unique partition (w_1 + ... + w_B = W);
6.   Create an ILP model for PAW for the TAM widths w_1, ..., w_B, using Design wrapper;
7.   Determine the core assignment and testing time;
8. Record the TAM design for the minimum testing time
Figure
7. Algorithm for enumerative TAM design.
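A minimal Python rendering of PPAW enumerate is sketched below. For brevity it replaces the inner PAW ILP of Section 5 with a greedy assignment heuristic and uses an invented testing_time() stand-in for the Design wrapper lookup table, so it illustrates the enumeration structure rather than reproducing the authors' optimal results.

# Sketch of PPAW_enumerate: enumerate unique partitions of W among B TAMs.
def partitions(W, B, w_min=1):
    """Yield non-decreasing B-tuples of positive widths summing to W."""
    if B == 1:
        if W >= w_min:
            yield (W,)
        return
    for w in range(w_min, W // B + 1):
        for rest in partitions(W - w, B - 1, w):
            yield (w,) + rest

def assign_cores(times):
    """times[i][j]: time of core i on TAM j. Greedy stand-in for the PAW ILP."""
    loads = [0] * len(times[0])
    for row in sorted(times, key=min, reverse=True):
        j = min(range(len(loads)), key=lambda j: loads[j] + row[j])
        loads[j] += row[j]
    return max(loads)

def ppaw_enumerate(W, B, cores, testing_time):
    best = None
    for widths in partitions(W, B):
        times = [[testing_time(c, w) for w in widths] for c in cores]
        T = assign_cores(times)
        if best is None or T < best[0]:
            best = (T, widths)
    return best

# Toy example: testing time shrinks roughly with ceil(scan bits / width) per pattern.
cores = [(10, 100), (20, 60), (5, 200)]        # (patterns, total scan bits), invented
tt = lambda c, w: (1 + -(-c[1] // w)) * c[0]   # ceil(bits / w) scan cycles per pattern
print(ppaw_enumerate(16, 2, cores, tt))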
The ILP models generated for each value of W in line 6 of
PPAW enumerate are solved and the TAM width partition and core
assignment delivering the best testing time are recorded. The solution
obtained using PPAW enumerate is always optimal, because
we generate all unique TAM width partitions, and then choose the
solution with the lowest cost. Since lines 6 and 7 each take less
than a second to execute, the execution time for PPAW enumerate
is at most 2 pB (W ) seconds, where pB (W ) is the number of partitions
of W among B TAMs enumerated in line 3. The problem of
determining the number of partitions pB (W ) for a given choice of
B TAMs can be addressed using partition theory in combinatorial
mathematics [15]. In [15], $p_B(W)$ is shown to be approximately
$W^{B-1} / (B!(B-1)!)$ for $W \to \infty$. For B = 2,
$p_2(W) = \lfloor W/2 \rfloor$, since there are
only $\lfloor W/2 \rfloor$ unique ways of dividing an integer W into two smaller
integers w1 and w2. Thus PPAW enumerate obtains the optimal
solution for the PPAW problem for B = 2 and W = 64 in less
than 64 seconds. For B = 3, the number of partitions works out
to $p_3(W) \approx W^2/12$, rounded to the nearest integer. From the above formula, the
value of $p_3(W)$ for W = 64 is found to be 341. Therefore,
the execution time of PPAW enumerate for B = 3 and W = 64 is
bounded by 682 seconds or 11.6 minutes. This execution time is
clearly reasonable, even for large W . In our experiments, we used a
Sun Ultra 80, which solved the PA models in well under a second of
execution time. The time taken for PPAW enumerate was therefore
significantly lower than the upper bound of 2 pB (W ) seconds even
for large values of W .
We used PPAW enumerate to obtain the optimal TAM width assignment
and minimal testing time for
d695 and p93791. Results for SOC d695 are presented in Table 8.
While the testing time for B = 3 is always less than the testing time
for B = 2, the difference between the two
widens for larger W. This can be explained as follows. For smaller
values of W, each individual TAM for B = 3 is very narrow, and
the testing time on each individual TAM increases sharply, as was
observed earlier in Figure 3.
Total TAM width W   B = 2: partition, testing time, exec. time (s)   B = 3: partition, testing time, exec. time (s)
44                  10+34, 20094, 2                                  4+18+22, 16975
Table 8. Results for SOC d695 using PPAW enumerate.
Table
9 presents optimal results for enumerative TAM optimization
for the p93791 SOC. Comparing these results with those presented
in Table 7, we note that the results in Table 7 are indeed close
to optimal. For example, for the testing time presented in
Table
7 is only 5% higher than optimal. Note that for both SOCs,
the execution time for PAW is under 1 second. Hence similar execution
times for PPAW enumerate are obtained for SOCs d695 and
p93791. These execution times are significantly lower than those in
Tables
5 and 6.
Total TAM width W   B = 2: partition, testing time, exec. time (s)   B = 3: partition, testing time, exec. time (s)
28                  5+23, 1031200, 1                                 2+3+23, 1030920, 13
44                  21+23, 711256, 2                                 5+16+23, 659856, 33
48                  23+25, 634488, 2                                 9+16+23, 602613, 42
Table 9. Results for SOC p93791 using PPAW enumerate.
We compared the optimal testing times presented in Tables 8
and 9 with the testing times obtained using an equal partition of
W among the B TAMs. The testing time using an optimal partition
of W was significantly lower than that obtained using an equal
partition for all values of W . For example, for
a partition of (w1 ; testing time of
clock cycles, which is an increase of 28.6% over the testing
time of 475598 clock cycles obtained using an optimal partition of
The execution time of PPAW enumerate is smaller than that of
the ILP model in Section 6 because the number of enumerations for
two and three TAMs is reasonable. However, when TAM optimization
is carried out for a larger number of TAMs that have a larger
number of partitions of W , the ILP model for PPAW is likely to be
more efficient in terms of execution time. In addition, the ILP model
presented in Section 6 is likely to be more efficient when constraints
arising from place-and-route and power issues are included in TAM
optimization [4].
8 General problem of wrapper/TAM co-optimization
In the previous sections, we presented a series of problems in test
wrapper and TAM design, each of which was a generalized version
of the problem preceding it. In this section, we present PNPAW , the
more general problem of wrapper/TAM optimization that the problems
of the preceding sections lead up to. We also show how solutions
to the previous problems can be used to formulate a solution
for this general problem.
The general problem can be stated as follows.
Given an SOC having N cores and a total TAM
width W , determine the number of TAMs, a partition of W
among the TAMs, an assignment of cores to TAMs, and a
wrapper design for each core, such that the total testing time is
minimized.
We use the method of restriction to prove that PNPAW is NP-
hard. We first define a new Problem PNPAW 1
, which consists of
only those instances of PNPAW for which (i) W = 2, and (ii) all
cores on the SOC have a single internal scan chain and no functional
terminals. Hence, each core will have the same testing time on a
1-bit TAM as on a 2-bit TAM. An optimal solution to PNPAW 1
will
therefore always result in two TAMs of width one bit each. Problem
PNPAW1 reduces to that of partitioning the set C of cores on
the SOC into two subsets C1 and C - C1, such that each subset
is assigned to a separate 1-bit TAM, and the difference between the
sum of the testing times of the cores in C1 (on the first 1-bit TAM) and
the sum of the testing times of the cores in C - C1 (on the second 1-bit TAM) is
minimized. Formally, the optimization cost function for PNPAW1
can be written as:
Objective: Minimize $|\sum_{c \in C_1} T(c) - \sum_{c \in C - C_1} T(c)|$, where T(c) is the testing
time of core c on a 1-bit TAM.
Next, consider the Partition problem [7], whose optimization
variant can be stated as follows.
Partition: Given a finite set A and a size $s(a) \in Z^+$ for each
element $a \in A$, determine a partition of A into two subsets A1
and A - A1, such that $|\sum_{a \in A_1} s(a) - \sum_{a \in A - A_1} s(a)|$ is minimized.
That Problem PNPAW1 is equivalent to Partition can be established
by the following four mappings: (i) C and A, (ii) C1 and A1,
(iii) C - C1 and A - A1, and (iv) T(c) and s(a). Since Partition is
known to be NP-hard [7], PNPAW 1
and PNPAW must also be
NP-hard.
To solve PNPAW , we enumerate solutions for PPAW over several
values of B. For each value of W , the optimal number of TAMs,
TAM width partition, core assignment, and wrapper designs for the
cores are obtained. The solutions to PPAW for d695 for values of
B ranging from 2 to 8 are illustrated in Figures 8 (a) and 8 (b) for
W values of 12, and 16 bits, respectively. In each Figure, we observe
that as B is increased from 2, the testing time decreases until a
minimum value is reached at a particular value of B, after which the
testing time stops decreasing and starts increasing as B is increased
further. This is because for larger B, the width per TAM is small
and testing time on each TAM increases significantly.
Figure 8. Testing time (1000 clock cycles) versus number of TAMs for d695, obtained with increasing values of B: (a) W = 12 bits, (b) W = 16 bits.
We next present a conjecture that formalizes the observation
made in Figures 8(a) and 8(b).
Conjecture 1 Let T(S, W, B) denote the optimal testing time for an SOC S with B TAMs
and a total TAM width of W. If T(S, W, B) >= T(S, W, B-1), then T(S, W, B') >= T(S, W, B-1) for all B' > B.
We conjecture that during the execution of PNPAW enumerate, if
at a certain value of B, the testing time is greater than or equal to the
testing time at the previous value of B for the same total TAM width
W , then the enumeration procedure can be halted and the optimal
value of B recorded. Therefore, Conjecture 1 can be used to prune
the search space for the optimal wrapper/TAM design. Since the
execution time of PNPAW enumerate is particularly high for large
values of B, we can achieve significant speed-ups in TAM optimization
by halting the enumeration as soon as the minimum value of T
is reached.
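The pruned enumeration can be sketched as follows; solve_PPAW stands for any oracle for PPAW (for example, the ILP model of Section 6) and is a hypothetical interface, not the paper's implementation.

def pnpaw_enumerate(solve_PPAW, W, B_max):
    # enumerate B and stop as soon as the testing time stops improving,
    # as suggested by Conjecture 1
    best_B, best_time = None, None
    for B in range(2, B_max + 1):
        t = solve_PPAW(B, W)
        if best_time is not None and t >= best_time:
            break                      # no smaller testing time expected for larger B
        best_B, best_time = B, t
    return best_B, best_time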
Based on Conjecture 1, we executed PNPAW enumerate for several
values of W . In Table 10, we present the best testing times
obtained for d695 for the values of W . For each value of W , the
number of TAMs, width partition, testing time, and core assignment
providing the minimum testing time is shown.
TAM width W | Optimum number of TAMs | Optimal width partition | Optimal core assignment | Optimum testing time
28 | 5 | 1+2+8+8+9 | (4,2,4,2,3,5,4,1,5,4) | 24197

Table 10. Results obtained for d695 for several values of W.
9 Conclusion
We have investigated the problem of test wrapper/TAM co-optimization
for SOCs, based on the test bus model of TAM design.
In particular, we have formally defined the problem of determining
the number of TAMs, a partition of the total TAM width among the
TAMs, an assignment of cores to TAMs, and a wrapper design for
each core, such that SOC testing time is minimized. To address
this problem, we have formulated three incremental problems in test
wrapper and TAM optimization that serve as stepping-stones to the
more general problem stated above. We have proposed an efficient
heuristic algorithm based on BFD for the wrapper design problem
PW that minimizes testing time and TAM width. For PAW , the
problem of determining core assignments and wrapper designs, we
have formulated an ILP model that results in optimal solutions in
short execution times. We have formulated an ILP model to solve
PPAW , the problem of TAM width partitioning that PAW leads up
to. This ILP model was solved to obtain optimal TAM designs
for reasonably-sized problem instances. We have also presented a
new enumerative approach for PPAW that offers significant reductions
in the execution time. Finally, we have defined a new wrap-
per/TAM design problem, PNPAW , in which the number of TAMs
to be designed must be determined. PNPAW is the final step in
our progression of incremental wrapper/TAM design problems, and
it includes PW , PAW and PPAW . An enumerative algorithm to
solve PNPAW has been proposed, in which the search space can
be pruned significantly when no further improvement to testing time
would result.
We have applied our TAM optimization models to a realistic example
SOC as well as to an industrial SOC; the experimental results
demonstrate the feasibility of the proposed techniques. To the
best of our knowledge, this is the first reported attempt at integrated
wrapper/TAM co-optimization that has been applied to an industrial
SOC.
In future work, we intend to extend our TAM optimization models
to include several other TAM configurations, including daisy-chained
cores on TAMs [1] and "forked and merged" TAMs [5].
We intend to extend our models, such that multiple wrappers on the
same TAM are active in the test data transfer mode at the same time;
this will allow us to address the problems of both testing hierarchical
cores, as well as Extest. While ILP is a useful optimization
tool for reasonably-sized problem instances, execution times can increase
significantly for complex SOCs and large values of B. This
is also true of our enumerative approach to Problems PPAW and
PNPAW . We are in the process of designing heuristic algorithms
for each of the problems formulated in this paper that can efficiently
address wrapper/TAM co-optimization for large TAM widths as well
as large numbers of TAMs. Furthermore, we plan to add constraints
related to power dissipation, routing complexity and layout area to
our TAM optimization models.
Acknowledgements
The authors thank Harry van Herten and Erwin Waterlander for
their help with providing data for the Philips SOC p93791, Henk
Hollmann and Wil Schilders for their help with partition theory, and
Jan Korst for his help with the NP-hardness proof for PNPAW . We
also thank Sandeep Koranne and Harald Vranken for their constructive
review comments on earlier versions of this paper.
--R
Scan chain design for test time reduction in core-based ICs
lpsolve 3.0
Design of system-on-a-chip test access architectures using integer linear programming
Design of system-on-a-chip test access architectures under place-and-route and power constraints
Optimal test access architectures for system-on-a- chip
Computers and Intractability: A Guide to the Theory of NP-Completeness
A fast and low cost testing technique for core-based system-on-chip
Testing re-usable IP: A case study
IEEE P1500 Standard for Embedded Core Test.
Direct access test scheme - Design of block and core cells for embedded ASICs
Test wrapper and test access mechanism co-optimization for system-on-chip
An integrated system-on-chip test framework
A course in combinatorics
A structured and scalable mechanism for test access to embedded reusable cores.
Wrapper design for embedded core test.
On using IEEE P1500 SECT for test plug-n-play
An ILP formulation to optimize test access mechanism in system-on-chip testing
Using partial isolation rings to test core-based designs
A structured test re-use methodology for core-based system chips
Model Building in Mathematical Programming.
Testing embedded-core-based system chips
--TR
--CTR
Feng Jianhua , Long Jieyi , Xu Wenhua , Ye Hongfei, An improved test access mechanism structure and optimization technique in system-on-chip, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Qiang Xu , Nicola Nicolici , Krishnendu Chakrabarty, Multi-frequency wrapper design and optimization for embedded cores under average power constraints, Proceedings of the 42nd annual conference on Design automation, June 13-17, 2005, San Diego, California, USA
Tomokazu Yoneda , Kimihiko Masuda , Hideo Fujiwara, Power-constrained test scheduling for multi-clock domain SoCs, Proceedings of the conference on Design, automation and test in Europe: Proceedings, March 06-10, 2006, Munich, Germany
Tomokazu Yoneda , Masahiro Imanishi , Hideo Fujiwara, Interactive presentation: An SoC test scheduling algorithm using reconfigurable union wrappers, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
test scheduling with reconfigurable core wrappers, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.14 n.3, p.305-309, March 2006
Julien Pouget , Erik Larsson , Zebo Peng, Multiple-constraint driven system-on-chip test time optimization, Journal of Electronic Testing: Theory and Applications, v.21 n.6, p.599-611, December 2005
Anuja Sehgal , Sandeep Kumar Goel , Erik Jan Marinissen , Krishnendu Chakrabarty, Hierarchy-aware and area-efficient test infrastructure design for core-based system chips, Proceedings of the conference on Design, automation and test in Europe: Proceedings, March 06-10, 2006, Munich, Germany
Sudarshan Bahukudumbi , Krishnendu Chakrabarty, Wafer-level modular testing of core-based SoCs, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.15 n.10, p.1144-1154, October 2007
Anuja Sehgal , Krishnendu Chakrabarty, Efficient Modular Testing of SOCs Using Dual-Speed TAM Architectures, Proceedings of the conference on Design, automation and test in Europe, p.10422, February 16-20, 2004
Anuja Sehgal , Vikram Iyengar , Mark D. Krasniewski , Krishnendu Chakrabarty, Test cost reduction for SOCs using virtual TAMs and lagrange multipliers, Proceedings of the 40th conference on Design automation, June 02-06, 2003, Anaheim, CA, USA
Zhanglei Wang , Krishnendu Chakrabarty , Seongmoon Wang, SoC testing using LFSR reseeding, and scan-slice-based TAM optimization and test scheduling, Proceedings of the conference on Design, automation and test in Europe, April 16-20, 2007, Nice, France
Sandeep Kumar Goel , Erik Jan Marinissen, Layout-Driven SOC Test Architecture Design for Test Time and Wire Length Minimization, Proceedings of the conference on Design, Automation and Test in Europe, p.10738, March 03-07,
Sandeep Kumar Goel , Kuoshu Chiu , Erik Jan Marinissen , Toan Nguyen , Steven Oostdijk, Test Infrastructure Design for the Nexperia" Home Platform PNX8550 System Chip, Proceedings of the conference on Design, automation and test in Europe, p.30108, February 16-20, 2004
Jan Marinissen , Rohit Kapur , Maurice Lousberg , Teresa McLaurin , Mike Ricchetti , Yervant Zorian, On IEEE P1500's Standard for Embedded Core Test, Journal of Electronic Testing: Theory and Applications, v.18 n.4-5, p.365-383, August-October 2002
Anuja Sehgal , Sule Ozev , Krishnendu Chakrabarty, TAM Optimization for Mixed-Signal SOCs using Analog Test Wrappers, Proceedings of the IEEE/ACM international conference on Computer-aided design, p.95, November 09-13,
Sandeep Kumar Goel , Erik Jan Marinissen, A Test Time Reduction Algorithm for Test Architecture Design for Core-Based System Chips, Journal of Electronic Testing: Theory and Applications, v.19 n.4, p.425-435, August
Qiang Xu , Nicola Nicolici, Delay Fault Testing of Core-Based Systems-on-a-Chip, Proceedings of the conference on Design, Automation and Test in Europe, p.10744, March 03-07,
Sandeep Kumar Goel , Erik Jan Marinissen, On-Chip Test Infrastructure Design for Optimal Multi-Site Testing of System Chips, Proceedings of the conference on Design, Automation and Test in Europe, p.44-49, March 07-11, 2005
Vikram Iyengar , Krishnendu Chakrabarty , Erik Jan Marinissen, Wrapper/TAM co-optimization, constraint-driven test scheduling, and tester data volume reduction for SOCs, Proceedings of the 39th conference on Design automation, June 10-14, 2002, New Orleans, Louisiana, USA
A. Sehgal , K. Chakrabarty, Test planning for the effective utilization of port-scalable testers for heterogeneous core-based SOCs, Proceedings of the 2005 IEEE/ACM International conference on Computer-aided design, p.88-93, November 06-10, 2005, San Jose, CA
Qiang Xu , Nicola Nicolici, Modular and rapid testing of SOCs with unwrapped logic blocks, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.13 n.11, p.1275-1285, November 2005
Vikram Iyengar , Krishnendu Chakrabarty , Erik Jan Marinissen, Test Access Mechanism Optimization, Test Scheduling, and Tester Data Volume Reduction for System-on-Chip, IEEE Transactions on Computers, v.52 n.12, p.1619-1632, December
Sandeep Kumar Goel , Erik Jan Marinissen, SOC test architecture design for efficient utilization of test bandwidth, ACM Transactions on Design Automation of Electronic Systems (TODAES), v.8 n.4, p.399-429, October
Anuja Sehgal , Fang Liu , Sule Ozev , Krishnendu Chakrabarty, Test Planning for Mixed-Signal SOCs with Wrapped Analog Cores, Proceedings of the conference on Design, Automation and Test in Europe, p.50-55, March 07-11, 2005
Matthew W. Heath , Wayne P. Burleson , Ian G. Harris, Synchro-Tokens: Eliminating Nondeterminism to Enable Chip-Level Test of Globally-Asynchronous Locally-Synchronous SoC's, Proceedings of the conference on Design, automation and test in Europe, p.10410, February 16-20, 2004
Zahra S. Ebadi , Alireza N. Avanaki , Resve Saleh , Andre Ivanov, Design and implementation of reconfigurable and flexible test access mechanism for system-on-chip, Integration, the VLSI Journal, v.40 n.2, p.149-160, February, 2007
Jan Marinissen, The Role of Test Protocols in Automated Test Generation for Embedded-Core-Based System ICs, Journal of Electronic Testing: Theory and Applications, v.18 n.4-5, p.435-454, August-October 2002
Anuja Sehgal , Sule Ozev , Krishnendu Chakrabarty, Test infrastructure design for mixed-signal SOCs with wrapped analog cores, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.14 n.3, p.292-304, March 2006 | embedded core testing;testing time;integer linear programming;test access mechanism TAM;test wrapper |
609176 | Retraction Approach to CPS Transform. | We study the continuation passing style (CPS) transform and its generalization, the computational transform, in which the notion of computation is generalized from continuation passing to an arbitrary one. To establish a relation between direct style and continuation passing style interpretation of sequential call-by-value programs, we prove the Retraction Theorem which says that a lambda term can be recovered from its CPS form via a λ-definable retraction. The Retraction Theorem is proved in the logic of the computational lambda calculus for the simply typable terms. | Introduction
The notions of a continuation and a continuation passing style (CPS) transform
have been introduced by a number of authors (see [Rey93] for a historical
overview). The main motivation for the independent developments of
these concepts seemed to be twofold: explaining the behavior of imperative
features in functional languages, and compilation of programs with higher
order procedures. Further research led to development of CPS denotational
semantics [SW74] (see also [Sto77]), and later categorical semantics of computations
[Mog89], as well as compilers based on the CPS transform [Ste78]
(see also [App92]). In both kinds of applications one of the central goals of
This research was supported in part by: "Types and Algorithms", Office of Naval
Research (N00014-93-1-1015), "Computational efficiency of optimal reduction in lambda
calculus", National Science Foundation (CDA-9504288), and "Logic, Complexity, and
Programming Languages", National Science Foundation (CCR-9216185).
the research has been to establish a relationship between original terms and
their images under the transform.
In this work we view the CPS transform as a formalization of the (contin-
uation passing style) denotational semantics of a call-by-value programming
language in the βη lambda calculus λ_n.
To model call-by-value evaluation in a programming language we choose Moggi's [Mog88] computational lambda calculus, λ_c, for two reasons: 1) the logic of λ_c is sound for call-by-value reasoning, and 2) the logic of λ_c is
complete for the class of models (computational lambda models [Mog88]) in
which most commonly used computational effects can be expressed.
One way of asserting the correctness of the CPS transform, as an interpretation
of λ_c in λ_n, is the equational correspondence result due to Sabry
and Felleisen [SF92].
Theorem 1.1 (Sabry-Felleisen). For any two lambda terms M and N, λ_c ⊢ M = N if and only if λ_n ⊢ M̄ = N̄, where M̄ and N̄ denote the CPS transforms of M and N.
The left-to-right implication in this theorem says that the CPS transform
preserves equality, and the right-to-left implication says that the transform
also preserves distinctions. Thus, the transform gives an accurate picture of
λ_c-equivalence in λ_n.
To formalize the problem we are trying to solve we observe that the
left-to-right implication of the theorem also says that the CPS transform
defines a function, T, mapping the λ_c-equivalence classes of lambda terms to the λ_n-equivalence classes of lambda terms. The right-to-left implication can be understood as saying that T is injective, and therefore has a left inverse T^{-1}. We ask the question whether the function T, or its inverse T^{-1}, is definable. More precisely,
• is there a lambda term P, such that λ_n ⊢ (P M) = M̄ for every term M?
• is there a lambda term R, such that λ_c ⊢ (R M̄) = M for every term M?
An elementary argument given in [MR] shows that the answer to the first
question must be "no". In this work we give an affirmative answer to the
second question. More precisely we prove the following theorem:
Theorem 1.2 (Retraction, for the CPS transform). For any simple type σ there is a lambda term R_σ, such that for all closed lambda terms M of type σ,
λ_c ⊢ (R_σ M̄) = M.
A version of the above Retraction Theorem was proven by Meyer and Wand [MW85], where the conclusion of the theorem holds in the logic of λ_n. However, as the authors themselves have pointed out to us, their result is misleading. We are interested in the behavior of a term M under a call-by-value evaluation, and λ_n is not sound for call-by-value reasoning in the presence of any computational effects.
An interesting point about the CPS transform, viewed as an interpretation of call-by-value programs, is that not only can it interpret pure functional programs, but it can also be extended to interpret programs with
control operators such as call/cc and abort. Different extensions of a
functional language with "impure" features can also be given denotational
semantics using a similar transform. For example, an interpretation of programs
in a language with mutable store can be given using the state passing
style (SPS) transform. As shown by Moggi [Mog88], a number of such computational
effects can be described by the notion of a monad, and the CPS
and the SPS transforms can be generalized to, what we call, the computational
transform [Wad90, SW96].
The equational correspondence for the computational transform holds
as well [SW96], and it is natural to ask whether the Retraction Theorem
(Theorem 1.2) generalizes. However, the computational transform maps
lambda terms to the terms of the "monadic metalanguage", λ_ml [Mog91]. The language of λ_ml is extended with new constructs that the logic of the computational lambda calculus has no axioms for, so the question of whether there is a lambda term R (even in the language of λ_ml) such that R applied to the computational transform of M is provably equal to M, is ill formed. In order to study the Retraction Theorem in an abstract setting that can be applied to other transforms, as well as to the CPS transform, we define a modified computational transform, T^Π, mapping lambda terms to lambda terms extended with two constants E and R that satisfy the axiom
(r-e): (R (E x)) = x.
The modified computational transform satisfies the equational correspondence result for the closed terms, and we prove the Retraction Theorem in the logic of λ_c extended with the axiom (r-e).
Theorem 1.3 (Retraction, for T^Π). For any simple type σ there is a term R_σ, such that for all closed lambda terms M of type σ,
λ_c+(r-e) ⊢ (R_σ M^Π) = M,
where M^Π stands for the modified computational transform of M.
The proof of the above theorem consists of defining interpretations of
types and terms, as well as an "-relation between the interpretations, and
proving that if a term M has a type σ, then the meaning of M and the meaning of σ are related by the "-relation. This framework is in many
ways similar to, and was inspired by, the type inference models developed
in [Mit88].
Even though the transforms of interest, namely the CPS transform and
the SPS transform, are not special cases of the modified computational trans-
form, we benefit from studying the abstract transform in that we obtain a
proof that does not depend on details of a particular transform, and can
be applied, by modifying definitions appropriately, to either the CPS or the
SPS transform. Hence, we obtain the Retraction Theorems for the CPS and
the SPS transforms.
The aforementioned results apply only to simply typed closed terms. To
extend the applicability of these results we can proceed in two directions: we
can extend the computational lambda calculus with new language constructs
and axioms that define equational behavior of the new terms, and we can
extend the type system so that our results apply to a larger class of terms.
Our results can be easily extended to a calculus, extending λ_c, with a datatype such as natural numbers and primitive operators on natural numbers. A more important class of extensions consists of those extensions which introduce a computational effect to λ_c. We have been able to extend the Retraction Theorem to λ_c extended with a divergent element. However, we stop short of proving the Retraction Theorem for λ_c terms extended with
recursion.
In an attempt to prove the Retraction Theorem for all (untyped) closed
terms, we extend the type system with recursive types and prove the following
result. Call a term F a total function if for any value V , F (V ) is
c -equivalent to a value.
Theorem 1.4. Assume terms e and r exist such that e and r are total
functions satisfying (in - c +(r-e))
(R (y (e x))))
Then for every term M ,
(R M \Pi
Analogous theorems also hold for the CPS and the SPS transforms. The
assumption of the above theorem is quite strong, and it remains to be seen
whether such terms e and r exist. One should also investigate whether such
elements exist in models that would allow interesting applications of the
theorem.
We assume the reader is familiar with elementary concepts of a lambda
calculus. For details one is referred to [Bar84]. In this section we will
provide concise definitions in order to disambiguate our notation.
2.1 Lambda calculus
Lambda terms are terms formed over an infinite set of variables by lambda
abstraction and application. We will use a number of standard conventions
when writing lambda terms, such as that application associates to the left,
and in general, use parentheses freely to make terms easier to read. We will
write (let x=M in N) to abbreviate the term ((λx.N) M), and M ∘ N for the term λx.(M (N x)). For the most part (but not exclusively) we use letters
M , N , P , etc. to range over arbitrary lambda terms, and letters U , V
and W to range over values, that is lambda terms that are either variables,
lambda abstractions or constants. Lower-case letters, x, y, z etc. will be
used for variables.
We study provable equality between untyped lambda terms. If λ is a set of axioms, we write λ ⊢ M = N if the equation can be derived using the rules of lambda congruence from the axioms in λ.
Table 1: Axioms of λ_c.
For the most part of this paper we consider equalities provable in Moggi's λ_c (Table 1), possibly extended with additional axioms for constants. In particular we will use constants E and R satisfying the axiom
(r-e): (R (E x)) = x.
2.2 Typing system
We consider a type system for assigning types to untyped lambda terms. Simple types are defined over a single base type, i.e. the base type is a type, and σ→τ is a type whenever σ and τ are types. The type inference system consists of a set of rules (given in Table 2) for deriving sequents of the form Γ ▷ M : σ, where M : σ is called a typing assertion and Γ is a typing hypothesis, i.e. a set of typing assertions of the form x_i : σ_i, where we always assume no variable x_i occurs more than once in Γ.
(var) Γ, x: σ ▷ x: σ
(app) from Γ ▷ M : σ→τ and Γ ▷ N : σ, infer Γ ▷ (M N) : τ
(abs) from Γ, x: σ ▷ M : τ, infer Γ ▷ λx.M : σ→τ

Table 2: Type inference rules for simple types.
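For concreteness, the three rules can be read as a checking algorithm. The sketch below is illustrative only and, unlike the Curry-style system above, annotates each abstraction with its domain type so that checking needs no search; the term and type representations are our own conventions.

def infer(gamma, term):
    # types: 'base' or ('arrow', s, t); terms: ('var', x), ('app', m, n),
    # ('abs', x, s, m) with the abstraction annotated by its domain type s
    tag = term[0]
    if tag == 'var':                      # (var)
        return gamma[term[1]]
    if tag == 'app':                      # (app)
        f, a = infer(gamma, term[1]), infer(gamma, term[2])
        assert f[0] == 'arrow' and f[1] == a, "ill-typed application"
        return f[2]
    if tag == 'abs':                      # (abs)
        _, x, s, m = term
        return ('arrow', s, infer({**gamma, x: s}, m))

# example: the identity at type base -> base
print(infer({}, ('abs', 'x', 'base', ('var', 'x'))))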
2.3 Transforms
In this work we concentrate on three transforms mapping lambda terms to lambda terms. First we study the modified computational transform, T^Π, mapping pure lambda terms to lambda terms extended with two constants E and R (see Table 3). The transform T^Π can be viewed as an abstract transform which captures, for our purposes, important properties of the CPS and the SPS transforms. Namely, in the transform M^Π of a term M, the order of evaluation (left-to-right and call-by-value) is made explicit. However, the additional structure that makes, in particular, the CPS transform attractive to compiler designers, is not reflected in the definition of T^Π.

Table 3: The modified computational transform, T^Π.
The next transform we study is the call-by-value version of the Fischer-Reynolds CPS transform. The definition we use (as well as the overline notation M̄ for the CPS transform of M) is taken from [Plo75] (see Table 4).
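For readers who prefer executable notation, the call-by-value CPS transform in the style of [Plo75] can be sketched as follows; the tuple representation of terms and the fresh-name generator are our own conventions and not the paper's.

import itertools
_fresh = itertools.count()

def fresh(prefix):
    return f'{prefix}{next(_fresh)}'

def cps(term):
    k = fresh('k')
    tag = term[0]
    if tag == 'var':                      # x* = lam k. k x
        return ('lam', k, ('app', ('var', k), term))
    if tag == 'lam':                      # (lam x. M)* = lam k. k (lam x. M*)
        _, x, body = term
        return ('lam', k, ('app', ('var', k), ('lam', x, cps(body))))
    if tag == 'app':                      # (M N)* = lam k. M* (lam m. N* (lam n. m n k))
        _, mt, nt = term
        m, n = fresh('m'), fresh('n')
        return ('lam', k,
                ('app', cps(mt),
                 ('lam', m, ('app', cps(nt),
                  ('lam', n, ('app', ('app', ('var', m), ('var', n)), ('var', k)))))))

# example: the CPS form of the identity applied to itself
print(cps(('app', ('lam', 'x', ('var', 'x')), ('lam', 'y', ('var', 'y')))))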
Table 4: The CPS transform.

Analogous to the CPS transform, which is used to give a denotational semantics of programs with control operators, the state passing style (SPS) transform is used to give a denotational semantics of programs with mutable store. In the definition, given in Table 5, we used the pairing constructs as abbreviations. Namely, (let ⟨x1, x2⟩ = M in N) abbreviates the expression
(let x = M in (let x1 = π1(x) in (let x2 = π2(x) in N))).

Table 5: The SPS transform.
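The state passing idea behind the SPS transform can be sketched in the same spirit: a computation maps a store to a value/store pair. The combinators below are illustrative only; read and write are hypothetical store operations, not constructs of the paper.

def unit(v):                 # inject a value: the store is passed through
    return lambda s: (v, s)

def bind(m, f):              # sequence two computations left to right
    def comp(s):
        v, s1 = m(s)
        return f(v)(s1)
    return comp

def read(loc):
    return lambda s: (s[loc], s)

def write(loc, v):
    return lambda s: (None, {**s, loc: v})

# (let x = read 'a' in write 'a' (x + 1)), run in an initial store
prog = bind(read('a'), lambda x: write('a', x + 1))
print(prog({'a': 41}))       # (None, {'a': 42})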
Even though we give untyped definitions of the transforms, we believe the
transforms should be understood in the context of a typed language. This
view is supported by the monadic framework, developed by Moggi [Mog88],
in which programs are interpreted as "computations". We leave out the
details of Moggi's monadic interpretation, as well as the definitions of the
typed transforms, since they are not central to our development, but rather
sketch the intuitive picture to help motivate some of our definitions.
We think of a transform mapping terms of type oe (intuitively programs)
to terms of type T (oe 0 ) (intuitively computations), where T is a unary type
constructor (that depends on the particular transform), and oe 0 is defined
inductively using T to be -
0). This can be
made precise by defining the typed version of the transform mapping typing
sequents to typing sequents.
3 Retraction Theorem
In this section we state and prove the Retraction Theorem for the abstract
transform, T \Pi , as well as, for the CPS and the SPS transforms. While one
might not find the transform T \Pi interesting in itself, for us it serves the
purpose. More precisely, we develop a framework which enables us to prove
the Retraction Theorem for the transform T \Pi , and is free of the details specific
to the CPS and the SPS transforms. Nevertheless, the framework can
be effortlessly modified to prove the Retraction Theorem for each of these
transforms. Thus we believe that focusing on the more abstract transform,
improves the clarity of our presentation.
3.1 Retractions
The Retraction Theorem asserts the definability of the inverse of the CPS
transform (as well as other transforms of interest). We now give a construction
of the type-indexed family of lambda terms that define the retraction.
Note that we give the following definitions using terms E and R, but the
definitions should be understood as parameterized by these terms. That is,
the inverse of the modified computational transform will be defined using
constants E and R, and in the definition of the inverse of the CPS transform,
these constants will be replaced by the terms E K and R K . Likewise for the
SPS transform, terms E S and R S will be used.
If we think of transforms in the context of a typed language, mapping
terms (of type oe) to terms representing computations (of type T (oe 0 )), in-
tuitively, we can understand the pairs of terms defined below as retraction-
embedding pairs between types oe and T (oe 0 ). One can also formally define
a notion of a type oe being a retract of a type - , and in the sense of such
definition, given below, we exhibit terms R oe and E oe that form a retraction-
embedding pair between types oe and T (oe 0 ). Moreover, we will show that
(suitable versions of) terms R oe define the required inverses of the transforms
we study.
Definition 3.1. A type σ is said to be a retract of a type τ if there is a pair of lambda terms R_{σ,τ} and E_{σ,τ}, of types τ→σ and σ→τ respectively, such that (R_{σ,τ} (E_{σ,τ} x)) = x.
Definition 3.2. For each simple type σ, define terms e_σ and r_σ inductively on the structure of σ as follows: at the base type both e and r are the identity λx.x, and at higher types we have
e_{σ→τ} = λf. λx. (E (e_τ (f (r_σ x))))
r_{σ→τ} = λf. λx. (r_τ (R (f (e_σ x))))
Lemma 3.3. λ_c+(r-e) ⊢ (r_σ (e_σ x)) = x.
Proof: Easy by induction on σ.
Note: The above lemma holds whenever we replace the constants E and R with any values E' and R' such that λ_c ⊢ (R' (E' x)) = x.
Finally we define terms R_σ = λx.(r_σ (R x)) and E_σ = λx.(E (e_σ x)). Later we will show that R_σ is an inverse of the transform T^Π. To define an inverse of the CPS transform we first define terms R^K = λx.(x (λy.y)) and E^K = λx.λk.(k x). It is easy to show (in λ_c) that (R^K (E^K x)) = x, and we define the retraction-embedding pair (R^K_σ, E^K_σ) using the terms R^K and E^K instead of the constants R and E in the above definitions. Similarly for the SPS transform we define terms E^S = λx.λs.⟨x, s⟩ and R^S = λx.(π1 (x init)), where init represents some initial state of the store, and define terms R^S_σ and E^S_σ using these terms instead.
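As a small executable illustration, take a CPS-style pair of the kind just described (one natural choice is E_K = λx.λk.(k x) and R_K = λm.(m (λv.v))) and lift it to a function type following the higher-type pattern of Definition 3.2; everything below is a sketch in Python, not the paper's terms.

E_K = lambda x: lambda k: k(x)          # inject a value as a computation
R_K = lambda m: m(lambda v: v)          # run a computation with the identity continuation
assert R_K(E_K(42)) == 42               # the axiom (r-e): (R (E x)) = x

def r_arrow(r_tau, e_sigma):            # retraction at a function type
    return lambda f: lambda x: r_tau(R_K(f(e_sigma(x))))

def e_arrow(e_tau, r_sigma):            # embedding at a function type
    return lambda f: lambda x: E_K(e_tau(f(r_sigma(x))))

ident = lambda x: x                     # e and r at the base type
inc = lambda n: n + 1
embedded = e_arrow(ident, ident)(inc)   # embed a direct-style function
retracted = r_arrow(ident, ident)(embedded)
assert retracted(41) == 42              # (r (e f)) behaves like f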
3.2 Interpretations of Terms and Types
The framework we develop to prove the Retraction Theorem is closely related
to Mitchell's type inference models [Mit88]. As in the definition of a type
inference model we define interpretations of terms and types, as well as
a relation between the two. However, the definition of a type inference
model assumes one works with the full fij-equality, and we need to relax the
definitions to accommodate reasoning in the weaker logic of - c .
We will first sketch the definitions for the general framework and then
fill in the details that apply to each particular transform we study.
Interpretation of terms: Assume (D; App) is an applicative structure,
that is, D is a set and App is a binary operation on D. Assume also that
there is a distinguished subset, V(D) ' D, of values. Given an environment,
ae, mapping variables to V(D) we define an interpretation (relative to ae) that
to each term M assigns an element JMKae of D. In addition we assume that
In particular we will define D to be the set of equivalence classes of terms
and the set of values to be the equivalence classes of terms that are values.
(Note: below, we may be informal and identify terms with their equivalence
classes.) The interpretation function will then be defined by the transform
in consideration.
Interpretation of types: The types will be interpreted as certain subsets
of D called type sets.
In particular we will chose type sets to contain only the equivalence
classes of terms of the form . The interpretation of
types will be defined (using the "-relation defined below) inductively:
set of all terms of the form
set of all M (of the form (E V )) such that 8N 2 JoeK:
(R - (R oe N))
Relating the interpretations: To relate meanings of terms and types,
instead of using the simple set-membership relation, we define an extended
2. This relation will in general depend on the
structure of the transform in consideration, and intuitively, will serve the
purpose of separating out "important" part of the transformed term.
Truth and validity: Having defined the above notions, we say that a
typing assertion M : oe is true (with respect to ae), written ae
JoeK. The notions of satisfaction and validity are defined in the
standard way relative to the definition of truth. Namely, ae satisfies a typing
hypothesis \Gamma, written ae every typing assertion in \Gamma is true with
respect to ae, and a typing sequent \Gamma . M : oe is valid,
is true for every ae that satisfies \Gamma.
Our aim is to prove a soundness lemma for the type inference system
that would imply the Retraction Theorem. Before we state the lemma, we
single out two conditions that are necessary for the lemma to hold. Namely
we require that for all M " Joe!-K and N " JoeK:
(R (R oe N)); (z)
where R oe , etc. are defined in Section 3.1 and equality is provable equality
in - c +(r-e). The main reason why these two conditions are singled out is
that, having proved the Retraction Theorem for the modified computational
transform, we modify the definition of the interpretations of terms and types,
as well as the "-relation, to reflect properties of the CPS transform, and
only the conditions (y) and (z) need to be proved again to show that the
Retraction Theorem holds for the CPS transform. Similarly for the SPS
transform.
3.3 Modified Computational Transform
In the previous section we outlined the definitions of our framework, and we
can now fill in the details. The following definitions are given, in particular,
to prove the Retraction Theorem for the modified computational transform,
but we will also indicate how these definitions need to be changed in the
subsequent sections to prove the Retraction Theorem for the CPS and the
SPS transforms.
We write "=" to denote an equality provable in - c +(r-e).
Definition 3.4. Let ae be a substitution mapping variables to values. Define
the interpretation function J \DeltaK to be
JMKae
Definition 3.5. Type sets are sets of terms of the form
a value.
Note: this definition is to be understood as parameterized by the term E ,
that is, when we consider, say, the CPS transform we will use the term E K
instead.
Definition 3.6. If M and N are terms, we define the application in the
codomain of the transform as follows:
It is easy to see that J(M
Definition 3.7. Let S be a type set and let M be a term. We write M " S
if there is a value V and terms P 1
and
Definition 3.8. The interpretation of types is defined inductively on the
structure of type expressions. Namely,
set of all terms of the form
set of all M , of the form (E V ), such that 8N 2 JoeK,
(R - (R oe N))
Recall that R
Note: We should understand this definition as parameterized by E , R, App,
and ". When we consider the CPS or the SPS transform, the appropriate
definitions will be used instead.
Equipped with the definitions we can prove the Soundness Lemma, but
first we need some auxiliary results.
Lemma 3.9. For each type oe and any value V ,
Proof: Easy by induction on oe.
Lemma 3.10. For any two terms M and N such that M " Joe!-K and
and
(R (R oe N)):
Proof: First observe that the statement of this lemma is stronger than what
is given in Definition 3.8 above. The definition only requires that the two
statements hold only for N 2 JoeK, and in the lemma we show that these two
statements hold for all N " JoeK. Both conditions can be easily proved using
the definitions and axioms of - c +(r-e).
The Soundness Lemma consists of two parts, (S.1) and (S.2). The first part
asserts the soundness of our interpretation with respect to typing rules of the
simply typed lambda calculus, and the second part is, in fact, the statement
of the Retraction Theorem.
Lemma 3.11 (Soundness). Let \Gamma be a typing hypothesis and let ae be a
substitution that satisfies \Gamma. Let ae 0 be a substitution such that for each
and
(R oe ae(M \Pi
Proof: We prove the lemma by induction on the derivation of \Gamma . M : oe.
The (var) case follows by assumptions, and the (app) case follows directly
by Lemma 3.10 and induction hypotheses.
The (abs) case is slightly more involved. Assume M j -x:N and oe j
, and that \Gamma . -x:N
was derived from \Gamma; x: - 1
using the
(abs) rule.
To show (S.1), first observe that ae((-x:N) \Pi so we need
to show that ae((-x:N) \Pi any term in J- 1 K. Then
and therefore since aefV=xg satisfies \Gamma; x: - 1 , by induction hypothesis (S.1),
K. Moreover we can compute
(R -2 App(ae((-x:N) \Pi (R
using the definition of the retraction-embedding pairs, and both induction
hypotheses (S.1) and (S.2). This establishes (S.1). To show (S.2) simply
compute
(R
using properties of retraction-embedding pairs, Lemma 3.9 and induction
hypothesis (S.2).
The Retraction Theorem follows directly from this lemma.
Theorem 3.12 (Retraction, for T^Π). For any closed term M of (simple) type σ,
λ_c+(r-e) ⊢ (R_σ M^Π) = M.
3.4 The CPS Transform
To prove the the Retraction Theorem for the CPS transform, as indicated be-
fore, we will use exactly the same framework, but will modify the definitions
using appropriate definitions of application in the codomain of the transform
and "-relation. First recall that we define the retraction-embedding
(R to be
It is easy to see that - c ' (R K (E K which is the only abstract
property of E and R we use.
We define the interpretation of terms (relative to a substitution ae) using
the CPS transform, that is,
JMKae
The application in the codomain of the transform is defined to be
so that (M Finally, we define the extended membership
relation, " K , to be
for some terms P i , a value V such that and a fresh variable k.
It should be understood that all the definitions used in the preceding
section are now defined using E K , R K , App K and " K , instead of E , R, App
and ".
With the new definitions, we prove the following lemma, analogous to
Lemma 3.10, asserting that conditions (y) and (z) hold.
Lemma 3.13. For any two terms M and N such that M " K
Joe!-K and
App K (M;
and
(R K
oe!- M) (R K
oe
Proof: The proof is straightforward using the definitions and axioms of - c .
Having shown the above lemma, the rest of the proof of the Soundness
Lemma for the CPS transform is exactly the same as in the case of the
modified computational transform, and as a corollary we obtain the retraction
result.
Theorem 3.14 (Retraction, for the CPS transform). For any closed term M of (simple) type σ,
λ_c ⊢ (R^K_σ M̄) = M, where M̄ denotes the CPS transform of M.
3.5 The SPS Transform
To adopt our framework to the SPS transform we define E S , R S , App S and
" S in place of E , R, App and ", and prove that the conditions (y) and (z)
still hold. Recall that the terms E S and R S are defined to be
where init is some initial state of the store.
The interpretation of terms is defined using the SPS transform. Namely,
JMKae
The application in the codomain of the transform is define to be
so that (M N) Finally we define the extended membership
for some terms P i , some value V such that
Furthermore, we interpret the definitions of type sets, interpretation of
types, and retraction-embedding pairs defined earlier, as if given using
in place of E , R, App and ". With the new definitions
we can show that conditions (y) and (z) are still satisfied. This yields the
Retraction Theorem for the SPS transform.
Theorem 3.15 (Retraction, for the SPS transform). For any closed
term M of (simple) type oe,
(R S
oe M
3.6 Extensions
Thus far we have only proved the retraction results for the pure simply
typed terms. In order to make these results more applicable we would like
to extend the theorems to a larger class of terms. We have essentially two
directions in which we can proceed. We can extend the class of terms by
adding constants or term constructors (including possibly new axioms that
define functional behavior of the new terms), and secondly, we can extend
the type system to one that can type a larger class of terms.
Extending the Retraction Theorem to an extension of - c with constants
of a base type and primitive operators such as numerals is quite straight-
forward. However, adding arbitrary constants of higher order types may be
more difficult. The difficulty lies in ensuring the closure conditions imposed
on type sets by the addition of such constants are satisfied. For example if a
constant c of type oe!- is added to - c , we need to make sure that if M " oe
then App(c \Pi ; M) " - . While such closure conditions are determined based
on the type of new constants, the proof they are satisfied will, in general,
depend on the functional behavior of the new constants.
Divergence: The difference between call-by-name and call-by-value evaluation
strategies becomes apparent only in presence of actual computational
effects. So far we have only considered pure simply typed terms. In this
setting every closed term is equivalent to a value in both logics of - c and - n .
Therefore, if we were to stop here, it would be unjustified to claim significant
improvement over the original Meyer-Wand Retraction Theorem.
The simplest computational effect we can add to the language is diver-
gence. In presence of divergence - n reasoning is no longer sound for call-
by-value languages, so for any applications in - c extended with divergence,
we really need the stronger version of the Retraction Theorem provable in
the weaker logic of - c . While extension of the Retraction Theorem to a
language with divergence, which we now present, is quite straightforward,
it is important since it illustrates the difference between Meyer and Wand's
and our formulation of the Retraction Theorem.
Divergence is represented by the divergent element, \Omega\Gamma that is added
to the language of - c as a constant, but it is not considered a value. The
axioms
for\Omega specify that an application diverges if either the operators or
the operand diverges. Moreover, these axioms identify all divergent terms.
The axioms
are:(\Omega
M)
=\Omega and (M \Omega\Gamma
One can verify that the resulting equational logic is consistent and that it
cannot
value V . The type system is extended with the
axiom
. \Omega\Gamma oe
which says
that\Omega has every type. The modified computational transform
is defined
on\Omega to be
has every type, to prove the Retraction Theorem for -
c+\Omega we need
to extend the Soundness Lemma for the case of typing
In other
words we need to show (S.1):
and (S.2): that (R
=\Omega
for every type oe. The second condition follows trivially from the definition
of\Omega \Pi and the axioms
To prove the first condition observe that
x=\Omega in (E (e oe x)));
and by Lemma 3.9, (E (e oe x)) 2 JoeK for every oe.
The very same reasoning can be applied to extend the Retraction Theorem
for the CPS transform to -
c+\Omega\Gamma where the CPS transform is defined
on\Omega to
-k:\Omega k:
Recursive types: It is well known that all terms can be typed using the
recursive type system. In order to extend the Retraction Theorem to all
closed terms we study the recursive types.
The recursive type discipline introduces types of the form -t:oe (where
we use t to denote a type variable). In order to extend our results to - c
extended with recursive types we need to define retraction-embedding pairs
oe ) at new types. In particular how does one define e -t:oe and r -t:oe , or
even e t and r t ?
To motivate a solution, consider the following example. Let -t:t!t.
Then in the recursive type discipline one can type .(-x:x x): - . Assume
we have defined terms e - and r - , and we try to compute
(R (-x:x x) \Pi (R
(R ((e - x) (e - x))))
To complete this derivation, one would like to have (e -
so we can continue
(R
What we see from this example is that the two occurrences of x in -x:x x
"act" as having types - and - , respectively. Similarly, we would like the
two occurrences of e - in -x:(r - (R ((e - x) (e - x)))) to "act" as e - and e - .
A solution to our problem is to find a uniform definition for e's and r's at
all types. Namely, we want a retraction-embedding pair (r; e) that satisfies
the following definition.
Definition 3.16. A term F is called a total function if F is a value and,
for any value V , provably equal to a value.
A pair of total functions (r; e) is a uniform retraction-embedding pair if
e and r satisfy system of equations
r
While it remains open whether there is a pair of terms satisfying the
above conditions, we will assume we are given such a pair of functions and,
under this assumption, show how the Retraction Theorem can be extended
to recursive types. Moreover, since the recursive type system can type all
terms, as a corollary we obtain the following theorem.
Theorem 3.17. Assume total functions e and r exist that satisfy equations
(?). Then, for any closed lambda term M ,
(R M \Pi
Of course, the analogous theorems hold for the other transforms as well.
Here we only sketch the main idea in the proof of the above theorem. A
detailed proof can be found in [Ku-c97].
The recursive type system extends the simple types by adding type variables
and type expressions of the form -t:oe. The new inference rules are
(- I)
E)
One can understand these rules by considering the type -t:oe as the type - ,
satisfying equation oef-=tg. Thus we need to define the interpretation,
J-K-, of - such that it satisfies the equation
In other words J-K- should be a fixed point of the function
-S:JoeK-fS=tg:
(We assume the interpretation will satisfy JoeK-fJ-K-=tg.) The
difficulty lies in showing that for any oe and -, the function -S:JoeK-fS=tg
always has a fixed point. To do so, we define a metric on the space of all
type sets so that the resulting metric space is complete. Then we show that
each function -S:JoeK-fS=tg is a contraction, and thus, by Banach's Fixed-point
Theorem, has a unique fixed point. Mac Queen et al. [MPS86] have
developed such a framework, of which our development can be viewed as
a special case. Namely, our domain consists only of finite elements (typ-
ing sequents) ordered under discrete order, thus greatly simplifying general
purpose structures used in [MPS86].
Concluding Remarks
In this work, we have established a relation between direct style and CPS
terms using definable retraction functions. The Retraction Theorem shows
that a term can be recovered, up to - c -equivalence, from its image under the
CPS transform. Therefore, the retraction approach, in fact, only provides a
relation between equivalence classes of terms. To contrast our results with
others that provide, perhaps even stronger relation between lambda terms
and their CPS forms (e.g. [SW96]), we should emphasize that the inverse
of the CPS transform we obtain is definable. Another important point is
that the conclusion of our version of the Retraction Theorem is an equation
provable in the logic of - c , which is a call-by-value logic, unlike the results
in [MW85] and [Fil94] which give similar equalities, but in a call-by-name
logic. As a consequence, our results are applicable even where call-by-name
reasoning is not sound.
Some open questions: In all practical applications, functional programming
languages are equipped with some form of recursion. Therefore, to
make the retraction approach applicable in practice, we need to extend our
results to a language with recursion. This can be done in two ways: By
extending the type system so that the fixed-point operator is definable in
the pure language, or by adding a language construct such as constant Y ,
letrec, etc. The first approach, with some partial results, has been discussed
One difficulty in adding fixed-point operator Y , or a similar language
construct, is that additional closure conditions are needed in the definition
of type sets, and we haven't been able to construct type sets satisfying these
conditions. The other difficulty is determining the correct axiomatization of
a fixed-point operator. It appears that the axiom
does not suffice. In models of - c , fixed-point operator can be defined using
the so called fixpoint object. Crole and Pitts [CP92] define such an object
in models of - c , and discuss a logical system for reasoning about fixpoint
computations, which may hold the answer to above questions.
Another class of extensions is motivated by the application of the Retraction
Theorem developed by Riecke and Viswanathan [RV95], where they
show how one can isolate effects of an extension of a language with assignment
or control from interfering with pure functional code. A natural
question arises, whether it is possible to extend this approach to isolate
one computational effect from interfering with code possibly containing a
different computational effect. For instance, if M is a program in, say call-
by-value PCF with assignment, can we define an operator, call it encap, so
that, in an extension of call-by-value PCF with both assignment and control,
(encap M) will behave the same as M behaves in the extension of call-by-
value PCF with assignment. We believe that an appropriate extension of the
Retraction Theorem to a programming language with imperative features
may give us such results.
--R
Compiling with Continuations.
The Lambda Calculus: Its Syntax and Se- mantics
New foundations for fixpoint computations: FIX-hyperdoctrines and the FIX-logic
Representing monads.
"Free Theorems"
type inference and containment.
Computational lambda-caluclus and monads
Computational lambda-caluclus and monads
Notions of computation and monads.
An ideal model for recursive polymorphic types.
Continuations may be unrea- sonable
Continuation semantics in typed lambda-calculi (summary)
The discoveries of continutions.
Isolating side effects in sequential languages.
Reasoning about programs in continuation-passing style
A compiler for Scheme.
Denotational Semantics: The Scott-Strachey Approach to Programming Language Theory
A mathematical semantics for handling full jumps.
A reflection on call-by-value
Comprehending monads.
--TR
--CTR
Andrzej Filinski, On the relations between monadic semantics, Theoretical Computer Science, v.375 n.1-3, p.41-75, May, 2007 | continuation passing style;continuations;CPS transform;monads;retractions |
609180 | A Syntactic Theory of Dynamic Binding. | Dynamic binding, which traditionally has always been associated with Lisp, is still semantically obscure to many. Even though most programming languages favour lexical scope, not only does dynamic binding remain an interesting and expressive programming technique in specialised circumstances, but also it is a key notion in formal semantics. This article presents a syntactic theory that enables the programmer to perform equational reasoning on programs using dynamic binding. The theory is proved to be sound and complete with respect to derivations allowed on programs in dynamic-environment passing style. From this theory, we derive a sequential evaluation function in a context-rewriting system. Then, we further refine the evaluation function in two popular implementation strategies: deep binding and shallow binding with value cells. Afterwards, following the saying that deep binding is suitable for parallel evaluation, we present the parallel evaluation function of a future-based functional language extended with constructs for dynamic binding. Finally, we exhibit the power and usefulness of dynamic binding in two different ways. First, we prove that dynamic binding adds expressiveness to a purely functional language. Second, we show that dynamic binding is an essential notion in semantics that can be used to define exceptions. | Introduction
Dynamic binding has traditionally been associated with Lisp dialects. It appeared in
McCarthy's Lisp 1.0 [24] as a bug and became a feature in all succeeding implemen-
tations, like for instance MacLisp 2 [28], Gnu Emacs Lisp [23]. Even modern dialects
of the language which favour lexical scoping provide some form of dynamic variables,
with special declarations in Common Lisp [43], or even simulate dynamic binding by
lexically-scoped variables as in MITScheme's fluid-let [18].
Lexical scope has now become the norm, not only in imperative languages, but also
in functional languages such as Scheme [39], Common Lisp [43], Standard ML [26], or
Haskell [21]. The scope of a name binding is the text where occurrences of this name refer
to the binding. Lexical scoping imposes that a variable in an expression refers to the
innermost lexically-enclosing construct declaring that variable. This rule implies that
nested declarations follow a block structure organisation. On the contrary, the scope of a
name is said to be indefinite [43] if references to it may occur anywhere in the program.
On the other hand, dynamic binding refers to a notion of dynamic extent. The
dynamic extent of an expression is the lifetime of this expression, starting and ending
when control enters and exits this expression. A dynamic binding is a binding which
exists and can only be used during the dynamic extent of an expression. A dynamic
variable refers to the latest active dynamic binding that exists for that variable [1]. The
expression dynamic scope is convenient to refer to the indefinite scope of a variable with
a dynamic extent [43].
Dynamic binding was initially defined by a meta-circular evaluator [24] and was
later formalised by a denotational semantics by Gordon [15, 16]. It is also part of the
[Footnote: This research was supported in part by EPSRC grant GR/K30773. Author's address: Department of Electronics and Computer Science, University of Southampton, Southampton SO17 1BJ, United Kingdom. E-mail: L.Moreau@ecs.soton.ac.uk.]
[Footnote 2: At least, the interpreted mode.]
folklore that there exists a translation, the dynamic-environment passing translation,
which translates programs using dynamic binding into programs using lexical binding
only [36, p. 180]. Like the continuation-passing transform [35], the dynamic-passing
translation adds an extra argument to each function, its dynamic environment, and
every reference to a dynamic variable is translated into a lookup in the current dynamic
environment.
The late eighties saw the apparition of "syntactic theories", a new semantic frame-work
which allows equational reasoning on programs using non-functional features like
first-class continuations and state [10, 11, 12, 44]. Those frameworks were later extended
to take into account parallel evaluation [9, 14, 29, 30]. The purpose of this paper
is to present a syntactic theory that allows the user to perform equational reasoning on
programs using dynamic binding. Our contribution is fivefold.
First, from the dynamic-environment passing translation, we construct an inverse
translation. Using Sabry and Felleisen's technique [40, 41], we derive a set of axioms
and define a calculus, which we prove to be sound and complete with respect to the
derivations accepted in dynamic-environment passing style (Section 3).
Second, we devise a sequential evaluation function, i.e. an algorithm, which we prove
to return a value whenever the calculus does so. The evaluation function, which relies
on a context-rewriting technique [11], is presented in Section 4.
Third, in order to strengthen our claim that dynamic binding is an expressive programming
technique and a useful notion in semantics, we give a formal proof of its
expressiveness and use it in the definition of exceptions. In Section 5, we define a relation
of observational equivalence using the evaluation function, and we prove that
dynamic binding adds expressiveness [8] to a purely functional programming language,
by establishing that dynamic binding cannot be macro-expressed in the call-by-value
lambda-calculus. In Section 6, we use dynamic binding as a semantic primitive to formalise
two different models of exceptions: non-resumable exceptions as in ML [26] and
resumable ones as in Common Lisp [43, 34].
Fourth, we refine our evaluation function in the strategy called deep binding , which
facilitates the creation and restoration of dynamic environments (Section 7).
Fifth, we extend our framework to parallel evaluation, based on the future construct
[14, 17, 30]. In Section 8, we define a parallel evaluation function which also relies on
the deep binding technique.
Before deriving our calculus, we further motivate our work by describing three broad
categories of use of dynamic binding: conciseness, control delimiters, and distributed
computing. Let us insist here and now that our purpose is not to denigrate the qualities
of lexical binding, which is the essence of abstraction by its block structure organisation,
but to present a syntactic theory that allows equational reasoning on dynamic binding,
to claim that dynamic binding is an expressive programming technique if used in a
sensible manner, and to show that dynamic binding can elegantly be used to define
semantics of other constructs. Let us note that dynamic binding is found not only in
Lisp but also in T E X [22], Perl [45], and Unix TM shells.
2 Practical Uses of Dynamic Binding
2.1 Conciseness
A typical use of dynamic binding is a printing routine print-number which requires the
basis in which the numbers should be displayed. One solution would be to pass an explicit
argument to each call to print-number. However, repeating such a programming
pattern across the whole program is the source of programming mistakes. In addition,
this solution is not scalable, because if later we require the print-number routine to
take an additional parameter indicating in which font numbers should be displayed, we
would have to modify the whole program.
Scheme I/O functions take an optional input/output port. The procedures with-
input-from-file and with-output-to-file [39] simulate dynamic binding for these
parameters.
Gnu Emacs [23] is an example of large program using dynamic variables. It contains
dynamic variables for the current buffer, the current window, the current cursor position,
which avoid to pass these parameters to all the functions that refer to them.
These examples illustrate Felleisen's conciseness conjecture [8], according to which
sensible use of expressive programming constructs can reduce programming patterns
in programs. In order to strengthen this observation, we prove that dynamic binding
actually adds expressiveness to a purely functional language in Section 5.
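The print-number example can be made concrete with a small helper that installs a binding for a dynamic extent; the sketch below is ours, not the paper's, and basis plays the role of the dynamically bound parameter.

import contextlib

_dynamic = {'basis': 10}                 # top-level dynamic bindings

@contextlib.contextmanager
def dlet(name, value):                   # bind for a dynamic extent
    saved = _dynamic.get(name)
    _dynamic[name] = value
    try:
        yield
    finally:
        _dynamic[name] = saved           # restored when control exits, even on error

def print_number(n):
    basis = _dynamic['basis']            # latest active binding, no extra argument
    digits = ''
    while True:
        digits = '0123456789abcdef'[n % basis] + digits
        n //= basis
        if n == 0:
            break
    print(digits)

print_number(255)                        # "255" in the default basis 10
with dlet('basis', 16):
    print_number(255)                    # "ff": every caller inside the extent sees 16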
2.2 Control Delimiters
Even though Standard ML [26] is a lexically-scoped language, raised exceptions are
caught by the latest active handler. Usually, programmers install exception handlers for
the duration of an expression, i.e. the handler is dynamically bound during the extent
of the expression. MacLisp [28] and Common Lisp [43] catch and throw, Eulisp let/cc
[34] are other examples of exception-like control operators with a dynamic extent. More
generally, control delimiters are used to create partial continuations, whose different
semantics tolerate various degrees of dynamicness [5, 20, 31, 38, 42].
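The same mechanism captures dynamically scoped handlers; the following sketch is ours (it is not the paper's Section 6 semantics) and keeps a stack of active handlers so that the latest active one receives a raised condition, in the resumable style of Common Lisp.

_handlers = []                           # stack of active handlers

def with_handler(handler, thunk):
    _handlers.append(handler)            # bound for the dynamic extent of thunk
    try:
        return thunk()
    finally:
        _handlers.pop()                  # unbound when control exits

def signal(condition):
    return _handlers[-1](condition)      # latest active handler wins

print(with_handler(lambda c: 'outer: ' + c,
      lambda: with_handler(lambda c: 'inner: ' + c,
              lambda: signal('oops'))))  # prints 'inner: oops'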
2.3 Parallelism and Distribution
Parallelism and distribution are usually considered as a possible mean of increasing the
speed of programs execution. However, another motivation for distribution, exacerbated
by the ubiquitous WWW, is the quest for new resources: a computation has to migrate
from a site s1 to another site s2, because s2 holds a resource that is not accessible
from s1. For our explanatory purpose, we consider a simple resource which is the name
of a computer. There are several solutions to model the name of the running host in a
language; the last one only is entirely satisfactory.
(i) A lexical variable hostname could be bound to the name of the computer whenever
a process is created. Unfortunately, this variable, which may be closed in a closure,
will always return the same value, even though it is evaluated on a different site.
(ii) A primitive (hostname), defined as a function of its arguments only (by δ in
[35]), cannot return different values in different contexts, unless it is defined as a non-deterministic
function, which would prevent equational reasoning.
(iii) A special form (hostname) could satisfy our goal, but it is in contradiction with
the minimalist philosophy of Scheme, which avoids adding unnecessary special forms.
Furthermore, as we would have to define such a special form for every resource, it would
be natural to abstract them into a unique special form, parameterised by the resource
name: this introduces a new name space, which is exactly what dynamic binding offers.
(iv) Our solution is to dynamically bind a variable hostname with the name of the
computer at process-creation time. Every occurrence of such a variable would refer to
the latest active binding for the variable.
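A sketch of solution (iv) in the same R7RS style (hostname, whereami and spawn-at are
hypothetical names; spawn-at merely stands in for rebinding the variable at process-creation
time):

(define hostname (make-parameter "site-1"))
(define (whereami) (string-append "running on " (hostname)))
(define (spawn-at site thunk)
  (parameterize ((hostname site)) (thunk)))
(whereami)                    ; => "running on site-1"
(spawn-at "site-2" whereami)  ; => "running on site-2"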
Besides, control of tasks in a parallel/distributed setting usually relies on a notion of
dynamic extent: sponsors [33, 37] allow the programmer to control hierarchies of tasks.
3 A Calculus of Dynamic Binding
Figure 1 displays the syntax of Λu, the language accessible to the end user. Let us observe
that the purpose of Λu is to capture the essence of dynamic variables and not to propose
a new syntax for them. The language Λu is based on two disjoint sets of variables: the
dynamic and static (or lexical) variables. As a consequence, the programmer can choose
between lexical abstractions λxs.M, which lexically bind their parameter when applied,
and dynamic abstractions λxd.M, which dynamically bind their parameter. The former
represent regular abstractions of the λ-calculus [3], while the latter model constructs
like Common Lisp abstractions with special variables [43], or dynamic-scope [6].
Fig. 1. The User Language Λu
It is of paramount importance to clearly state the naming conventions that we adopt
for such a language. Following Barendregt [3], we consider terms that are equal up to
the renaming of their bound static variables as equivalent. On the contrary, two terms
that differ by their dynamic variables are not considered as equivalent.
Fig. 2. Dynamic-Environment Passing Transform D
In Figure 2, the dynamic-environment passing translation, which we call D, is a
program transformation that maps programs of Λu into the target language deps(Λd), an
extended call-by-value λ-calculus based on lexical variables only (Figure 3). Intuitively,
each abstraction (static or dynamic) of Λu is translated by D into an abstraction taking
an extra dynamic environment in argument; the target language contains a variable e
which denotes an unknown environment. As a result, the application protocol in the
target language is changed accordingly: operator values are applied to pairs. In the
translation of the application, the dynamic environment E is used in the translations
of the operator and operand, and is also passed in argument to the operator. Dynamic
abstractions are translated into abstractions which extend the dynamic environment.
Each dynamic variable is translated into a lookup for the corresponding constant in the
current dynamic environment.
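The idea of the translation can be sketched by hand in plain Scheme, assuming nothing
more than association lists: every function receives the current dynamic environment as
an extra argument, dynamic abstractions extend it, and dynamic variables are looked up
in it (f, g, lookup and extend are illustrative names, not the translation D itself):

(define (lookup env name)
  (cond ((assq name env) => cdr)
        (else (error "unbound dynamic variable" name))))
(define (extend env name val)
  (cons (cons name val) env))
(define (f env v) (g (extend env 'x v)))   ; a "dynamic abstraction" on x
(define (g env) (* 2 (lookup env 'x)))     ; a body that reads the dynamic x
(f '() 21)                                 ; => 42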
The source language of D extends Λu with a dlet construct, (dlet ((xd1 V1) ... (xdn Vn)) M),
which stands for "dynamic let". Such a construct, inaccessible to the programmer, is
used internally by the system to model the bindings of dynamic variables xdi to the values
Vi. The syntax of the input language, called Λd, appears in Figure 5. Binding lists are
defined with the concatenation operator ×.
Evaluation in the target language is based on the set of axioms displayed in the second
part of Figure 3. Applications of binary abstractions require a double βv-reduction,
as modelled by rule (β×); environment lookup is implemented by (lk1) and (lk2).
Fig. 3. Syntax and Axioms of the deps(λd)-calculus
Fig. 4. The Inverse Dynamic-Environment Passing Transform D^-1
Following Sabry and Felleisen, our purpose in the rest of this Section is to derive
the set of axioms that can perform on terms of Λd the reductions allowed on terms of
deps(Λd). More precisely, we want to define a calculus on Λd that equationally corresponds
to the calculus on deps(λd). The following definition of equational correspondence
is taken verbatim from [40].
Definition 1 (Equational Correspondence) Let R and G be two languages with
calculi λXR and λXG. Also let f : R → G be a translation from R to G, and h : G → R
be a translation from G to R. Finally let M, N ∈ R and P, Q ∈ G. Then the calculus
λXR equationally corresponds to the calculus λXG if the following four conditions hold:
1. λXR ⊢ M = h(f(M));
2. λXG ⊢ P = f(h(P));
3. λXR ⊢ M = N if and only if λXG ⊢ f(M) = f(N);
4. λXG ⊢ P = Q if and only if λXR ⊢ h(P) = h(Q).
Figure 4 contains an inverse dynamic-environment passing transform mapping terms
of deps(Λd) into terms of Λd. The first case is worth explaining: a term (W1 <E, W2>)
represents the application of an operator value W1 on a pair of a dynamic environment E
and an operand value W2; its inverse translation is the application of the inverse translations
of W1 and W2, in the scope of a dlet with the inverse translation of E. For the following
cases, the inverse translation removes the environment argument added to abstractions,
and translates any occurrence of a dynamic environment into a dlet-expression.
Fig. 5. Syntax and Axioms of the λd-calculus
If we apply the dynamic-environment passing transform D to a term of Λd, and
immediately translate the result back to Λd by D^-1, we find the first six primary
axioms of Figure 5. For explanatory purposes, we prefer to present the derived axioms
(dlet intro') and (dlet propagate'). The axiom (dlet intro') is the counterpart of (βv)
for dynamic abstraction: applying a dynamic abstraction on a value V creates a dlet-
construct that dynamically binds the parameter to the argument V and that has the
same body as the abstraction. Rule (dlet propagate'), rewritten below using the syntactic
sugar let, tells us how to transform an application appearing inside the scope of a dlet.
(dlet δ (M N)) = (let ((x1 (dlet δ M)) (x2 (dlet δ N))) (dlet δ (x1 x2)))   (dlet propagate')
The operator and the operand can each separately be evaluated inside the scope of the
same dynamic environment, and the application of the operator value on the operand
value also appears inside the scope of the same dynamic environment. The interpretation
of (dlet merge), (dlet elim1), and (dlet elim2) is similar.
We can establish the following properties concerning the composition of D and D^-1.
Lemma 2 For any term M ∈ Λd, any value V, any list of bindings δ, and any
environment E ∈ deps(Λd), ...
Lemma 3 For any term P ∈ deps(Λd), any value W, and any dynamic
environments E1 and E2, ...
By applying the inverse translation D^-1 to each axiom of deps(λd), we obtain
the four last primary axioms of Figure 5. Rules (lookup1) and (lookup2) are
the immediate correspondents of (lk1) and (lk2) in deps(λd), while (βΩ') and (ηv) were
axioms discovered by Sabry and Felleisen in applying the same technique to calculi for
continuations and assignments [40].
The intuition of the set of axioms of λd can be explained as follows. In the absence of
dynamic abstractions, λd behaves as the call-by-value λ-calculus. Whenever a dynamic
abstraction is applied, a dlet construct is created. Rule (dlet propagate') propagates
the dlet to the leaves of the syntax tree, and replaces each occurrence of a dynamic
variable by its value in the dynamic environment by (lookup1) and (lookup2). Rule
(dlet propagate') also guarantees that the dynamic binding remains accessible during
the extent of the application of the dynamic abstraction, i.e. until it is deleted by one
of the (dlet elim) axioms. Let us also observe here and now that parallel evaluation is possible because
the dynamic environment is duplicated for the operator and the operand, and both can
be reduced independently. This property will be used in Section 8 to define a parallel
evaluation function. We obtain the following soundness and completeness results:
Lemma 4 (Soundness) For any terms M and N of Λd such that λd ⊢ M = N, and
for any E ∈ deps(Λd), we have that: deps(λd) ⊢ D[[M]] E = D[[N]] E.
Lemma 5 (Completeness) For any terms P and Q of deps(Λd) such that
deps(λd) ⊢ P = Q, we have that: λd ⊢ D^-1[[P]] = D^-1[[Q]].
The following Theorem is a consequence of Lemmas 2 to 5.
Theorem 1 The calculus λd equationally corresponds to the calculus deps(λd).
Within the calculus, we can define a partial evaluation relation: the value of a program
M is V if we can prove that M equals V in the calculus.
Definition 6 (evalc) For any program M ∈ Λ⁰d, evalc(M) = V if λd ⊢ M = V.
This definition does not give us an algorithm, but it states the specification that must
be satisfied by any evaluation procedure. The purpose of the next Section is to define
such a procedure.
4 Sequential Evaluation
The sequential evaluation function is defined in Figure 6. It relies on a notion of evaluation
context [11]: an evaluation context E is a term with a "hole", [ ], in place of the
next subterm to evaluate. We use the notation E[M] to denote the term obtained by
placing M inside the hole of the context E . Four transition rules only are necessary: (dlet
intro) and (dlet elim) are derived from the λd-calculus. Rule (lookup) is a replacement
for (dlet propagate), (dlet merge), (dlet lookup1), and (dlet lookup2) of the λd-calculus.
Fig. 6. Sequential Evaluation Function
Intuitively, the value of a dynamic variable is given by the latest active binding for this
variable. In this framework, the latest active binding corresponds to the innermost dlet
that binds this variable. The dynamic extent of a dlet construct is the period of time
between its apparition by (dlet intro) and its elimination by (dlet elim).
The evaluation algorithm introduces the concept of stuck term, which is defined by
the occurrence of a dynamic variable in an evaluation context that does not contain a
binding for it. The evaluation function is then defined as a total function returning a
value when evaluation terminates, ? when evaluation diverges, or error when a stuck
term is reached.
The correctness of the evaluation function is established by the following Theorem,
which relates eval c and eval d . Let us observe that eval c may return a value V 0 that
differs from the value V returned by eval d because the calculus can perform reductions
inside abstractions.
Theorem 2 For any program M ∈ Λ⁰d, evald(M) = V if and only if evalc(M) = V',
with λd ⊢ V = V'.
If we were to implement (lookup), we would start from the dynamic variable to
be evaluated, and search for the innermost enclosing dlet. If it contained a binding
for the variable, we would return the associated value. Otherwise, we would proceed
with the next enclosing dlet. This behaviour exactly corresponds to the search of a
value in an associative list (assoc in Scheme). Such a strategy is usually referred to
as deep binding . In Section 7, we further refine the sequential evaluation function by
making this associative list explicit. But, beforehand, we show that dynamic binding
adds expressiveness to a functional language.
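The innermost-binding rule can be pictured with an ordinary association list, assuming
the most recent binding sits at the front (a sketch, with dyn-lookup as an illustrative
name):

(define (dyn-lookup name dyn-env)
  (cond ((assq name dyn-env) => cdr)
        (else 'stuck)))                        ; plays the role of a stuck term
(dyn-lookup 'x '((x . 2) (y . 1) (x . 3)))     ; => 2, the innermost x
(dyn-lookup 'z '((x . 2)))                     ; => stuck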
5 Expressiveness
In Section 2.1, we stated that dynamic binding was an expressive programming technique
that, when used in a sensible manner, could reduce programming patterns in
programs. In this Section, we give a formal justification to this statement, by proving
that dynamic binding adds expressiveness [8] to a purely functional language. First, we
define the notion of observational equivalence.
Definition 7 (Observational Equivalence) Given a programming language L and
an evaluation function evalL, two terms M1 and M2 are observationally equivalent,
written M1 ≅ M2, if for any context C of L such that C[M1] and C[M2] are both
programs of L, evalL(C[M1]) is defined and equal to V if and only if evalL(C[M2]) is defined
and equal to V.
We shall denote the observational equivalences for the call-by-value λ-calculus and
for the λd-calculus by ≅v and ≅d, respectively. In order to prove that dynamic binding
adds expressiveness [8] to a purely functional language, let us consider the following
lambda terms, assuming the existence of a primitive cons to construct pairs.
(cons v (f (λd.v))))
The terms are observationally equivalent in the λv-calculus, i.e. M1 ≅v M2, but
we have that M1 is not observationally equivalent to M2 in λd.
This example shows that dynamic binding enables us to distinguish terms that the
call-by-value λ-calculus cannot distinguish. As a result, ≅v is not included in ≅d, and using Felleisen's
definition of expressiveness [8, Thm 3.14], we conclude that:
Proposition 1. λv cannot macro-express dynamic binding relative to λd.
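The original terms M1 and M2 did not survive extraction, but the phenomenon they
exhibit can be sketched with R7RS parameters: m1 re-reads a dynamic variable inside the
thunk it hands to its argument, m2 captures the value once, and a context that rebinds
the variable before forcing the thunk tells them apart, while a purely functional context
cannot (all names here are ours, not the paper's terms):

(define p (make-parameter 'outer))
(define (m1 f) (cons (p) (f (lambda (d) (p)))))
(define (m2 f) (let ((v (p))) (cons v (f (lambda (d) v)))))
(define (ctx m)                               ; a distinguishing context
  (m (lambda (th) (parameterize ((p 'inner)) (th 0)))))
(ctx m1)   ; => (outer . inner)
(ctx m2)   ; => (outer . outer)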
6 Semantics of Exceptions
First-class continuations and state can simulate exceptions [13]. We show here that
exceptions can be defined in terms of first-class continuations and dynamic binding.
In the semantics of ML [26], a raised exception returns an exceptional value, distinct
from a normal value, which has the effect of pruning its evaluation context until
a handler is able to deal with the exception. By merging the mechanism that aborts
the computation and the mechanism that fetches the handler for the exception, the
handler can no longer be executed in the dynamic environment in which the exception
was raised. As a result, such an approach cannot be used to give a semantics to other
kinds of exceptions, like resumable ones [43].
In order to model the abortive effect, we extend the sequential evaluation function
of Figure 6 with Felleisen and Friedman's abort operator A [11]. For the sake of
simplicity, we assume that there exists only one exception type (discrimination on the
kind of exception can be performed in the handler). We also assume the existence of
a distinguished dynamic variable x ed . In Figure 7, we give the semantics of ML-style
exceptions. When an exception is raised, the latest active handler is called, escapes,
and then applies f in the same dynamic environment as handle, and not in the dynamic
environment where the exception was raised 3 .
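The division of labour described here, an escape mechanism plus a dynamically bound
handler, can be sketched in R7RS Scheme with call/cc and a parameter object; handle,
raise-exn and current-handler are our names, and the handler is deliberately applied
after the escape, i.e. in the dynamic environment of handle:

(define current-handler
  (make-parameter (lambda (v) (error "unhandled exception" v))))
(define (raise-exn v) ((current-handler) v))
(define (handle f body)                       ; body is a thunk
  (let ((r (call-with-current-continuation
             (lambda (k)
               (parameterize
                   ((current-handler (lambda (v) (k (cons 'exn v)))))
                 (cons 'ok (body)))))))
    (if (eq? (car r) 'exn) (f (cdr r)) (cdr r))))
(handle (lambda (v) (list 'caught v))
        (lambda () (+ 1 (raise-exn 'oops))))   ; => (caught oops)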
On the other hand, there exist other kinds of exceptions, like resumable exceptions,
e.g. Common Lisp resumable errors [43], or Eulisp resumable conditions [34]. They
essentially offer the opportunity to resume the computation at the point where the
exception was raised. In the sequel, we present a variant of Queinnec's monitors [36,
3 The usage of a first-class continuation appears here as the rule for handle duplicates the
evaluation context E. Let us also observe that the continuation is only used in a downward
way, which amounts to popping frames from the stack only.
E[(handle f M)] ↦d E[((λxed.M) (λv.(A E[(f v)])))]
Fig. 7. ML-style exceptions
p. 255], which give the essence of resumable exceptions. The primitives monitor/signal
play the role that handler/raise had for ML-style exceptions. Let us note that signal is a
binary function, which takes not only a value, but also a boolean r indicating whether
the exception should be raised as resumable.
E[(monitor f M)] ↦d E[((λxed.M) (let ((old xed)) ...))]
E[(signal r V)] ↦d E[(xed r V)]
Fig. 8. Resumable exceptions
Like handle, monitor installs an exception handler for the duration of a computation.
If an exception is signalled, the latest active handler is called in the dynamic environment
of the signalled exception. If an exception is signalled by the handler itself, it will
be handled by the handler that existed before monitor was called: this is why x ed is
shadowed for the duration of the execution of the handler f , but will be again accessible
if the "normal" computation resumes. If the exception was signalled as resumable, i.e.
if the first argument of signal is true, the value returned by the handler is returned by
signal, and computation continues in exactly the same dynamic environment 4 .
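Resumability itself can be sketched in the same style (monitor, signal and current-monitor
are our names); the sketch only shows that the handler runs in the dynamic environment
of the signal and that its value is returned by signal, and it omits the shadowing of the
handler during its own execution as well as the abort for non-resumable signals:

(define current-monitor
  (make-parameter (lambda (resumable? v) (error "no monitor installed" v))))
(define (monitor f body)
  (parameterize ((current-monitor f)) (body)))
(define (signal resumable? v)
  ((current-monitor) resumable? v))
(monitor (lambda (resumable? v) (if resumable? 0 (error "fatal" v)))
         (lambda () (+ 1 (signal #t 'divide-by-zero))))   ; => 1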
This approach to defining the semantics of exceptions has at least two advantages.
First, as we model each effect by the appropriate primitive (abortion by A and handler
installation by dynamic binding), we have the ability to model different kinds of
semantics for exceptions. Second, defining the semantics of exceptions with assignments
weakens the theory [12] because assignments break some equivalences that would hold
in the presence of exceptions: so, our definition provides a more precise characterisation
of a theory of exceptions.
7 Refinement
We refine the evaluation function by representing the dynamic environment explicitly by
an associative list. By separating the evaluation context from the dynamic environment,
we facilitate the design of a parallel evaluation function of Section 8.
Figure
9 displays the state space and transition rules of the deep binding strategy.
The dynamic environment is represented in a new dlet construct which can only appear
at the outermost level of a configuration, called state. The list of bindings δ can be
regarded as a global stack, initially empty when evaluation starts. A binding is pushed
on the binding list, every time a dynamic abstraction is applied, and popped at the
end of the dynamic extent of the application. In Section 4, the dlet construct was also
modelling the dynamic extent of a dynamic-abstraction application; now that the dlet
construct no longer appears inside terms, we introduce a (pop M) term playing the
same role: it is created when a dynamic abstraction is applied and is destroyed at the
end of the dynamic extent, after popping the top binding of the binding list. Theorem
3 establishes the correctness of the deep binding strategy.
4 Such a semantics assumes that there exists an initial handler in which evaluation can proceed.
Fig. 9. Deep Binding
Theorem 3 evaldb = evald.
The deep binding technique is simple to implement: bindings are pushed on the
binding list δ at application time of dynamic abstractions and popped at the end of
their extent. However, the lookup operation is inefficient because it requires searching
the dynamic list, which is an operation linear in its length.
There exist some techniques to improve the lookup operation. The shallow binding
technique consists in indexing the dynamic environment by the variable names [1]. A
further optimisation, called shallow binding with value cell is to associate each dynamic
variable with a fixed location which contains the correct binding for that variable: the
lookup operation then simply requires reading the content of that location.
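A sketch of the contrast, assuming a single dynamic variable x: with a value cell, lookup
is one read, and a rebinding saves, installs and restores the cell around the extent
(with-x and x-cell are our names; a real implementation would also protect the restore
against non-local exits):

(define x-cell (list 10))                 ; the value cell for x
(define (x-value) (car x-cell))           ; lookup: a single read
(define (with-x new-val thunk)            ; rebind for the extent of (thunk)
  (let ((old (car x-cell)))
    (set-car! x-cell new-val)
    (let ((result (thunk)))
      (set-car! x-cell old)
      result)))
(with-x 42 x-value)   ; => 42
(x-value)             ; => 10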
8 Parallel Evaluation
In Section 3, we observed that the axiom (dlet propagate 0 ) was particularly suitable for
parallel evaluation because it allowed the independent evaluation of the operator and
operand by duplicating the dynamic environment. It is well-known that the deep binding
strategy is adapted to parallel evaluation because the associative list representing the
dynamic environment can be shared between different tasks.
As in our previous work [30], we follow the "parallelism by annotation" approach,
where the programmer uses an annotation future [17] to indicate which expressions may
be evaluated in parallel. The semantics of future has been described in the purely functional
framework [14] and in the presence of first-class continuations and assignments
[30]. In
Figure
10, we present the semantics of future in the presence of dynamic binding.
As in [14, 30], the set of terms is augmented with a future construct, and we add to
the set of values a placeholder variable, "which represents the result of a computation
that is in progress". In addition, a new construct (f-let (p M) S) has a double goal: first
as a let, it binds p to the value of M in S; second, it models the potential evaluation of
S in parallel with M . The component M is the mandatory term because it is the first
that would be evaluated if evaluation was sequential, while S is speculative because its
value is not known to be needed before M terminates.
Fig. 10. Parallel Evaluation (differences with Figure 9)
It is important to observe that (future [ ]) is not a valid evaluation context. Otherwise,
if evaluation was allowed to proceed inside the future body, it could possibly change the
dynamic environment, which would make (fork) unsound. Instead, rule (ltc), which
stands for lazy task creation [27, 7], replaces a (future M) expression by (fmark M),
which should be interpreted as a mark indicating that a task may be created.
If the runtime elects to create a new task, (fork) creates a f-let expression, whose
mandatory component is the argument of fmark, i.e. the future argument, and whose
speculative component is a new state evaluating the context of fmark filled with the
placeholder variable, in the scope of the duplicated dynamic environment δ1. If the run-time
does not elect to spawn a new task, evaluation can proceed in the fmark argument.
Rules (ltc) and (future id) specify the sequential behaviour of future: the value of
future is the value of fmark, which is the value of its argument.
When the evaluation of the mandatory component terminates, rule (join) substitutes
the value of the placeholder in the speculative state. Rule (speculative) indicates that
speculative transitions are allowed in the f-let body.
Following [14], Figure 10 defines a relation S1 ↦n,m S2, meaning that n steps are
involved in the reduction from S1 to S2, among which m are mandatory.
The correctness of the evaluation function follows from a modified diamond property
and by the observation that the number of pop terms in a state is always smaller than
or equal to the length of the dynamic environment.
Theorem 4 eval
As far as implementation is concerned, rule (ltc) seems to indicate that the dynamic
environment should be duplicated. A further refinement of the system indicates that it
suffices to duplicate a pointer to the associative list, as long as the list remains accessible
in a shared store.
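The cheapness of this duplication is easy to see when the dynamic environment is an
association list that is only ever extended at the front (a sketch; the names are ours):

(define parent-env (list (cons 'x 1) (cons 'y 2)))
(define child-env parent-env)                      ; O(1) "duplication"
(define child-env* (cons (cons 'x 99) child-env))  ; the child rebinds x
(cdr (assq 'x child-env*))   ; => 99, the child sees its own binding
(cdr (assq 'x parent-env))   ; => 1, the parent is unaffected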
Rule (ltc) adds an overhead to every use of future, by duplicating the dynamic environment
even if dynamic variables are not used. Feeley [7] describes an implementation
that avoids this cost by lazily recreating a dynamic environment when a task is stolen.
Due to the orthogonality between assignments and dynamic binding, our previous
results [30] with assignments can be merged within this framework. Adding assignments
permits the definition of mutable dynamic variables (with a construct like dynamic-set!
[34]). Due to the purely dynamic nature of the semantics, the presence of mutable
dynamic variables offers less parallelism as observed in [30]. The interaction of dynamic
binding and continuations is however beyond the scope of this paper [19].
9 Related Work
In the conference on the History of Programming Languages, McCarthy [25] relates
that they observed the behaviour of dynamic binding on a program with higher-order
functions. The bug was fixed by introducing the funarg device and the function
construct [32].
Cartwright [4] presents an equational theory of dynamic binding, but his language
is extended with explicit substitutions and assumes a call-by-name parameter passing
technique. The motivation of his work fundamentally differs from ours: his goal is to
derive a homomorphic model of functional languages by considering λ as a combinator.
His axioms are derived from the λσ-calculus axioms, while ours are constructed during
the proof of equational correspondence of the calculus.
The authors of [6] discuss the issue of tail-recursion in the presence of dynamic
binding. They observe that simple implementations of fluid-let [18] are not tail-recursive
because they restore the previous dynamic environment after evaluating the
fluid-let body. Therefore, they propose an implementation strategy, which in essence
is a dynamic-environment passing style solution. Programs in dynamic-environment
passing style are characterised by the fact that they do not require a growth of the
control state for dynamic binding; however, they require a growth of the heap space.
An analogy is the continuation-passing translation, which generates a program where all
function calls are in terminal position although it does not mean that all cps-programs
are iterative. Feeley [7] and Queinnec [36] observe that programs in dynamic-environ-
ment passing style reserve a special register for the current dynamic environment. Since
every non-terminal call saves and then restores this register, such a strategy penalises
programs that do not use dynamic binding, especially in byte-code interpreters where
the marginal cost of an extra register is very high. Both of them prefer a solution that
does not penalise all programs, at the price of a growth of the control state for every
dynamic binding. Consequently, we believe that implementors have to decide whether
dynamic binding should or not increase the control state; in any case, it will result in a
non-iterative behaviour.
10 Conclusion
In the tradition of the syntactic theories for continuations and assignments, we present
a syntactic theory of dynamic binding. This theory helps us in deriving a sequential
evaluation function and a refined implementation like deep binding. We also integrate
dynamic-binding constructs into our framework for parallel evaluation of future-based
programs.
Besides, we prove that dynamic binding adds expressiveness to a purely functional
language and we show that dynamic binding is a suitable tool to define the semantics
of exception-like notions. Furthermore, we believe that a single framework integrating
continuations, side-effects, and dynamic binding would help us in proving implementation
strategies of fluid-let in the presence of continuations [19].
Acknowledgement
Many thanks to Daniel Ribbens, Christian Queinnec, and the anonymous referees for
their helpful comments.
--R
Anatomy of Lisp.
Shallow binding in lisp 1.5.
The Lambda Calculus: Its Syntax and Semantics
Lambda: the Ultimate Combinator.
Abstracting Control.
Dynamic Identifiers Can Be Neat.
An Efficient and General Implementation of Futures on Large Scale Shared-Memory Multiprocessors
On the Expressive Power of Programming Languages.
A Reduction Semantics for Imperative Higher-Order Languages
A Syntactic Theory of Sequential State.
A Syntactic Theory of Sequential Control.
The Revised Report on the Syntactic Theories of Sequential Control and State.
Controlling Effects.
The Semantics of Future and Its Use in Program Optimization.
Operational Reasoning and Denotational Semantics.
Towards a Semantic Theory of Dynamic Binding.
MIT Scheme Reference Manual.
Embedding Continuations in Procedural Objects.
Continuations and Concurrency.
Report on the Programming Language Haskell.
The
GNU Emacs Lisp Reference Manual
Recursive Functions of Symbolic Expressions and Their Computation by Machine
History of Lisp.
The Definition of Standard ML.
Lazy Task Creation
Maclisp reference manual.
Sound Evaluation of Parallel Functional Programs with First-Class Contin- uations
The Semantics of Scheme with Future.
Partial Continuations as the Difference of Continu- ations
The function of function in lisp or why the funarg problem should be called the environment problem.
Speculative Computation in Multilisp.
The Eulisp Definition
Lisp in Small Pieces.
Design of a Concurrent and Distributed Lan- guage
A Dynamic Extent Control Operator for Partial Continuations.
Revised 4 Report on the Algorithmic Language Scheme.
The Formal Relationship between Direct and Continuation-Passing Style Optimizing Compilers: a Synthesis of Two Paradigms
Reasoning about Programs in Continuation-Passing Style
Control Delimiters and Their Hierarchies.
The Language.
Rum : an Intensional Theory of Function and Control Abstractions.
Programming Perl.
--TR
--CTR
Christian Queinnec, The influence of browsers on evaluators or, continuations to program web servers, ACM SIGPLAN Notices, v.35 n.9, p.23-33, Sept. 2000
Matthias Neubauer , Michael Sperber, Down with Emacs Lisp: dynamic scope analysis, ACM SIGPLAN Notices, v.36 n.10, October 2001
Gavin Bierman , Michael Hicks , Peter Sewell , Gareth Stoyle , Keith Wansbrough, Dynamic rebinding for marshalling and update, with destruct-time ?, ACM SIGPLAN Notices, v.38 n.9, p.99-110, September
Zena M. Ariola , Hugo Herbelin , Amr Sabry, A type-theoretic foundation of continuations and prompts, ACM SIGPLAN Notices, v.39 n.9, September 2004
Oleg Kiselyov , Chung-chieh Shan , Amr Sabry, Delimited dynamic binding, ACM SIGPLAN Notices, v.41 n.9, September 2006
Magorzata Biernacka , Olivier Danvy, A syntactic correspondence between context-sensitive calculi and abstract machines, Theoretical Computer Science, v.375 n.1-3, p.76-108, May, 2007 | dynamic binding and extent;parallelism;functional programming;syntactic theories |
609199 | Continuation-Based Multiprocessing. | Any multiprocessing facility must include three features: elementary exclusion, data protection, and process saving. While elementary exclusion must rest on some hardware facility (e.g., a test-and-set instruction), the other two requirements are fulfilled by features already present in applicative languages. Data protection may be obtained through the use of procedures (closures or funargs), and process saving may be obtained through the use of the catch operator. The use of catch, in particular, allows an elegant treatment of process saving.We demonstrate these techniques by writing the kernel and some modules for a multiprocessing system. The kernel is very small. Many functions which one would normally expect to find inside the kernel are completely decentralized. We consider the implementation of other schedulers, interrupts, and the implications of these ideas for language design. | Introduction
In the past few years, researchers have made progress in understanding the mechanisms
needed for a well-structured multi-processing facility. There seems to be
universal agreement that the following three features are needed:
1. Elementary exclusion
2. Process saving
3. Data protection
By elementary exclusion, we mean some device to prevent processors from interfering
with each other's access to shared resources. Typically, such an elementary
exclusion may be programmed using a test and set instruction to create a critical
region. Such critical regions, however, are not by themselves adequate to describe
the kinds of sharing which one wants for controlling more complex resources such
as disks or regions in highly structured data bases. In these cases, one uses an
elementary exclusion to control access to a resource manager (e.g., a monitor [11]
or serializer [1]), which in turn regulates access to the resource.
Unfortunately, access to the manager may then become a system bottleneck. The
standard way to alleviate this is to have the manager save the state of processes
which it wishes to delay. The manager then acts by taking a request, considering
the state of the resource, and either allowing the requesting program to continue
* Research reported herein was supported in part by the National Science Foundation under
grant numbers MCS75-06678A01 and MCS79-04183. This paper originally appeared in J. Allen,
editor, Conference Record of the 1980 LISP Conference, pages 19-28, Palo Alto, CA, 1980. The
Company, Republished by ACM.
* Current address: College of Computer Science, Northeastern University, 360 Huntington Av-
enue, 161CN Boston, MA 02115, USA.
or delaying it on some queue. In this picture, the manager itself does very little
computing, and so becomes less of a bottleneck.
To implement this kind of manager, one needs some kind of mechanism for saving
the state of the process making a request.
The basic observation of this paper is that such a mechanism already exists in
the literature of applicative languages: the catch operator [14], [15], [21], [25]. This
operator allows us to write code for process-saving procedures with little or no fuss.
This leaves the third problem: protecting private data. It would do no good to
monitors if a user could bypass the manager and blithely get to the
resource. The standard solution is to introduce a class mechanism to implement
protected data. In an applicative language, data may be protected by making it
local to a procedure (closure). This idea was exploited in [19], but has been unjustly
neglected. We revive it and show how it gives an elegant solution to this problem.
We will demonstrate our solution by writing the kernel and some modules for a
multiprocessing system. The kernel is very small. Many functions which one would
normally expect to find inside the kernel, such as semaphore management [2], may
be completely decentralized because of the use of catch. Our system thus answers
one of the questions of [3] by providing a way to drastically decrease the size of the
kernel. We have implemented the system presented here (in slightly different form)
using the Indiana SCHEME 3.1 system [26].
The remainder of this paper proceeds as follows: In Section 2, we discuss our
assumptions about the system under which our code will run. Sections 3 and
4 show how we implement classes and process-saving, respectively. In Section 5
we bring these ideas together to write the kernel of a multiprocessing system. In
Sections 6 and 7 we utilize this kernel to write some scheduling modules for our
system. In Section 8, we show how to treat interrupts. Last, in Section 9, we consider
the implications of this work for applicative languages.
2. The Model of Computation
Our fundamental model is that of a multiprocessor, multiprocess system using
shared memory. That is, we have many segments of code, called processes, which
reside in a single shared random-access memory. The extent to which processes
actually share memory is to be controlled by software. We have several active units
called processors which can execute processes. Several processors may be executing
the same process simultaneously. We make the usual assumption that memory
access marks the finest grain of interleaving; that is, two processors may not access
(read or write) the same word in memory at the same time. This elementary memory
exclusion is enforced by the memory hardware.
At the interface between the processes and the processors is a distinguished process
called the kernel. The kernel's job is to assign processes which are ready to
run to processors which are idle. In a conventional system, e.g., [22], this entails
keeping track of many things. We shall see that the kernel need only keep track of
ready processes.
It may be worthwhile to discuss the author's SCHEME 3.1 system, which provided
the context for this work. SCHEME is an applicative-order, lexically-scoped,
full-funarg dialect of LISP [25]. The SCHEME 3.1 system at Indiana University
translates input Scheme code into the code for a suitable multistack machine. The
machine is implemented in LISP. Thus, we were under the constraint that we could
no LISP code, since such an addition would constitute a modification to the
machine. The system simulates a multiprocessor system by means of interrupts,
using a protocol to be discussed in Section 8. However, all primitive operations,
including the application of LISP functions, are uninterruptible. This allows us to
write an uninterruptible test-and-set operation, such as
(de test-and-set-car (x)
(prog2 nil (car x) (rplaca x nil)))
which returns the car of its argument and sets the car to nil.
Two other features of SCHEME are worth mentioning. First, SCHEME uses
call-by-value to pass parameters. That means that after an actual parameter is
evaluated, a new cons-cell is allocated, and in this new cell a pointer to the evaluated
actual parameter is planted. (In the usual association list implementation, the
pointer is in the cdr field; in the rib-cage implementation [24], it is in the car.) This
pointer may be changed by the use of the asetq procedure. Thus, if we write
(define scheme-demo-1 (x)
  (block (asetq x 3)
         ((lambda (x) (asetq x 4)) x)
         x))
any call to SCHEME-DEMO-1 always returns 3, since the second asetq changes a different
cell from the first one. (This feature of applicative languages has always been rather
obscure. See [17] Sec. 1.8.5 for an illuminating discussion.) The second property
on which we depend is that the "stack" is actually allocated from the LISP heap
using cons and is reclaimed using the garbage collector. This allows us to be quite
free in our coding techniques. We shall have more to say about this assumption in
our conclusions.
3. Implementing classes
For us, the primary purpose of the class construct is to provide a locus for the
retention of private information. In Simula [6], a class instance is an activation
record which can survive its caller. In an applicative language, such a record may
be constructed in the environment (association list) of a closure (funarg). This idea
is stated clearly in [23]; we discuss it briefly here for completeness.
For example, a simple cons-cell may be modelled by:
(define cons-cell (x y)
  (lambda (msg)
    (cond
      ((eq msg 'car) x)
      ((eq msg 'cdr) y)
      ((eq msg 'rplaca)
       (lambda (val) (asetq x val)))
      ((eq msg 'rplacd)
       (lambda (val) (asetq y val))))))
(define car (x) (x 'car))
(define cdr (x) (x 'cdr))
(define rplaca (x v) ((x 'rplaca) v))
(define rplacd (x v) ((x 'rplacd) v))
Here a cons cell is a function which expects a single argument; depending on the
type of argument received, the cell returns or changes either of its components.
(We have arbitrarily chosen one of the several ways to do this). Such behaviorally
defined data structures are discussed in [10], [20], [23]; at least one similar object
was known to Church [5], cited in [24].
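The same idea reads naturally in present-day Scheme as well (a sketch, not the paper's
dialect; make-cell is our name and set! replaces asetq):

(define (make-cell x y)
  (lambda (msg)
    (case msg
      ((car) x)
      ((cdr) y)
      ((set-car!) (lambda (v) (set! x v)))
      ((set-cdr!) (lambda (v) (set! y v))))))
(define c (make-cell 1 2))
((c 'set-car!) 10)
(c 'car)    ; => 10
(c 'cdr)    ; => 2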
Another example, important for our purposes, is
(define busy-wait ()
  (let ((x (cons t nil)))
    (labels
      ((self (lambda (msg)
               (cond
                 ((eq msg 'P)
                  (if (test-and-set-car x)
                      t
                      (self 'P)))
                 ((eq msg 'V)
                  (car (rplaca x t)))))))
      self)))
busy-wait is a function of no arguments, which, when called, creates a new locus of
busy-waiting. It does this by creating a function with a new private variable x. (This
x is guaranteed new because of the use of call-by-value). This returned function
(here denoted self) expects a single argument, either P or V. Calling it with P sends
it into a test-and-set loop, and calling it with V resets the car of x to t, thus releasing
the semaphore. There is no way to access the variable x except through calls on
this function. Note that we are not advocating busy-waiting (except perhaps in
certain very special circumstances). Any use of busy-wait in the rest of this paper
may be safely replaced with any hardware-supported elementary exclusion device
which the reader may prefer. Our concern is how to build complex schedulers from
these elementary exclusions. In particular, we shall consider better ways to build a
semaphore in Section 6.
4. Process saving with catch
catch is an old addition to applicative languages. The oldest version known to
the author is Landin's, who called it either ``pp'' (for ``program point'') [15] or
"J-lambda" [14]. 1 Reynolds [21] called it "escape." A somewhat restricted form of
catch exists in LISP 1.5, as errset [16]; another version is found in MACLISP, as
the pair catch and throw. The form we have adopted is Steele and Sussman's [25],
which is similar to Reynolds'.
In SCHEME, catch is a binding operator. Evaluation of the expression (catch id
causes the identifier id to be bound (using call-by-value) to a "continuation
object" which will be described shortly. The expression expr is then evaluated in
this extended environment.
The continuation object is a function of one argument which, when invoked,
returns control to the caller of the catch expression. Control then proceeds as if
the catch expression had returned with the supplied argument as its value. This
corresponds to the notion of an "expression continuation" in denotational semantics.
To understand the use of catch, we may consider some examples.
(catch m (cons (m 3) nil))
returns 3; when the (m 3) is evaluated, it is as if the entire catch expression returned
3. The form in which we usually will use catch is similar. In
(define foo (x) (catch m (... (m -junk-) ...)))
evaluation of (m -junk-) causes the function foo to return to its caller with the
value of -junk-.
The power of catch arises when we store the value of m and invoke it from some
other point in the program. In that case, the caller of foo is restarted with m's argu-
ment. The portion of the program which called m is lost, unless it has been preserved
with a strategically placed catch. A small instance of this phenomenon happened
even in our first example-there, m's caller was the cons which was abandoned.
Calling a continuation function is thus much like jumping into hyperspace-one
loses track entirely of one's current context, only to re-emerge in the context that
set the continuation.
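Readers who know modern Scheme may find it helpful to read catch as
call-with-current-continuation; for instance, the first example above corresponds to the
following, which evaluates to 3:

(call-with-current-continuation
  (lambda (m) (cons (m 3) '())))   ; => 3; the pending cons is abandoned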
There will actually be very few occurrences of catch in the code we write. For
the remainder of this section and the next, we shall consider what things we can
do with continuations which have already been created by catch. When we get to
Section 6, we shall start to use catch in our code.
We shall use continuations to represent processes. A process is a self-contained
computation. We may represent a process as a pair consisting of a continuation
and an argument to be sent to that continuation. This corresponds to the notion
of "command continuation" in denotational semantics.
(define cons-process (cont arg)
  (lambda (msg)
    (cond
      ((eq msg 'run-it) (cont arg)))))
Here we have defined a process as a class instance with two components, a continuation
and an argument, and a single operation, run-it, which causes the continuation
to be applied to the argument, thus starting the process. Because cont is a contin-
uation, applying it causes control to revert to the place to which it refers, and the
caller of (x 'run-it) is lost. This is not so terrible, since the caller of (x 'run-it)
may have been saved as a continuation someplace else.
5. The Kernel
We now have enough machinery to write the kernel of our operating system. The
kernel's job is to keep track of those processes which are ready to run, and to
assign a process to any processor which asks for one. The kernel is therefore a
class instance which keeps a queue of processes and has two operations: one to add
a process to the ready queue and one to assign a process to a processor (thereby
deleting it from the ready queue).
We shall need to do some queue manipulation. We therefore assume that we have
a function (create-queue) which creates an empty queue, a function (addq q x)
which has the side effect of adding the value of x to the queue q, and the function
(deleteq q), which returns the top element of the queue q, with the side-effect of
deleting it from q.
We may now write the code for the kernel:
(define gen-kernel ()
  (let ((ready-queue (create-queue))
        (mutex (busy-wait)))
    (lambda (msg)
      (cond
        ((eq msg 'make-ready)
         (lambda (cont arg)
           (block
             (mutex 'P)
             (addq ready-queue (cons-process cont arg))
             (mutex 'V))))
        ((eq msg 'dispatch)
         (block
           (mutex 'P)
           (let ((next-process (deleteq ready-queue)))
             (block
               (mutex 'V)
               (next-process 'run-it)))))))))
(asetq kernel (gen-kernel))
(define make-ready (cont arg)
  ((kernel 'make-ready) cont arg))
(define dispatch () (kernel 'dispatch))
We have now defined the two basic functions, make-ready and dispatch. The
call (make-ready cont arg) puts a process, built from cont and arg, on the ready
queue. To do this, it must get past a short busy-wait. (This busy-wait is always
short because the kernel is never tied up for very long. This construction is also in
keeping with the idea of building complex exclusion mechanisms from very simple
ones.) It then puts the process on the queue, releases the kernel's exclusion, and
exits. (Given the code for busy-wait above, it always returns t. The value returned
must not be a pointer to any private data).
dispatch is subtler. A processor will execute (dispatch) whenever it decides it has
nothing better to do. Normally, a call to dispatch would be preceded by a call to
make-ready, but this need not be the case. After passing through the semaphore, the
next waiting process is deleted from the ready queue and assigned to next-process.
A (mutex 'V) is executed, and the next-process is started by sending it a run-it
signal.
The subtlety is in the order of these last two operations. They cannot be reversed,
since once next-process is started, there would be no way to reset the semaphore.
The given order is safe however, because of the use of call-by-value. Every call
on (dispatch) uses a different memory word for next-process. Therefore, the call
(next-process 'run-it) uses no shared data and may be executed outside the critical
region.
A few explanatory words on the code itself are in order. First, note that (kernel
'make-ready) returns a function which takes two arguments and performs the required
actions. (kernel 'dispatch), however, performs its actions directly. We
could have made (kernel 'dispatch) return a function of no arguments, but we
judged that to be more confusing than the asymmetry. Second, block is SCHEME's
sequencing construct, analogous to progn. (Also, cond uses the so-called "generalized
cond," with an implicit block (or progn) on the right-hand-side of each alternative.)
6. Two Better Semaphores
Our function busy-wait would be an adequate implementation of a binary semaphore
if one was sure that the semaphore was never closed for very long. In this section,
we shall write code for two better implementations of semaphores.
For our first implementation, we use the kernel to provide an alternative to the
test-and-set loop. If the test-and-set fails, we throw the remainder of the current
process on the ready queue, and execute a DISPATCH. This is sometimes called a
"spin lock."
(define spin-lock-semaphore ()
  (let ((x (cons t nil)))
    (labels
      ((self (lambda (msg)
               (cond
                 ((eq msg 'P)
                  (cond
                    ((test-and-set-car x) t)
                    (t (give-up-and-try-later)
                       (self 'P))))
                 ((eq msg 'V)
                  (car (rplaca x t)))))))
      self)))
(define give-up-and-try-later ()
  (catch caller
    (block
      (make-ready caller t)
      (dispatch))))
Here, the key function is give-up-and-try-later. It puts on the ready-queue a
process consisting of its caller and the argument t. It then calls dispatch, which
switches the processor executing it to some ready process. When the enqueued
process is restarted (by some processor executing a dispatch), it will appear that
give-up-and-try-later has quietly returned t. The effect is to execute a delay of
unknown duration, depending on the state of the ready queue. Thus a process
executing a P on this semaphore will knock on the test-and-set cell once; if it is
closed, the process will go to sleep for a while and try again later.
While this example illustrates the use of catch and make-ready, it is probably not
a very good implementation of a semaphore. A better implementation (closer to
the standard one) would maintain a queue of processes waiting on each semaphore.
A process which needs to be delayed when it tries a P will be stored on this queue.
When a V is executed, a waiting process may be restarted, or, more precisely, placed
on the ready queue. We code this as follows:
(define semaphore ()
  (let ((q (create-queue))   ; a queue for waiting processes
        (count 1)            ; the traditional "value" of the semaphore (initialised to 1 here)
        (mutex (busy-wait))) ; protects the scheduling code
    (lambda (msg)
      (cond
        ((eq msg 'P)
         (mutex 'P)
         (catch caller
           (block
             (cond
               ((greaterp count 0)
                (asetq count (sub1 count))
                (mutex 'V))
               (t
                (addq q caller)
                (mutex 'V)
                (dispatch)))
             t)))
        ((eq msg 'V)
         (mutex 'P)
         (if (emptyq q)
             (asetq count (add1 count))
             (make-ready (deleteq q) t))
         (mutex 'V))))))
Executing (semaphore) creates a class instance with a queue q, used to hold processes
waiting on this semaphore, an integer count, which is the traditional "value"
of the semaphore, and a busy-wait locus mutex. mutex is used to control access
to the scheduling code, and is always reopened after a process passes through the
semaphore. As was suggested in the introduction, this use of a small busy-wait to
control entrance to a more sophisticated scheduler is typical.
When a P is executed, the calling process first must get past mutex into the critical
region. In the critical region, the count is checked. If it is greater than 0, it is
decremented, the mutex exclusion is released, and the semaphore returns a value of
T to its caller. If the count is zero, the continuation corresponding to the caller of
the semaphore is stored on the queue. mutex is released, and the processor executes
a (dispatch) to find some other process to work on.
When a V is executed, the calling process first gets past mutex into the semaphore's
critical region. The queue is checked to see if there are any processes waiting on this
semaphore. If there are none, the count is incremented. If there is at least one, it is
deleted from the queue by (deleteq q), and put on the kernel's ready queue with
argument t. When it is restarted by the kernel, it will think it has just completed
its call on P. (Since a P always returns t, the second argument to make-ready must
likewise be a t). After this bookkeeping is accomplished, mutex is released and the
call on V returns t.
All of this is just what a typical implementation of semaphores (e.g., [2]) does.
The difference is that our semaphore is an independent object which lies outside
the kernel. It is in no way privileged code.
We have also written code to implement more complex schedulers. The most
complex scheduler for which we have actually written code is for Brinch Hansen's
"process" [4]. We have written this as a SCHEME syntactic macro. The code is
only about a page long.
7. Doing more than one thing at once
We now turn to the important issue of process creation. Although the semaphores
in the previous section used catch to save the state of the current process, they
did not provide any means to increase the number of processes in the system. We
may do this with the function create-process. create-process takes one argument,
which is a function of no arguments, and creates a process which will execute this
function in "parallel" with the caller of create-process.
(define create-process (fn)
  (catch caller
    (block
      (catch process
        (block
          (make-ready process t)
          (caller t)))
      (fn)
      (dispatch))))
When create-process is called with fn, it first creates a continuation containing
its caller and calls it caller. It enters the block, and creates a continuation called
process, which, when started, will continue execution of the block with (fn). This
continuation process is then put on the kernel's ready queue (with argument t,
which will be ignored when process is restarted). Then (caller t) is executed,
which causes create-process to return to its caller with value t.
Thus, the process which called create-process continues in control of its pro-
cessor, but process is put onto the ready queue. When the kernel decides to run
process, (fn) will be executed. The processor which runs process will then do a
(dispatch) to find something else to do.
(The reader who finds this code tricky may take some comfort in our opinion that
this is the trickiest piece of code in this paper. The difficulty lies in the fact that
its execution sequence is almost exactly reversed from its lexical sequence [8].)
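A minimal usage sketch, relying on the definitions above and on the paper's print (the
symbols are arbitrary): the caller continues immediately, and the new process runs
whenever some processor dispatches it.

(create-process (lambda () (print 'child-says-hello)))
(print 'parent-carries-on)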
We can use create-process to implement a fork-join. The function fork takes
two functions of no arguments. Its result is to be the cons of their values. The
execution of the two functions is to proceed as two independent processes, and the
process which called fork is to be delayed until they both return.
(define fork (fn1 fn2)
  (catch caller
    (let ((one-done? nil)
          (ans1 nil)
          (ans2 nil)
          (mutex (busy-wait)))
      (let ((check-done
             (lambda (dummy)
               (block
                 (mutex 'P)
                 (if one-done?
                     (make-ready caller (cons ans1 ans2))
                     (asetq one-done? t))
                 (mutex 'V)))))
        (block
          (create-process
            (lambda () (check-done (asetq ans1 (fn1)))))
          (create-process
            (lambda () (check-done (asetq ans2 (fn2)))))
          (dispatch))))))
fork sets up four locals: one for each of the two answers, a flag called one-done?,
and a semaphore to control access to the flag. It creates the two daughter processes
and then dispatches, having saved its caller in the continuation caller. Each of
the two processes computes its answer, deposits it in the appropriate local variable,
and calls check-done. check-done uses mutex to obtain access to the flag one-done?,
which is initially nil. If its value is nil, then it is set to T. If its value is T, signifying
that the current call to check-done is the second one, then caller is moved to the
ready queue with argument (cons ans1 ans2).
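As a usage sketch (again in the paper's dialect, with add1 and sub1 as used earlier):

(fork (lambda () (add1 1))
      (lambda () (sub1 3)))   ; eventually resumes the caller with (2 . 2)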
8. Interrupts
What we have written so far is quite adequate for a non-preemptive scheduling
system [2]. If we wish to use a pre-emptive scheduling system (as we must if we
wish to use a single processor), then we must consider the handling of interrupts.
We shall consider only the problem of pre-emption of processes through timing
interrupts as non-preempting interrupts can be handled through methods analogous
to those in [3], [27].
We model a timing interrupt as follows: When a processor detects a timing
interrupt, the next identifier encountered in the course of its computation (say X)
will be executed as if it had been replaced by (preempt X). preempt is the name of the
interrupt-handling routine. If we believe, with [23], that a function application is
just a GO-TO with binding, then this model is quite close to the conventional model,
in which an interrupt causes control to pass to a predefined value of the program
counter. A very similar treatment of interrupts was developed independently for
use in the MIT/Xerox PARC SCHEME chip [12].
The simplest interrupt handler is:
(define preempt (x)
  (catch caller
    (block
      (make-ready caller x)
      (dispatch))))
With this interrupt handler, the process which the processor is executing is thrown
back on the ready queue, and the processor executes a dispatch to find something
else to do.
A complication that arises with pre-emptive scheduling is that interrupts must be
inhibited inside the kernel. This may be accomplished by changing the busy-wait
in the kernel to kernel-exclusion:
(define kernel-exclusion ()
  (let ((sem (busy-wait)))
    (lambda (msg)
      (cond
        ((eq msg 'P)
         (disable-preemption)
         (sem 'P))
        ((eq msg 'V)
         (sem 'V)
         (enable-preemption))))))
Note the order of the operations for V. The reverse order is wrong; an interrupt
might occur after the enable-preemption but before the (sem 'V), causing instant
deadlock. (We discovered this the hard way!)
Now, for the first time, we have introduced some operations which probably
should be privileged: disable-preemption and enable-preemption. 2 We can make
those privileged without changing the architecture of the machine by introducing
a read-loop like:
(define user-read-loop ()
  (let ((disable-preemption
         (lambda () (error 'protection-error)))
        (enable-preemption
         (lambda () (error 'protection-error))))
    (labels
      ((loop (lambda (dummy)
               (loop (print (eval (read)))))))
      (loop nil))))
This is intended to suggest the user's input is evaluated in an environment in
which disable-preemption and enable-preemption are bound to error-creating func-
tions. This is not actually the way the code is written in SCHEME, but we have
written it in this way to avoid dealing with the complications of SCHEME's version
of eval.
9. Conclusions and Issues
In this paper, we have shown how many of the most troublesome portions of the
"back end" of operating systems may be written simply using an applicative language
with catch. In the course of doing so, we have drawn some conclusions in
three categories: operating system kernel design, applicative languages, and language
design in general.
For operating systems, this work answers in part Brinch Hansen's call to simplify
the kernel [3]. Because all of the scheduling apparatus except the ready queue
has been moved out of the kernel, the kernel becomes smaller, is called less often,
and therefore becomes less of a bottleneck. By passing messages to class instances
(functions) instead of passing them between processes, we avoid the need for individuation
of processes, and thereby avoid the need to maintain process tables, etc.,
further reducing the size of the kernel.
This is not meant to imply that we have solved all the problems associated with
system kernels. Problems of storage allocation and performance are not addressed.
In the areas of process saving and protection, however, the approach discussed here
seems to offer considerable advantages.
In the area of applicative languages, our work seems to address the issue of "state."
A module is said to have "state" if different calls on that module with identical
arguments may give different results at different times in the computation. Another
way of describing this phenomenon is that the model is "history-dependent." (This
is not to be confused with issues of non-determinism). If an object does not have
state, then it should never matter whether two processes are dealing with the same
object or with two copies of it. For processes to communicate, however, they
must be talking to the same module, not just to two copies of it. For instance,
all modules must communicate with the same kernel, not just with two or more
modules produced by calls on gen-kernel. Therefore, the kernel and similar modules
must have state; they must have uses of asetq in their code.
This seems to us to be an important observation. It means that we must come
to grips with the concept of the state if we are to deal with the semantics of
parallelism. This observation could not have been made in the context of imperative
languages, where every module has state. Only in an applicative context, where
we can distinguish true state from binding (or internal state), could we make this
distinction. 3
A related issue is the use of call-by-value. A detailed semantics of SCHEME,
incorporating the Algol call-by-value mechanism, would give an unambiguous account
of when two modules were the "same," and thus also give an account of
when two modules share the same state. Such an account is necessary to explain
the use of asetq in our programs and to determine which data is private and which
is shared (as in the last lines of (kernel 'dispatch)). In such a description, we
would find that restarting a continuation restores the environment (which is a map
from identifiers to L-values), but does not undo changes in the global state (the
map from L-values to R-values) which is altered by asetq. Nonetheless, we find
this account unsatisfying, because its systematic introduction of a global state at
every procedure call seems quite at odds with the usual state-free picture of an applicative
program. We find it unpleasant to say that we pass parameters by worth
(i.e., without copying), except when we need to think harder about the program.
In this regard, we commend to applicative meta-programmers a closer study of
denotational semantics. Descriptive denotational semantics, as expounded in Chapter
1 of [17] or in [9], provides the tools to give an accurate description of what
actually happens when a parameter is passed. There are, however, some measures
which would help alleviate the confusion. For example, we could use a primitive
cell operation in place of the unrestricted use of asetq. Then all values could be
passed by worth (R-value); L-values would arise only as denotations of cells, and
explicit dereferencing would be required. Such an approach is taken, in various
degrees, in PLASMA [10], FORTH [13], and BLISS [28]. 4 Also, John Reynolds and
one of his students are investigating semantics which do not rely on a single global
state [personal communication]. 5
Last, we essay some ideas about the language design process. Our choice to work
in the area of applicative languages was motivated in part by Minsky's call for the
separation of syntax from semantics in programming [18]. We have attempted to
home in on the essential semantic ideas in multiprogramming. By "semantic" we
do not simply mean those ideas which are expressible in denotational semantics,
though surely the use of denotational semantics has exposed and simplified the basic
ideas in programming in general. We add to these ideas some basic operational
knowledge about how one goes from semantics to implementations (e.g., [21]) and
some additional operational knowledge not expressed in the "formal semantics" at
all, e.g., our treatment of interrupts.
Only after we have a firm grasp on these informal semantic ideas should we begin
to consider syntax. Some syntax is for human engineering-replacing parentheses
and positional structure with grammars and keywords. Other syntax may be introduced
to restrict the class of run-time structures which are needed to support the
language. The design of RUSSELL [7] is a good example of this paradigm. One
spectacular success which may be claimed for this approach is that of PASCAL,
which took the well-understood semantics of ALGOL and introduced syntactic restrictions
which considerably simplified the run-time structure.
In our case, we should consider syntactic restrictions which will allow the use of
sequential structures to avoid spending all one's time garbage-collecting the stack.
Other clever data structures for the run-time stack should also be considered. Another
syntactic restriction which might be desirable is one which would prevent a
continuation from being restarted more than once.
Any language or language proposal must embody a trade-off between generality
(sometimes called "functionality") and efficiency. By considering complete generality
first, we may more readily see where the trade-offs may occur, and what is
lost thereby. Unfortunately, the more typical approach to language design is to
start with a given run-time structure (or, worse yet, a syntactic proposal). When
the authors realize that some functionality is lacking, they add it by introducing a
patch. By introducing the generality and cleanness first, and then compromising
for efficiency, one seems more likely to produce clean, small, understandable, and
even efficient languages.
Notes
1. Though catch and call/cc are clearly interdefinable, J and call/cc differ importantly in
details; see Hayo Thielecke, "An Introduction to Landin's 'A Generalization of Jumps and
Labels'," Higher-Order and Symbolic Computation, 11(2), pp. 117-123, December 1998.
2. These were additional primitives that were added to the Scheme 3.1 interpreter.
3. This paragraph grew out of conversations I had had with Carl Hewitt over the nature of object
identity. I had objected that Hewitt's notion of object identity in a distributed system required
some notion of global state (C. Hewitt and H. G. Baker, Actors and Continuous Functionals,
in E. J. Neuhold (ed.) Formal Descriptions of Programming Concepts, pages 367-390. North
Holland, Amsterdam, 1978; at page 388). This is an issue that remains of interest in the
generation of globally-unique identifiers for use in large distributed systems such as IP, DCOM
or the World-Wide Web.
4. This approach was of course adopted in ML. At the time, changing Scheme in this way was at
least conceivable, and we seriously considered it for the Indiana Scheme 84 implementation. After
the Revised 3 Report in 1984, such a radical change became impossible. Sussman and Steele
now list this as among the mistakes in the design of Scheme (G.J. Sussman and G.L. Steele
Jr., The First Report on Scheme Revisited, Higher-Order and Symbolic Computation 11(2),
pp.
5. I am not sure to what this refers. My best guess is that it refers to his work with Oles on stack
semantics (J. C. Reynolds, "The Essence of Algol," in J. W. deBakker and J. C. van Vliet,
eds., Algorithmic Languages, pages 345-372. North Holland, Amsterdam, 1981).
--R
Synchronization in actor systems.
Operating Systems Principles.
The Architecture of Concurrent Programs.
Distributed processes: A concurrent programming concept.
The Calculi of Lambda-Conversion
Hierarchical program structures.
Data types
Go to statement considered harmful.
The Denotational Description of Programming Languages.
Viewing control structures as patterns of passing messages.
Monitors: An operating system structuring concept.
The SCHEME-79 chip
FORTH for microcomputers.
A correspondence between ALGOL 60 and Church's lambda-notation: Part I
The next 700 programming languages.
A Theory of Programming Language Semantics.
Form and content in computer science.
Protection in programming languages.
Definitional interpreters for higher-order programming languages
The Logical Design of Operating Systems.
LAMBDA: The ultimate declarative.
The art of the interpreter
The revised report on SCHEME.
SCHEME version 3.1 reference manual.
Modula: a language for modular multiprogramming.
BLISS: A language for systems programming
--TR
--CTR
Edoardo Biagioni , Robert Harper , Peter Lee, A Network Protocol Stack in Standard ML, Higher-Order and Symbolic Computation, v.14 n.4, p.309-356, December 2001
Manuel Serrano , Frédéric Boussinot , Bernard Serpette, Scheme fair threads, Proceedings of the 6th ACM SIGPLAN international conference on Principles and practice of declarative programming, p.203-214, August 24-26, 2004, Verona, Italy
Zena M. Ariola , Hugo Herbelin , Amr Sabry, A type-theoretic foundation of continuations and prompts, ACM SIGPLAN Notices, v.39 n.9, September 2004 | continuations;operating systems;multiprocessing;scheme;language design |
609202 | Combining Program and Data Specialization. | Program and data specialization have always been studied separately, although they are both aimed at processing early computations. Program specialization encodes the result of early computations into a new program; while data specialization encodes the result of early computations into data structures.In this paper, we present an extension of the Tempo specializer, which performs both program and data specialization. We show how these two strategies can be integrated in a single specializer. This new kind of specializer provides the programmer with complementary strategies which widen the scope of specialization. We illustrate the benefits and limitations of these strategies and their combination on a variety of programs. | Introduction
Program and data specialization are both aimed at performing computations which depend on
early values. However, they differ in the way the results of early computations are encoded: on the
one hand, program specialization encodes these results in a residual program, and on the other
hand, data specialization encodes these results in data structures.
More precisely, program specialization performs a computation when it only relies on early data,
and inserts the textual representation of its result in the residual program when it is surrounded
by computations depending on late values. In essence, it is because a new program is being
constructed that early computations can be encoded in it. Furthermore, because a new program
is being constructed, it can be pruned, that is, the residual program only corresponds to the
control flow which could not be resolved given the available data. As a consequence, program
specialization optimizes the control flow since fewer control decisions need to be taken. However,
because it requires a new program to be constructed, program specialization can lead to code
explosion if the size of the specialization values is large. For example, this situation can occur
when a loop needs to be unrolled and the number of iterations is high. Not only does code
explosion cause code size problems, but it also degrades the execution time of the specialized
program dramatically because of instruction cache misses.
The dual notion to specializing programs is specializing data. This strategy consists of splitting
the execution of a program into two phases. The first phase, called the loader, performs the early
computations and stores their results in a data structure called a cache. Instead of generating
a program which contains the textual representation of values, data specialization generates a
program to perform the second phase: it only consists of the late computations and is parameterized
with respect to the result of early computations, that is, the cache. The corresponding program is
named the reader. Because the reader is parameterized with respect to the cache, it is shared by
all specializations. This strategy fundamentally contrasts with program specialization because it
decouples the result of early computations and the program which exploits it. As a consequence,
as the size of the specialization problem increases, only the cache parameter increases, not the
program. In practice, data specialization can handle problem sizes which are far beyond the reach
of program specialization, and thus opens up new opportunities as demonstrated by Knoblock and
Ruf for graphics applications [7, 4]. However, data specialization, by definition, does not optimize
the control flow: it is limited to performing the early computations which are expensive enough to
be worth caching. Because the reader is valid for any cache it is passed, an early control decision
leading to a costly early computation needs to be part of the loader as well as the reader: in
the loader, it decides whether the costly computation must be cached; in the reader, the control
decision determines whether the cache needs to be looked up. In fact, data specialization does
not apply to programs whose bottlenecks are limited to control decisions. A typical example of
this situation is interpreters for low-level languages: the instruction dispatch is the main target of
specialization. For such programs, data specialization can be completely ineffective.
Perhaps the apparent difference in the nature of the opportunities addressed by program and
data specialization has led researchers to study these strategies in isolation. As a consequence, no
attempt has ever been made to integrate both strategies in a specializer; further, there exist no
experimental data to assess the benefits and limitations of these specialization strategies.
In this paper, we study the relationship between program and data specialization with respect
to their underlying concepts, their implementation techniques and their applicability. More
precisely, we study program and data specialization when they are applied separately, as well as
when they are combined (Section 2). Furthermore, we describe how a specializer can integrate
both program and data specialization: what components are common to both strategies and what
components differ. In practice, we have achieved this integration by extending a program special-
izer, named Tempo, with the phases needed to perform data specialization (Section 3). Finally,
we assess the benefits and limitations of program and data specialization based on experimental
data collected by specializing a variety of programs exposing various features (Section 4).
2 Concepts of Program and Data Specialization
In this section, the basic concepts of both program and data specialization are presented. The
limitations of each strategy are identified and illustrated by an example. Finally, the combination
of program and data specialization is introduced.
2.1 Program Specialization
The partial evaluation community has mainly been focusing on specialization of programs. That
is, given some inputs of a program, partial evaluation generates a residual program which encodes
the result of the early computations which depend on the known inputs. Although program
specialization has successfully been used for a variety of applications (e.g., operating systems
[10, 11], scientific programs [8, 12], and compiler generation [2, 6]), it has shown some limitations.
One of the most fundamental limitations is code explosion which occurs when the size of the
specialization problem is large. Let us illustrate this limitation using the procedure displayed
on the left-hand side of Figure 1. In this example, stat is considered static, whereas dyn and
d are dynamic. Static constructs are printed in boldface. Assuming the specialization process
unrolls the loop, variable i becomes static and thus the gi procedures (i.e., g1, g2 and g3) can
be fully evaluated. Even if the gi procedures correspond to non-expensive computations, program
specialization still optimizes procedure f in that it simplifies its control flow: the loop and one
of the conditionals are eliminated. A possible specialization of procedure f is presented on the
right-hand side of Figure 1.
However, beyond some number of iterations, the unrolling of a loop, and the computations it
enables, do not pay for the size of the resulting specialized program; this number depends on the
processor features. In fact, as will be shown later, the specialized program can even get slower
than the unspecialized program. The larger the size of the residual loop body, the earlier this
phenomenon happens.
Figure 1: Program specialization. (a) Source program void f(int stat, int dyn, int d[ ]); (b) specialized program void f-1(int dyn, int d[ ]).
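To make the contrast concrete, here is a small illustrative sketch of the kind of source/residual pair Figure 1 depicts. It is not the paper's listing: E_stat, E_dyn and the gi bodies are hypothetical stand-ins (underscores replace the hyphens used in the figure), and only the shape of the transformation matters.

static int E_stat(int s, int j) { return (s + j) % 2 == 0; }  /* early test */
static int E_dyn(int d, int j)  { return d > j; }             /* late test */
static int g1(int j) { return j + 1; }
static int g2(int j) { return j * 2; }
static int g3(int j) { return j - 3; }

/* Source: stat is static; dyn and d are dynamic. */
void f(int stat, int dyn, int d[])
{
  for (int j = 0; j < stat; j++)
    if (E_stat(stat, j))
      if (E_dyn(dyn, j))
        d[j] += g1(j) + g2(j) + g3(j);
}

/* A possible specialization for stat == 2: the loop and the static test are
   eliminated, and the gi calls are folded into constants. */
void f_1(int dyn, int d[])
{
  if (E_dyn(dyn, 0)) d[0] += -2;   /* j == 0: E_stat holds, g1+g2+g3 == -2 */
  /* j == 1: E_stat(2, 1) is false, so nothing is emitted */
}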
For domains like graphics and scientific computing, some applications are beyond the reach of
program specialization because the specialization opportunities rely on very large data or iteration
bounds which would cause code explosion if loops traversing these data were unrolled. In this
situation, data specialization may apply.
2.2 Data Specialization
In the late eighties, an alternative to program specialization, called data specialization, was introduced
by Barzdins and Bulyonkov [1] and further explored by Malmkjær [9]. Later, Knoblock and Ruf
studied data specialization for a subset of C and applied it to a graphics application [7].
Data specialization is aimed at encoding the results of early computations in data structures,
not in the residual program. The execution of a program is divided into two stages: a loader first
executes the early computations and saves their result in a cache. Then, a reader performs the
remaining computations using the result of the early computations contained in the cache. Let us
illustrate this process by an example displayed in Figure 2. On the left-hand side of this figure, a
procedure f is repeatedly invoked in a loop with a first argument (c) which does not vary (and is thus
considered early); its second argument, the loop index (k), varies at each iteration. Procedure f is
also passed a different vector at each iteration, which is assumed to be late. Because this procedure
is called repeatedly with the same first argument, data specialization can be used to perform the
computations which depend on it. In this context, many computations can be performed, namely
the loop test, E-stat and the invocation of the gi procedures. Of course, caching an expression
assumes that its execution cost exceeds the cost of a cache reference. Measurements have shown
that caching expressions which are too simple (e.g. a variable occurrence or simple comparisons)
actually cause the resulting program to slow down.
In our example, let us assume that, like the loop test, the expression E-stat is not
expensive enough to be cached. If, however, the gi procedures are assumed to consist of expensive
computations, their invocations need to be examined as potential candidates for caching. Since the
first conditional test E-stat is early, it can be put in the loader so that whenever it evaluates to
true, the invocation of procedure g1 can be cached; similarly, in the reader, the cache is looked
up only if the conditional test evaluates to true. However, the invocation of procedure g2 cannot
be cached according to Knoblock and Ruf's strategy, since it is under dynamic control and thus
caching its result would amount to performing speculative evaluation [7]. Finally, the invocation
of procedure g3 needs to be cached since it is unconditionally executed and its argument is early.
The resulting loader and reader for procedure f are presented on the right-hand side of Figure 2,
as well as their invocations.
Figure 2: Data specialization. (a) Source program: void f(int stat, int dyn, int d[ ]), invoked repeatedly as f(c, k, w[k]) with extern int w[N][M]; (b) specialized program: a loader f-load(stat, cache[ ]) and a reader f-read(stat, dyn, d[ ], cache[ ]) sharing a struct data-cache (fields val1 and val3), invoked as f-load(c, cache) followed by f-read(c, k, w[k], cache) inside the loop.
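The following is a minimal, purely illustrative sketch of the loader/reader split that Figure 2 describes; it is not Tempo's output. The helpers E_stat, E_dyn, g1, g2, g3 and the bound N are hypothetical stand-ins (written with underscores instead of the figure's hyphens), chosen only so the fragment compiles.

#define N 8
static int E_stat(int s, int j) { return (s + j) % 2; }   /* early test */
static int E_dyn(int d, int j)  { return (d + j) % 2; }   /* late test */
static int g1(int s, int j) { return s * j; }             /* expensive, early */
static int g2(int d, int j) { return d + j; }             /* late */
static int g3(int s, int j) { return s - j; }             /* expensive, early */

struct data_cache { int val1; int val3; };

/* Loader: runs once per specialization; performs the early computations
   and stores their results in the cache. */
void f_load(int stat, struct data_cache cache[])
{
  for (int j = 0; j < N; j++) {
    if (E_stat(stat, j))
      cache[j].val1 = g1(stat, j);   /* cached under early control */
    cache[j].val3 = g3(stat, j);     /* cached unconditionally */
  }
}

/* Reader: shared by all specializations; re-evaluates the early test that
   guards a cache entry, reads g1 and g3 from the cache, and keeps g2 as a
   direct call because it is under dynamic control (no speculative evaluation). */
void f_read(int stat, int dyn, int d[], const struct data_cache cache[])
{
  for (int j = 0; j < N; j++) {
    if (E_stat(stat, j))
      d[j] += cache[j].val1;
    if (E_dyn(dyn, j))
      d[j] += g2(dyn, j);
    d[j] += cache[j].val3;
  }
}

A caller would invoke f_load(c, cache) once and then f_read(c, k, w[k], cache) on every iteration, mirroring the calls shown in the figure.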
To study the limitations of data specialization, consider a program where computations to be
cached are not expensive enough to amortize the cost of a memory reference. In our example, assume
the gi procedures correspond to such computations. Then, only the control flow of procedure f
remains a target for specialization.
2.3 Combining Program and Data Specialization
We have shown the benefits and limitations of both program and data specialization. The main
parameters to determine which strategy fits the specialization opportunities are the cost of the
early computations and the size of the specialization problem. Obviously, within the same program
(or even a procedure), some fragments may require program specialization and others data
specialization. As a simple example consider a procedure which consists of two nested loops. The
innermost loop may require few iterations and thus allow program specialization to be applied.
In contrast, the outermost loop may iterate over a vector whose size is very large; this may prevent
program specialization from being applied, but not data specialization from exploiting some
opportunities.
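As an illustrative sketch of this nested-loop shape (hypothetical code, not taken from the paper; OUTER, INNER, and h are invented names), the outer loop is handled by data specialization while the small inner loop is unrolled by program specialization:

#define OUTER 100000                 /* early bound, too large to unroll */
#define INNER 3                      /* early bound, small enough to unroll */
static double h(int i) { return i * 0.5; }   /* expensive early computation */

/* Original procedure: out and late are dynamic. */
void p(double out[], double late)
{
  for (int i = 0; i < OUTER; i++)
    for (int k = 0; k < INNER; k++)
      out[i] += h(i) * late + k;
}

/* Loader produced by data specialization: caches h(i) for every i. */
void p_load(double cache[])
{
  for (int i = 0; i < OUTER; i++)
    cache[i] = h(i);
}

/* Reader after additionally program-specializing (unrolling) the inner loop. */
void p_read(double out[], double late, const double cache[])
{
  for (int i = 0; i < OUTER; i++) {
    out[i] += cache[i] * late + 0;
    out[i] += cache[i] * late + 1;
    out[i] += cache[i] * late + 2;
  }
}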
Concretely, performing both program and data specialization can be done in a simple way. One
approach consists of doing data specialization first, and then applying the program specializer on
either the loader or the reader, or both. The idea is that code explosion may not be an issue in
one of these components; as a result, program specialization can further optimize the loader or
the reader by simplifying its control flow or performing speculative specialization. For example,
a reader may consist of a loop whose body is small; this situation may thus allow the loop to be
unrolled without causing the residual program to be too large. Applying a program specializer to
both the reader and the loader may be possible if the fragments of the program, which may cause
code explosion, are made dynamic.
Alternatively, program specialization can be performed prior to data specialization. This
combination requires program specialization to be applied selectively so that only fragments which
do not cause code explosion are specialized. Then, the other fragments offering specialization
opportunities can be processed by data specialization.
As is shown in Section 4, in practice combining both program and data specialization allows
better performance than pure data specialization and prevents the performance gain from dropping
as quickly as in the case of program specialization as the problem size increases.
3 Integrating Program and Data Specialization
We now present how Tempo is extended to perform data specialization. To do so, let us briefly
describe its features which are relevant to both data specialization and the experiments presented
in the next section.
3.1 Tempo
Tempo is an off-line program specializer for C programs. As such, specialization is preceded by
a preprocessing phase. This phase is aimed at computing information to guide the specialization
process. The main analyses of Tempo's preprocessing phase are an alias analysis, a side-effect
analysis, a binding-time analysis and an action analysis. The first two analyses are needed because
of the imperative nature of the C language, whereas the binding-time analysis is typical of any
off-line specializer. The action analysis is more unusual: it computes the specialization actions
(i.e., the program transformations) to be performed by the specialization phase.
The output of the preprocessing phase is a program annotated with specialization actions.
Given some specialization values, this annotated program can be used by the specialization phase
to produce a residual program at compile time, as is traditionally done by partial evaluators. In
addition, Tempo can specialize a program at run time. Tempo's run-time specializer is based on
templates which are efficiently compiled by standard C compilers [3, 12].
Tempo has been successfully used for a variety of applications ranging from operating systems
[10, 11] to scientific programs [8, 12].
3.2 Extending Tempo with Data Specialization
Tempo includes a binding-time analysis which propagates binding times forward and backward.
The forward analysis aims at determining the static computations; it propagates binding times
from the definitions to the uses of variables. The backward analysis performs the same propagation
in the opposite direction; when uses of a variable are both static and dynamic, its definition is
annotated static&dynamic. This annotation indicates that the definition should be evaluated both
at specialization time and run time. This process, introduced by Hornof et al., allows a binding-time
analysis to be more accurate; such an analysis is said to be use sensitive [5]. When a definition
is static&dynamic and occurs in a control construct (e.g., while), this control construct becomes
static&dynamic as well. The specialized program is the code in which constructs and expressions
annotated static are evaluated at specialization time and their results are introduced in the residual
code, and in which constructs and expressions annotated dynamic or static&dynamic are rebuilt in the
residual code.
To perform data specialization, an analysis is inserted between the forward analysis and the backward
analysis. In essence, this new phase identifies the frontier terms, that is, static terms occurring
in a dynamic (or static&dynamic) context. If the cost of a frontier term is below
a given threshold (defined as a parameter of the data specializer), it is forced to dynamic (or
static&dynamic).
Furthermore, because data specialization does not perform speculative evaluation, static computations
which are under dynamic control are made dynamic.
Once these adjustments are done, the backward phase of the binding-time analysis then determines
the final binding times of the program. Later in the process, the static computations are
included in the loader and the dynamic computations in the reader; the frontier terms are cached.
The rest of our data specializer is the same as Knoblock and Ruf's.
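As a purely illustrative sketch of the effect of the threshold (hypothetical code, not Tempo's output; g, h_load, h_read, and the single-entry cache are invented names), an expensive frontier term is cached while a cheap one is forced dynamic and recomputed by the reader:

/* Expensive early computation: worth caching. */
static int g(int s)
{
  int r = 0;
  for (int k = 0; k < 1000; k++)
    r += s ^ k;
  return r;
}

/* Source: s is early; dyn and d are late.
   Frontier terms: g(s) (expensive) and (s + 1) (cheap). */
void h(int s, int dyn, int d[])
{
  d[dyn] += g(s) + (s + 1);
}

/* Loader: only the frontier term above the cost threshold is cached. */
void h_load(int s, int cache[1])
{
  cache[0] = g(s);
}

/* Reader: the cheap frontier term was forced dynamic, so it is simply
   recomputed; the expensive one is read from the cache. */
void h_read(int s, int dyn, int d[], const int cache[1])
{
  d[dyn] += cache[0] + (s + 1);
}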
4 Performance Evaluation
In this section, we compare the performance obtained by applying different specialization strategies
on a set of programs. This set includes several scientific programs and a system program.
4.1 Overview
Machine and Compiler. The measurements presented in this paper were obtained using a Sun
Model 170 with 448 megabytes of main memory, running SunOS version 5.5.1.
Times were measured using the Unix system call getrusage and include both "user" and "system"
times.
Figure 3 displays the speedups and the size increases of compiled code obtained for different
specialization strategies. For each benchmark, we give the program invariant used for specialization
and an approximation of its time complexity. The code sources are included in the appendices.
All the programs were compiled with gcc -O2. Higher degrees of optimization did not make a
difference for the programs used in this experiment.
Specialization strategies. We evaluate the performance of five different specialization methods.
The speedup is the ratio between the execution times of the original program and the
specialized one. The size increase is the ratio between the size of the specialized program and the
original one. The data displayed in Figure 3 correspond to the behavior of the following specialization
strategies:
• PS-CT: the program is program specialized at compile time.
• PS-RT: the program is program specialized at run time.
• DS: the program is data specialized.
• DS+PS-CT: the program is data specialized and program specialized at compile time. The
loops which manipulate the cache (for data specialization) are kept dynamic to avoid code
explosion.
• DS+PS-RT: the program is data specialized and program specialized at run time. As in
the previous strategy, the loops which manipulate the cache are kept dynamic to avoid code
explosion.
Source programs. We consider a variety of source programs: a one-dimensional fast Fourier
transformation (FFT), a Chebyshev approximation, a Romberg integration, a Smirnov integration,
a cubic spline interpolation and a Berkeley packet filter (BPF). Given the specialization strategies
available, these programs can be classified as follows.
Control flow intensive. A program which mainly exposes control flow computations; data flow
computations are inexpensive. In this case, program specialization can improve performance,
whereas data specialization does not because there are no expensive calculations to cache.
Data flow intensive. A program which is only based on expensive data flow computations. As
a result, program specialization at compile time as well as data specialization can improve
the performance of such a program.
Control and data flow intensive. A program which contains both control flow computations
and expensive data flow computations. Such a program is a good candidate for program
specialization at compile time when applied to small values, and well-suited for data specialization
when applied to large values.
We now analyze the performance of five specialization methods in turn on the benchmark
programs.
4.2 Results
Data specialization can be executed at compile time or at run time. At run time, the loader of
the cache is executed before the execution of the specialized program, while at compile time, the
cache is constructed before the compilation. The cache is then used by the specialized program
during the execution. For all programs, data specialization yields a greater speedup than program
specialization at run time. The combination of these two specialization strategies does not yield
a better result.
In this section, we characterize different opportunities of specialization to illustrate our method
in the three categories of programs.
4.2.1 Program Specialization
We analyze two programs where performance is better with program specialization: the Berkeley
packet filter (BPF), which interprets a packet with respect to an interpreter program, and the cubic
spline interpolation, which approximates a function using a third degree polynomial equation.
Characteristics: For the BPF, the program consists exclusively of conditionals whose tests and
branches contain inexpensive expressions. For the cubic spline interpolation, the program consists
of small loops whose small body can be evaluated in part. Concretely, a program which mainly
depends on the control flow graph and whose leaves contain few, but partially reducible, calculations
is a good candidate for program specialization. By program specialization, the control flow graph
is reduced and some calculations are eliminated. Since there is no static calculation expensive
enough to be efficiently cached by data specialization, the specialized program is mostly the
same as the original one. For this kind of program, only program specialization gives significant
improvements: it reduces the control flow graph and it produces a small specialized program.
Applications: The BPF (Appendix F) is specialized with respect to a program (of size n). It
mainly consists of conditionals; its time complexity is linear in the size of the program and
it does not contain expensive data computations. As the program does not contain any loop, the
size of the specialized program is mostly the same as the original one. In Figure 3-F, program
specialization at compile time and at run time yield a good speedup, whereas data specialization
only improves performance marginally. The combination of program and data specialization does
not improve the performance further.
The cubic spline interpolation (Appendix E) is specialized with respect to the number of points
(n) and their x-coordinates. It contains three singly nested loops; its time complexity is O(n). In
the first two loops, more than half of the computations of each body can be completely evaluated
or cached by specialization, including real multiplications and divisions. Nevertheless, there is no
expensive calculation to cache, and data specialization does not improve performance significantly.
The unrolled loop does not really increase the code size because of the small complexity of the
program and the small body of the loop. As a consequence, for each number of points n, the
speedup of each specialization barely changes. In Figure 3-E, program specialization at compile
time produces a good speedup, whereas program specialization at run time does not improve
performance. Data specialization obtains a minor speedup because the cached calculations are
not expensive.

Figure 3: Program, data and combined specializations
4.2.2 Program Specialization or Data Specialization
We now analyze two programs where program and data specialization yield identical performance:
the Chebyshev polynomial approximation, which approximates a continuous function in a known
interval, and the Smirnov integration, which approximates the integral of a function on an interval
using estimations.
Characteristics: These two programs only contain loops and expensive calculations in doubly
nested loops. As for the cubic spline interpolation (Section 4.2.1), more than half of the computations
of each loop body can be completely evaluated or cached by specialization. In contrast with
cubic spline interpolation, the static calculations in Chebyshev and Smirnov are very expensive
and allow data specialization to yield major improvements. For the combined specialization, data
specialization is applied to the innermost loop and program specialization is applied to the rest
of the program. For this kind of program, program and data specialization both give significant
improvements. However, for the same speedup, the code size of the program produced by program
specialization is a hundred times larger than the specialized program using data specialization.
Applications: The Chebyshev approximation (Appendix C) is specialized with respect to the
degree (n) of the generated polynomial. This program contains two calls to the trigonometric
function cos: one of them in a singly nested loop and the other call in a doubly nested loop.
Since this program mainly consists of data flow computations, program specialization and data
specialization obtain similar speedups (see Figure 3-C).
The Smirnov integration (Appendix D) is specialized with respect to the number of iterations
(n, m). The program contains a call to the function fabs which returns the absolute value of its
parameter. This function is contained in a doubly nested loop and the time complexity of this
program is O(mn). As in the case of Chebyshev, program and data specialization produce similar
speedups (see Figure 3-D).
4.2.3 Combining Program Specialization and Data Specialization
Finally, we analyze two programs where performance improves using program specialization when
values are small, and data specialization when values are large: the FFT and the Romberg in-
tegration. The FFT converts data from the time domain to a frequency domain. The Romberg
integration approximates the integral of a function on an interval using estimations.
Characteristics: These two programs contain several loops and expensive data flow computations
in doubly nested loops; however, more than half of the computations of each loop body cannot
be evaluated. Beyond some number of iterations, when program specialization unrolls these
loops, it increases the code size of the specialized program and thereby degrades performance. The
specialized program becomes slower because of its code size. Furthermore, beyond some problem
size, the specialization process cannot produce the program because of its size. In contrast, data
specialization only caches the expensive calculations, does not unroll loops, and improves perfor-
mance. The result is that the code size of the program produced by program specialization is a
hundred times larger than the specialized program using data specialization, for a speedup gain of
20%. The combined specialization delays the occurrence of code explosion. Data specialization is
applied to the innermost loop, which contains the cache computations, and program specialization
is applied to the rest of the program.
Applications: The FFT (Appendix A) is specialized with respect to the number of data points
(N). It contains ten loops with several degrees of nesting. One of these loops
contains four calls to trigonometric functions, which can be evaluated by program
specialization or cached by data specialization. Due to the elimination of these expensive library
calls, program specialization and data specialization produce significant speedups (see Figure 3-A).
However, in the case of program specialization, loop unrolling degrades performance. In contrast,
data specialization produces a stable speedup regardless of the number of data points. When N
is smaller than 512, data specialization does not obtain a better result than program
specialization. However, when N is greater than 512, program specialization becomes impossible
to apply because of the specialization time and the size of the residual code. In this situation,
data specialization still gives better performance than the unspecialized program. Because this
program also contains some conditionals, combined specialization, where the innermost loop is not
unrolled, improves performance more than data specialization alone.
The Romberg integration (Appendix B) is specialized with respect to the number of iterations
(M) used in the approximation. The Romberg integration contains two calls to the costly function
intpow. It is called twice: once in a singly nested loop and another time in a doubly nested
loop. Because both specialization strategies eliminate these expensive library calls, the speedup
is consequently good. As for FFT, loop unrolling causes the program specialization speedup to
decrease, whereas the data specialization speedup still remains the same, even when M increases
(Figure 3-B).
5 Conclusion
We have integrated program and data specialization in a specializer named Tempo. Importantly,
data specialization can re-use most of the phases of an off-line program specializer.
Because Tempo now offers both program and data specialization, we have experimentally
compared both strategies and their combination. This evaluation shows that, on the one hand,
program specialization typically gives better speedup than data specialization for small problem
sizes. However, as the problem size increases, the residual program may become very large and often
slower than the unspecialized program. On the other hand, data specialization can handle large
problem sizes without much performance degradation. This strategy can, however, be ineffective
if the program to be specialized mainly consists of control flow computations. The combination
of both program and data specialization is promising: it can produce a residual program more
efficient than with data specialization alone, without dropping in performance as dramatically as
program specialization, as the problem size increases.
Acknowledgments
We thank Renaud Marlet for thoughtful comments on earlier versions of this paper, as well as the
Compose group for stimulating discussions.
A substantial amount of the research reported in this paper builds on work done by the authors
with Scott Thibault on the Berkeley packet filter and Julia Lawall on the Fast Fourier Transformation.
--R
Mixed computation and translation: Linearisation and decomposition of compilers.
Tutorial notes on partial evaluation.
Specializing shaders.
Partial Evaluation and Automatic Program Generation
Data specialization.
Faster Fourier transforms via automatic program specialization.
Program and data specialization: Principles
Fast, optimized Sun RPC using automatic program specialization.
Scaling up partial evaluation for optimizing the Sun commercial RPC protocol.
--TR
--CTR
Jung Gyu Park , Myong-Soon Park, Using indexed data structures for program specialization, Proceedings of the ASIAN symposium on Partial evaluation and semantics-based program manipulation, p.61-69, September 12-14, 2002, Aizu, Japan
Vytautas Štuikys , Robertas Damaševičius, Metaprogramming techniques for designing embedded components for ambient intelligence, Ambient intelligence: impact on embedded system design, Kluwer Academic Publishers, Norwell, MA,
Mads Sig Ager , Olivier Danvy , Henning Korsholm Rohde, On obtaining Knuth, Morris, and Pratt's string matcher by partial evaluation, Proceedings of the ASIAN symposium on Partial evaluation and semantics-based program manipulation, p.32-46, September 12-14, 2002, Aizu, Japan
Charles Consel , Julia L. Lawall , Anne-Françoise Le Meur, A tour of tempo: a program specializer for the C language, Science of Computer Programming, v.52 n.1-3, p.341-370, August 2004
Torben Amtoft , Charles Consel , Olivier Danvy , Karoline Malmkjær, The abstraction and instantiation of string-matching programs, The essence of computation: complexity, analysis, transformation, Springer-Verlag New York, Inc., New York, NY, 2002 | program specialization;data specialization;partial evaluation;program transformation;combining program |
609203 | Certifying Compilation and Run-Time Code Generation. | A certifying compiler takes a source language program and produces object code, as well as a certificate that can be used to verify that the object code satisfies desirable properties, such as type safety and memory safety. Certifying compilation helps to increase both compiler robustness and program safety. Compiler robustness is improved since some compiler errors can be caught by checking the object code against the certificate immediately after compilation. Program safety is improved because the object code and certificate alone are sufficient to establish safety: even if the object code and certificate are produced on an unknown machine by an unknown compiler and sent over an untrusted network, safe execution is guaranteed as long as the code and certificate pass the verifier.Existing work in certifying compilation has addressed statically generated code. In this paper, we extend this to code generated at run time. Our goal is to combine certifying compilation with run-time code generation to produce programs that are both fast and verifiably safe. To achieve this goal, we present two new languages with explicit run-time code generation constructs: Cyclone, a type safe dialect of C, and TAL/T, a type safe assembly language. We have designed and implemented a system that translates a safe C program into Cyclone, which is then compiled to TAL/T, and finally assembled into executable object code. This paper focuses on our overall approach and the front end of our system; details about TAL/T will appear in a subsequent paper. | Introduction
1.1 Run-time specialization
Specialization is a program transformation that optimizes a
program with respect to invariants. This technique has been
shown to give dramatic speedups on a wide range of appli-
cations, including aircraft crew planning programs, image
shaders, and operating systems [4, 11, 17]. Run-time specialization
exploits invariants that become available during the
execution of a program, generating optimized code on the
fly. Opportunities for run-time specialization occur when
dynamically changing values remain invariant for a period
of time. For example, networking software can be specialized
to a particular TCP connection or multicast tree.
Run-time code generation is tricky. It is hard to correctly
write and reason about code that generates code; it is not
obvious how to optimize or debug a program that has yet
to be generated. Early examples of run-time code generation
include self-modifying code, and ad hoc code generators
written by hand with a specific function in mind. These approaches
proved complicated and error prone [14].
More recent work has applied advanced programming
language techniques to the problem. New source languages
have been designed to facilitate run-time code generation
by providing the programmer with high-level constructs and
having the compiler implement the low-level details [15, 21,
22]. Program transformations based on static analyses are
now capable of automatically translating a normal program
into a run-time code generating program [6, 10, 12]. And
type systems can check run-time code generating programs
at compile time, ensuring that certain bugs will not occur
at run time (provided the compiler is correct) [22, 25].
These techniques make it easier for programmers to use
run-time code generation, but they do not address the concerns
of the compiler writer or end user. The compiler writer
still needs to implement a correct compiler, which is not easy even
for a language without run-time code generation. The end
user would like some assurance that executables will not
crash their machine, even if the programs generate code
and jump to it, behavior that usually provokes suspicion
in security-conscious users. We will address both of these
concerns through another programming language technique,
certifying compilation.
1.2 Certifying compilation
A certifying compiler takes a source language program and
produces object code and a "certificate" that may help to
show that the object code satisfies certain desirable properties
[16, 18]. A separate component called the verifier examines
the object code and certificate and determines whether
the object code actually satisfies the properties. A wide
range of properties can be verified, including memory safety
(unallocated portions of memory are not accessed), control
safety (code is entered only at valid entry points), and various
security properties (e.g., highly classified data does not appear
on low security channels). Often, these properties are
corollaries of type safety in an appropriate type system for
the object code.
In this paper we will describe a certifying compiler for
Cyclone, a high-level language that supports run-time code
generation. Cyclone is compiled into TAL/T, an assembly
language that supports run-time code generation. Cyclone
and TAL/T are both type safe; the certificates of our system
are the type annotations of the TAL/T output, and the
verifier is the TAL/T type checker.
As compiler writers, we were motivated to implement
Cyclone as a certifying compiler because we believe the approach
enhances compiler correctness. For example, we were
forced to develop a type system and operational semantics
for TAL/T. This provides a formal framework for reasoning
about object code that generates object code at run time.
Eventually, we hope to prove that the compiler transforms
type correct source programs into type correct object pro-
grams, an important step towards proving correctness for
the compiler. In the meantime, we use the verifier to type
check the output of the compiler, so that we get immediate
feedback when our compiler introduces type errors. As
others have noted [23, 24], this helps to identify and correct
compiler bugs quickly.
We also wanted a certifying compiler to address the safety
concerns of end users. In our system, type safety only depends
on the certificate and the object code, and not on the
method by which they are produced. Thus the end user does
not have to rely on the programmer or the Cyclone compiler
to ensure safety. This makes our system usable as the basis
of security-critical applications like active networks and
mobile code systems.
1.3 The Cyclone compiler
The Cyclone compiler is built on two existing systems, the
Tempo specializer [19] and the Popcorn certifying compiler
[16]. It has three phases, shown in Fig. 1.
The first phase transforms a type safe C program into
a Cyclone program that uses run-time code generation. It
starts by applying the static analyses of the Tempo system
to a C program and context information that specifies
which function arguments are invariant. The Tempo front
end produces an action-annotated program. We added an
additional pass to translate the action-annotated program
into a Cyclone run-time specializer.
The second phase verifies that the Cyclone program is
type safe, and then compiles it into TAL/T. To do this, we
modified the Popcorn compiler of Morrisett et al.; Popcorn
compiles a type safe dialect of C into TAL, a typed assembly
language. We extended the front end of Popcorn to handle
Cyclone programs, and modified its back end so that it
outputs TAL/T. TAL/T is TAL extended with instructions
for manipulating templates, code fragments parameterized
by holes, and their corresponding types. This compilation
phase not only transforms high-level Cyclone constructs into
low-level assembly instructions, but also transforms Cyclone
types into TAL/T types.
The third phase first verifies the type safety of the TAL/T
program. The type system of TAL/T ensures that the templates
are combined correctly and that holes are filled in
correctly. This paper describes our overall approach and
the front end in detail, but the details of TAL/T will appear
in a subsequent paper. Finally, the TAL/T program is
assembled and linked into an executable.
This three phase design offers a very flexible user interface
since it allows programs to be written in C, Cyclone,
or TAL/T. In the simplest case, the user can simply write
a C program (or reuse an existing program) and allow the
system to handle the rest. If the user desires more explicit
control over the code generation process, he may write (or
modify) a Cyclone program. If very fine-grain control is de-
sired, the user can fine-tune a TAL/T program produced by
Cyclone, or can write one by hand. Note that, since verification
is performed at the TAL/T level, the same program
safety properties are guaranteed in all three of these cases.

Figure 1: Overview of the Cyclone compiler: translate the action-annotated program to Cyclone; verify and compile the Cyclone program to TAL/T; verify, assemble, and link the TAL/T program into an executable.
1.4 Example
We now present an example that illustrates run-time code
generation and the phases of our Cyclone compiler. Fig. 2
shows a modular exponentiation function, mexp, written in
standard C. Its arguments are a base value, an exponent,
and a modulus. Modular exponentiation is often used in
cryptography; when the same key is used to encrypt or decrypt
several messages, the function is called repeatedly with
the same exponent and modulus. Thus mexp can benefit
from specialization.
To specialize the function with respect to a given exponent
and modulus, the user indicates that the two arguments
are invariant : the function will be called repeatedly
with the same values for the invariant arguments. In Fig. 2,
invariant arguments are shown in italics. A static analysis
propagates this information throughout the program, producing
an action-annotated program. Actions describe how
each language construct will be treated during specializa-
tion. Constructs that depend only on invariants can be evaluated
during specialization; these constructs are displayed
in italics in the second part of the figure.
To understand how run-time specialization works, it is
Figure 2: Specialization at the source level. (a) C code for int mexp(int base, int exp, int mod), with invariant arguments in italics; (b) action-annotated code, in which italicized constructs can be evaluated; (c) specialized source code int mexp sp(int base).
2: Specialization at the source level
helpful to first consider how specialization could be achieved
entirely within the source language. In our example, the
specialized function mexp sp of Fig. 2 is obtained from the
action-annotated mexp when the exponent is 10 and the
modulus is 1234. Italicized constructs of mexp, like the while
loop, can be evaluated (note that the loop test depends only
on the known arguments). Non-italicized constructs of mexp
show up in the source code of mexp sp. These constructs can
only be evaluated when mexp sp is called, because they depend
on the unknown argument.
We can think of mexp sp as being constructed by cutting
and pasting together fragments of the source code of mexp.
These fragments, or templates, are a central idea we used
in designing Cyclone. Cyclone is a type safe dialect of C
extended with four constructs that manipulate templates:
codegen, cut, splice, and fill. Using these constructs,
it is possible to write a Cyclone function that generates a
specialized version of mexp at run time.
Figure 3: A run-time specializer written in Cyclone. The generating function int (int) mexp gen(int exp, int mod) declares a local u and returns codegen(int mexp sp(int base) { int s, t; ... }), using cut, while, splice, and fill as described in the text below.
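For concreteness, here is an illustrative C sketch (not the paper's listing; names use underscores) of the source-level view that Figures 2 and 3 build on: a standard square-and-multiply mexp, and a plausible mexp_sp specialized to exp = 10 and mod = 1234, the values used above. The trailing dead assignment to t is kept because a straightforward unrolling would emit it.

/* A standard square-and-multiply modular exponentiation, in the shape the
   figure suggests (hypothetical reconstruction). */
int mexp(int base, int exp, int mod)
{
  int s, t, u;
  s = 1; t = base; u = exp;
  while (u != 0) {
    if ((u & 1) != 0)
      s = (s * t) % mod;
    t = (t * t) % mod;
    u = u >> 1;
  }
  return s;
}

/* A plausible specialization for exp == 10 (binary 1010) and mod == 1234:
   the loop is unrolled and the known tests and constants are folded in. */
int mexp_sp(int base)
{
  int s, t;
  s = 1; t = base;
  t = (t * t) % 1234;        /* u == 10: low bit 0, only the square */
  s = (s * t) % 1234;        /* u == 5:  low bit 1 */
  t = (t * t) % 1234;
  t = (t * t) % 1234;        /* u == 2:  low bit 0 */
  s = (s * t) % 1234;        /* u == 1:  low bit 1 */
  t = (t * t) % 1234;        /* dead, but emitted by a naive unrolling */
  return s;
}

For any base, mexp_sp(base) computes the same result as mexp(base, 10, 1234).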
In fact, our system can automatically generate a Cyclone
run-time specializer from an action-annotated pro-
gram. Fig. 3 shows the Cyclone specializer produced from
the action-annotated modular exponentiation function of
Fig. 2. The function mexp gen takes the two invariant arguments
of the original mexp function and returns the function
mexp sp, a version of mexp specialized to those arguments.
In the figure we have italicized code that will be evaluated
when mexp gen is called. Non-italicized code is template
code that will be manipulated by mexp gen to produce the
specialized function. The template code will only be evaluated
when the specialized function is itself called.
In our example, the codegen expression begins the code
generation process by allocating a region in memory for the
new function mexp sp, and copying the first template into
the region. This template includes the declarations of the
function, its argument base, and local variables s and t,
and also the initial assignments to s and t. Recall that this
template code is not evaluated during the code generation
process, but merely manipulated.
The cut statement marks the end of the template and introduces
code (italicized) that will be evaluated during code
generation: namely, the while loop. The while test and
body, including the conditional statement, splice state-
ments, and shift/assignment statement, will all be evalu-
ated. After the while loop finishes, the template following
the cut statement (containing return(s)) will be added to
the code generation region.
Evaluating a splice statement causes a template to be
appended to the code generation region. In our example,
each time the first splice statement is executed, an assignment
to s is appended. Similarly, each time the second
splice statement is executed, an assignment to t is ap-
pended. The effect of the while loop is thus to add some
number of assignment statements to the code of mexp sp;
exactly how many, and which ones, is determined by the
arguments of mexp gen.
A fill expression can be used within a template, and it
marks a hole in the template. When fill(e) is encountered
in a template, e is evaluated at code generation time to a
value, which is then used to fill the hole in the template.
In our example, fill is used to insert the known modulus
value into the assignment statements.
After code generation is complete, the newly generated
function mexp sp is returned as the result of codegen. It
takes the one remaining argument of mexp to compute its
result.
Cyclone programs can be evaluated symbolically to produce
specialized source programs, like the one in Fig. 2; this
is the basis of the formal operational semantics we give in
the appendix. In our implementation, however, we compile
Cyclone source code to object code, and we compile source
templates into object templates. The Cyclone object code
then manipulates object templates directly.
Our object code, TAL/T, is an extension of TAL with
instructions for manipulating object templates. Most of the
TAL/T instructions are x86 machine instructions; the new
template instructions are CGSTART, CGDUMP, CGFILL, CGHOLE,
TEMPLATE-START, and TEMPLATE-END. For example, the Cyclone
program in Fig. 3 is compiled into the TAL/T program
shown in Fig. 4. (We omitted some instructions to
save space, and added source code fragments in comments
to aid readability.)
The beginning of mexp gen contains x86 instructions for
adding the local variable u to the stack and assigning it the
value of the argument exp. Next, CGSTART is used to dynamically
allocate a code generation region, and the first template
is dumped (copied) into the region with the CGDUMP
instruction. Next, the body of the loop is unrolled. Each
Cyclone splice statement is compiled into a CGDUMP instruc-
tion, followed by instructions for computing hole values and
a CGFILL instruction for filling in the hole. At the end of the
mexp gen function, a final CGDUMP instruction outputs code
for the last template.
Next comes the code for each of the four templates. The
first template allocates stack space for local variables s and
t and assigns values to them. The second and third templates
come from the statements contained within the Cyclone
splice instructions, i.e., the multiplications, mods,
and assignments. The final template contains the code for
return(s). Each CGHOLE instruction introduces a place-holder
inside a template, filled in during specialization as
described above.
-mexp-gen:
MOV [ESP+0],EAX
. (1st template)
ifend$24:
MOV EAX,[ESP+0]
MOV [ESP+0],EAX
whileend$22:
CGEND EAX
RETN
(1st template)
MOV [ESP+0],EAX
.
TEMPLATE-START splc-beg$25,splc-end$26
.
TEMPLATE-END splc-end$26
.
TEMPLATE-END splc-end$32
TEMPLATE-START cut-beg$36,cut-end$37
ADD ESP,8
RETN
TEMPLATE-END cut-end$37
Figure 4: TAL/T code (excerpt; many instructions omitted)
Summary
We designed a system for performing type-safe run-time code
generation. It has the following parts:
- C to action-annotated program translation
- Action-annotated program to Cyclone translation
- Cyclone language design
- Cyclone verifier
- Cyclone to TAL/T compiler
- TAL/T language design
- TAL/T verifier
- TAL/T to assembly translation
- Assembler/Linker
For some parts, we were able to reuse existing software.
Specifically, we used Tempo for action-annotated program
generation, Microsoft MASM for assembling, and Microsoft
Visual C++ for linking. Other parts extend existing work.
This was the case for the Cyclone language, type system,
verifier, compiler, and the TAL/T language. Some components
needed to be written from scratch, including the translation
from an action-annotated program into a Cyclone pro-
gram, and the definition of the new TAL/T instructions in
terms of x86 instructions.
We've organized the rest of the paper as follows. In Section
2, we present the Cyclone language and its type sys-
tem. In Section 3, we give a brief description of TAL/T;
due to limited space we defer a full description to a later pa-
per. We give implementation details and initial impressions
about performance in Section 4. We discuss related work in
Section 5, and future work in Section 6. Our final remarks
are in Section 7.
2.1 Design decisions
Cyclone's codegen, cut, splice, and fill constructs were
designed to express a template-based style of run-time code
generation cleanly and concisely. We made some other design
decisions based on Cyclone's relationship to the C programming
language, and on implementation concerns.
First, because a run-time specializer is a function that
returns a function as its result, we need higher order types
in Cyclone. In C, higher order types can be written using
pointer types, but Cyclone does not have pointers. There-
fore, we introduce new notation for higher order types in
Cyclone. For example:
int (float,int) f(int x) { . }
This is a Cyclone function f that takes an int argument x,
and returns a function taking a float and an int and returning
an int. When f is declared and not defined, we use
int (int) (float,int) f;
Note that the type of the first argument appears to the left of
the remaining arguments. This is consistent with the order
the arguments would appear in C, using pointer types.
A second design decision concerns the extent to which
we should support nested codegen's. Consider the following
example.
int (float) (int) f(int x) {
return(codegen(
int (int) g(float y) {
return(codegen(
int h(int z) {
. body of h . }));
}));
}
Here f is a function that generates a function g using
codegen when called at run time. In turn, g will generate a
function h each time it is called. Nested codegen's are thus
used to generate code that generates code. The first version
of Tempo did not support code that generates code (though
it has recently been extended to do so), and some other sys-
tems, such as 'C [20, 21], also prohibit it. We decided to
permit it in Cyclone, because it adds little complication to
our type system or implementation. Nested codegen's are
not generated automatically in Cyclone, because of the version
of Tempo that we use, but the programmer can always
write them explicitly.
A final design decision concerns the extent to which Cyclone
should support lexically scoped bindings. In the last
example, the function h is nested inside of two other func-
tions, f and g. In a language with true lexical scoping, the
arguments and local variables of these outer functions would
be visible within the inner function: f, x, g, and y could be
used in the body of h.
We decided that we would not support full lexical scoping
in Cyclone. Our scoping rule is that in the body of
a function, only the function itself, its arguments and local
variables, and top-level variables are visible. This is in keeping
with C's character as a low-level, machine- and systems-oriented
language: the operators in the language are close
to those provided by the machine, and the cost of executing
a program is not hidden by high-level abstractions. We felt
that closures and lambda lifting, the standard techniques for
supporting lexical scoping, would stray too far from this. If
lexical scoping is desired, the programmer can introduce explicit
closures. Or, lexical scoping can be achieved using the
Cyclone features, for example, if y is needed in the body of
h, it can be accessed using fill(y).
2.2 Syntax and typing rules
Now we formalize a core calculus of Cyclone. Full Cyclone
has, in addition, structures, unions, arrays, void, break and
continue, and for and do loops.
We use x to range over variables, c to range over con-
stants, and b to range over base types. There is an implicit
signature assigning types to constants, so that we can speak
of "the type of c." Figure 5 gives the grammars for programs
modifiers m, types t, declarations d, sequences D
of declarations, function definitions F , statements s, and
expressions e.
We write t m for the type of a function from m to t (the result
type t followed by the modifier m, as in the concrete examples
above); for a declaration sequence D = t1 x1, . . . , tn xn, the
corresponding modifier is (t1, . . . , tn), so that a function
definition t x(D) s declares x to be of type t (t1, . . . , tn).
We sometimes consider a sequence of declarations D to be a
finite function from variables to types, mapping each xi to ti.
This assumes that the x i are
distinct; we achieve this by alpha conversion when neces-
sary, and by imposing some standard syntactic restrictions
on Cyclone programs (the names of a function and its formal
parameters must be distinct, and global variables have
distinct names).
We define type environments E to support Cyclone's scoping
rules: an environment is a sequence of frames of declarations,
each frame being outermost, visible, or hidden.
Figure 6: Cyclone environment functions (definitions of E vis, rtype(E), E + d, and the outermost/frame/hidden forms)
Figure 5: The grammar of core Cyclone: programs p, modifiers m, types t, declarations d ::= t x, declaration sequences D, function definitions F ::= t x(D) s, statements s (including e;, return e;, cut and splice s), and expressions e (including variables x, constants c, fill and codegen)
Informally, a type environment is a sequence of hidden and
visible frames, followed by an outermost frame that gives
the type of a top level function, the types of its local vari-
ables, and the types of global variables. The non-outermost
frames contain the type of a function that will be generated
at run time, and types for the parameters and local variables
of the function. If E is a type environment, we write E vis
for the visible declarations of E; E vis is defined in Figure 6.
Informally, the definition says that the declarations of the
first non-hidden frame and the global declarations are vis-
ible, and all other declarations are not visible. Note that
vis is a sequence of declarations, so we may write E vis (x)
for the type of x in E.
Figure
6 also defines two other important operations on
environments: rtype(E) is the return type for the function
of the first non-hidden frame, and E + d is the environment
obtained by adding declaration d to the local declarations
of the first non-hidden frame.
The typing rules of Cyclone are given in Figure 7. The
interesting rules are those for codegen, cut, splice, and
fill.
A codegen expression starts the process of run time code
generation. To type codegen(t x(D) s) in an environment
E, we type the body s of the function in an environment in
which all frames of E are hidden and a new visible frame for
x and its parameters D is added. This makes the function x and its
parameters D visible in the body, while any enclosing func-
tion, parameters, and local variables will be hidden.
An expression fill(e) should only appear within a tem-
plate. Our typing rule ensures this by looking at the envi-
ronment: it must have the form frame(t x(D); D') followed by
further frames. If so, the expression fill(e) is typed if e is
typed in the environment obtained by hiding that first frame.
That is, the function
being generated with codegen, as well as its parameters and
local variables, are hidden when computing the value that
will fill the hole. This is necessary because the parameters
and local variables will not become available until the function
is called; they will not be available when the hole is
filled.
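As a concrete reading of this description, the fill rule can be pictured as follows; the frame/hidden notation and the exact judgment form are reconstructions inferred from the prose, not the paper's own figure:
\[
\frac{\mathit{hidden}(t\,x(D);\,D')\cdot E \;\vdash\; e : t'}
     {\mathit{frame}(t\,x(D);\,D')\cdot E \;\vdash\; \mathtt{fill}(e) : t'}
\]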
The rules for cut and splice are similar. Like fill,
cut can only be invoked within a template, and it changes
frame to hidden for the same reason as fill. Splice is the
dual of cut; it changes a frame hidden by cut back into a
visible frame. Thus splice introduces a template, and cut
interrupts a template.
Figure 7: Typing rules of Cyclone (judgments for well-formed programs p, well-formed statements, and expression typing; each constant c is given the type assigned to it by the implicit signature)
An operational semantics for Cyclone and a safety theorem
are given in an appendix.
3 TAL/T
The output of the Cyclone compiler is a program in TAL/T,
an extension of the Typed Assembly Language (TAL) of
Morrisett et al. [16]. In designing TAL/T, our primary concern
was to retain the low-level, assembly language character
of TAL. Most TAL instructions are x86 machine in-
structions, possibly annotated with type information. The
exceptions are a few macros, such as malloc, that would be
difficult to type in their expanded form; each macro expands
to a short sequence of x86 instructions. Since each instruction
is simple, the trusted components of the system-the
typing rules, the verifier, and the macros-are also simple.
This gives us a high degree of confidence in the correctness
and safety of the system.
TAL already has instructions that are powerful enough to
generate code at run time: malloc and move are sufficient.
The problem with this approach is in the types. If we malloc
a region for code, what is its type? Clearly, by the end of the
code generation process, it should have the type of TAL code
that can be jumped to. But at the start of code generation,
when it is not safe to jump to, it must have a different
type. Moreover, the type of the region should change as
we move instructions into it. The TAL type system is not
powerful enough to show that a sequence of malloc and
move instructions results in a TAL program that can safely
be jumped to.
Our solution, TAL/T, is an extension of TAL with some
types and macros for manipulating templates. Since this
paper focuses on Cyclone and the front end of the system, we
will only sketch the ideas of TAL/T here. Full details will
appear in a subsequent paper.
In TAL, a procedure is just the label or address of a sequence
of TAL instructions. A procedure is called by jumping
to the label or address. The type of a procedure is
a precondition that says that on entry, the x86 registers
should contain values of particular types. For example, if
a procedure is to return, it will have a precondition saying
that a return address should be accessible through the stack
pointer when it is jumped to.
In TAL/T, a template is also the label of a sequence
of instructions. Unlike a TAL procedure, however, a template
is not meant to be jumped to. For example, it might
need to be concatenated with another template to form a
TAL procedure. Thus the type of a template includes a
postcondition as well as a precondition. Our typing rules
for the template instructions of TAL/T will ensure that before
a template is dumped into a code generation region,
its precondition matches the postcondition of the previous
template dumped. Also, a template may have holes that
need to be filled; the types of these holes are also given in
the type of the template.
The type of a code generation region is very similar to
that of a template: it includes types for the holes that remain
to be filled in the region, the precondition of the first
template that was dumped, and the postcondition of the
last template that was dumped. When all holes have been
filled and a template with no postcondition is dumped, the
region will have a type consisting of just a precondition, i.e.,
the type of a TAL procedure. At this point code generation
is finished and the result can be jumped to.
int f(int x) {
return(codegen(
int g(int y) {
return . ; }));
}
int h(int x)(int) {
return(codegen(
int k(int y) {
. cut { . f(.) . } . }));
}
Figure
8: An example showing that two codegen expressions
can be executing at once. When called, h starts generating
k, but stops in the middle to call f which generates g.
Now we give a brief description of the new TAL/T macros.
This is intended to be an informal description showing that
each macro does not go beyond what is already in TAL-the
macros are low level, and remain close to machine code.
The macros manipulate an implicit stack of code generation
regions. Each region in the stack is used for a function
being generated by a codegen. The stack is needed because
it is possible to have two codegen expressions executing at
once (for an example, see Figure 8).
- cgstart initiates run-time code generation by allocating
a new code generation region. This new region is
pushed onto the stack of code generation regions and
becomes the "current" region. The cgstart macro is
about as complicated as malloc.
- cgdump r, L copies the template at label L into the
current code generation region. After execution, the
register r points to the copy of the template, and can
be used to fill holes in the copy. Cgdump is our most
complicated macro: its core is a simple string-copy loop,
but it must also check that the current code generation
region has enough room for a copy of the template. If
there is not enough room, cgdump allocates a new region
twice the size of the old region, copies the contents of
the old region plus the new template to the new region,
and replaces the old region with the new on the region
stack. This is the most complex TAL/T instruction,
consisting of roughly twenty x86 instructions.
- cghole r, L template , L hole is a move instruction containing
a hole. It should be used in a template with
label L template , and declares the hole L hole .
- cgfill r1, L template , L hole , r2 fills the hole of a tem-
plate; it is a simple move instruction. Register r1 should
point to a copy of the template at label L template , which
should have a hole with label L hole . Register r2 contains
the value to put in the hole.
- cgfillrel fills the hole of a template with a pointer into
a second template; like cgfill it expands to a simple
move instruction. It is needed for jumps between templates.
int f() {
return(codegen(
int g(int i) {
cut { return 4; }
. }));
}
Figure 9: An example that shows the need for cgabort. When called, the function f starts generating function g but aborts in the middle (it returns 4).
- cgabort aborts a code generation; it pops the top region
off the region stack. It is needed when the run-time code
generation of a function stops in the middle, as in the
example of Figure 9.
- cgend r finalizes the code generation process: the current
region is popped off the region stack and put into
register r. TAL can then jump to location r.
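As an informal illustration of the bookkeeping these macros perform, here is a small Java model of a stack of code generation regions. It is only a sketch of the allocate/dump/fill/finish discipline described above: the class and method names are invented, and the real TAL/T macros operate on x86 object templates, not byte arrays.
import java.util.ArrayDeque;
import java.util.Deque;

// Illustrative model only: regions grow by doubling, templates are copied in,
// holes are patched, and a finished region is popped off the region stack.
final class CodeGenModel {
    static final class Region {
        byte[] buf = new byte[64];
        int used = 0;

        // cgdump: copy a template into the region, growing if needed;
        // returns the offset of the copy (the model of "register r").
        int dump(byte[] template) {
            while (used + template.length > buf.length) {
                byte[] bigger = new byte[buf.length * 2];
                System.arraycopy(buf, 0, bigger, 0, used);
                buf = bigger;
            }
            int start = used;
            System.arraycopy(template, 0, buf, start, template.length);
            used += template.length;
            return start;
        }

        // cgfill: patch a hole at a known offset inside a dumped template copy.
        void fill(int templateStart, int holeOffset, byte value) {
            buf[templateStart + holeOffset] = value;
        }
    }

    private final Deque<Region> stack = new ArrayDeque<>();

    void cgStart()   { stack.push(new Region()); }  // new current region
    Region current() { return stack.peek(); }
    void cgAbort()   { stack.pop(); }               // abandon the generation
    Region cgEnd()   { return stack.pop(); }        // finished code
}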
4 Implementation Status
We now describe some key aspects of our implementation.
As previously mentioned, some components were written
from scratch, while others were realized by modifying existing
software.
4.1 Action-annotated program to Cyclone
We translate Tempo action-annotated programs into run-time
specializers written in Cyclone. Using the Tempo front
end, this lets us automatically generate a Cyclone program
from a C program.
An action-annotated program distinguishes two kinds of
code: normal code that will be executed during specializa-
tion, indicated in italics in Fig. 2; and template code that
will be emitted during specialization (non-italicized code). The
annotated C program is translated into a Cyclone program
that uses codegen, cut, splice, and fill. Since italicized
constructs will be executed during code generation, they will
occur outside codegen, or within a cut statement or a fill
expression. Non-italicized constructs will be placed within
a codegen expression or splice statement.
Our algorithm operates in two modes: "normal" mode
translates constructs that should be executed at code generation
time and "template" mode translates constructs that
will be part of a template. The algorithm performs a recursive
descent of the action-annotated abstract syntax, keeping
track of which mode it is in. It starts off in "normal"
mode and produces Cyclone code for the beginning of the
run-time specializer: its arguments (the invariants) and any
local variables and initial statements that are annotated
with italics. When the first non-italic construct is encoun-
tered, a codegen expression is issued, putting the translation
into "template" mode. The rest of the program is translated
as follows.
An italic statement or expression must be translated in
"normal" mode. Therefore, if the translation is in "tem-
plate" mode, we insert cut (if we are processing a statement)
or fill (if we are processing an expression) and switch into
"normal" mode. Similarly, a non-italic statement should be
translated in "template" mode; here we insert splice and
switch modes if necessary. It isn't possible to encounter a
non-italic expression within an italic expression.
Another step needs to be taken during this translation
since specialization is speculative, i.e., both branches of a
conditional statement can be optimistically specialized when
the conditional test itself cannot be evaluated. This means
that during specialization, the store needs to be saved prior
to specializing one branch and restored before specializing
the other branch. Therefore, we must introduce Cyclone
statements to save and restore the store when translating
such a conditional statement. This is the same solution used
by Tempo [6].
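The mode-switching traversal can be sketched as follows. This is a hedged illustration in Java rather than the actual implementation; the Stmt type, the italic flag, and the emit helpers are all hypothetical.
// Sketch of the normal/template mode switching described above.
// All types and emit helpers are invented for illustration.
abstract class Stmt { boolean italic; }  // italic = execute at generation time

final class AnnotatedToCyclone {
    enum Mode { NORMAL, TEMPLATE }

    private final StringBuilder out = new StringBuilder();

    void translate(Stmt s, Mode mode) {
        if (s.italic && mode == Mode.TEMPLATE) {
            // italic statement inside a template: escape with cut
            out.append("cut { ");
            emit(s, Mode.NORMAL);
            out.append(" } ");
        } else if (!s.italic && mode == Mode.NORMAL) {
            // non-italic statement outside a template: re-enter with splice
            out.append("splice { ");
            emit(s, Mode.TEMPLATE);
            out.append(" } ");
        } else {
            emit(s, mode);  // mode already matches the annotation
        }
    }

    private void emit(Stmt s, Mode mode) {
        // Recursively translate children with translate(child, mode),
        // inserting fill(...) for italic expressions met in TEMPLATE mode
        // (omitted in this sketch).
    }
}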
4.2 Cyclone to TAL/T
To compile Cyclone to TAL/T, we extended an existing com-
piler, the Popcorn compiler of Morrisett et al. Popcorn is
written in Caml, and it compiles a type safe dialect of C
into TAL, a typed assembly language [16]. Currently, Popcorn
is a very simple, stack based compiler, though it is
being extended with register allocation and more sophisticated
optimizations.
The Popcorn compiler works by performing a traversal
of the abstract syntax tree, emitting TAL code as it goes. It
uses an environment data structure of the following form:
{ local_env : (id * int) list; args_on_stack : int; . }
The environment maintains the execution state of each
function as it is compiled. The field local env contains
each variable identifier and its corresponding stack offset.
Arguments are pushed onto the stack prior to entry to the
function body; the field args on stack records the number
of arguments, so they can be popped off the stack upon
exiting the function.
To compile Cyclone we needed to extend the environment
datatype: first, because Cyclone switches between generating
normal code and template code, and second, because
Cyclone has nested functions. Therefore, we use environments
with the same structure as the environments used in
Cyclone's typing rules:
type cyclone_env =
Outermost of env * (id list)
| Frame of env * cyclone_env
| Hidden of env * cyclone_env
That is, environments are sequences of type frames for func-
tions. A frame can either be outermost, normal, or hidden.
Once we have this type of environment, visible bindings are
defined as they are for E vis in Section 2.
An Outermost frame contains the local environment for a
top-level function as well as the global identifiers. A Frame is
used when compiling template code. A new Frame environment
is created each time codegen is encountered. A Frame
becomes Hidden to switch back to "normal" mode when a
cut or fill is encountered.
Popcorn programs are compiled by traversing the abstract
syntax tree and translating each Popcorn construct
into the appropriate TAL instructions; the resulting sequence
of TAL instructions is the compiled program. Compiling a
Cyclone program, however, is more complicated; it is performed
in two phases. The first phase alternates between
generating normal and template TAL/T instructions and a
second phase rearranges the instructions to put them in their
proper place. In order for the instructions to be rearranged
in the second phase, the first phase interleaves special markers
with the TAL/T instructions:
M_TemplateBeg of id * id
| M_TemplateEnd
| M_Fill of id * exp
These markers are used to indicate which instructions
are normal, which belong within a template, and which are
used to fill holes. M TemplateBeg takes two arguments, the
beginning and ending label of a template, and is issued at
the beginning of a template (when codegen or splice is
encountered, or cut ends). Similarly, M TemplateEnd is issued
at the end of a template (at the end of a codegen or
splice, or the beginning of a cut). Note that between corresponding
M TemplateBeg and M TemplateEnd markers, other
templates may begin and end. Therefore, these markers can
be nested. When a hole is encountered, a M Fill marker is
issued. The first argument of M Fill is a label for the hole
inside the template. The second argument is the Cyclone
source code expression that should fill the hole.
The following example shows how the cut statement is
compiled.
let rec compile_stmt stmt cyclone_env =
match stmt with
Cut s ->
(match cyclone_env with
Outermost _ -> raise Error
| Frame(env, cyclone_env') ->
cg_fill_holes (Hidden(env, cyclone_env'));
compile_stmt s (Hidden(env, cyclone_env'));
emit_mark(M_TemplateBeg(id_new "a", id_new "b"))
| Hidden _ -> raise Error)
| .
The function compile stmt takes a Cyclone statement
and an environment, and emits TAL/T instructions as a
side-effect. The first thing to notice is that a cut can only
occur when the compiler is in "template" mode, in which
case the environment begins with Frame. A cut statement
ends a template. Therefore, cg fill holes is called, which
emits a M TemplateEnd marker, and emits TAL/T code to
dump the template and fill its holes. Filling holes must
be done using a "normal" environment, and therefore the
first frame becomes Hidden. Next, compile stmt is called
recursively to compile the statement s within the cut. Since
the statement s should also be compiled in normal mode, it
also keeps the first frame Hidden. Finally, a M TemplateBeg
marker is emitted so that the compilation of any constructs
following the cut will occur within a new template.
The second phase of the code generation uses the markers
to rearrange the code. The TAL/T instructions issued
between a M TemplateBeg and a M TemplateEnd marker are
extracted and made into a template. The remaining, normal
instructions are concatenated to make one function; hole
filling instructions are inserted after the instruction which
dumps the template that contains the hole. The example
in Fig. 4 shows a TAL/T program after the second phase is
completed; the normal code includes instructions to dump
templates and fill holes, and is followed by the templates.
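A rough Java sketch of this second phase is given below; the instruction and marker representation is invented for illustration, and nesting of templates is deliberately ignored.
import java.util.ArrayList;
import java.util.List;

// Illustration of the marker-driven rearrangement (nested templates ignored).
final class SecondPhase {
    interface Item {}                                  // TAL/T instruction or marker
    record Instr(String text) implements Item {}
    record TemplateBeg(String beg, String end) implements Item {}
    record TemplateEnd(String end) implements Item {}

    record Result(List<String> normalCode, List<List<String>> templates) {}

    static Result rearrange(List<Item> items) {
        List<String> normal = new ArrayList<>();
        List<List<String>> templates = new ArrayList<>();
        List<String> current = null;                   // template being collected
        for (Item it : items) {
            if (it instanceof TemplateBeg b) {
                current = new ArrayList<>();
                normal.add("CGDUMP " + b.beg());       // dump replaces the template body
            } else if (it instanceof TemplateEnd) {
                templates.add(current);                // emit the collected template
                current = null;                        // hole-filling code would follow here
            } else if (it instanceof Instr i) {
                (current != null ? current : normal).add(i.text());
            }
        }
        return new Result(normal, templates);
    }
}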
4.3 TAL/T to executable
TAL is translated into assembly code by expanding each
TAL macro into a sequence of x86 instructions. Similarly,
the new TAL/T macros expand into a sequence of x86 and
TAL instructions. A description of each TAL/T macro is
given in Section 3. The resulting x86 assembly language
program is assembled with Microsoft MASM and linked with
the Microsoft Visual C++ linker.
4.4 Initial Impressions
We have implemented our system and have started testing
it on programs to assess its strengths and weaknesses. Since
there is currently a lot of interest in specializing interpreters,
we decided to explore this type of application program. A
state-of-the-art program specializer such as Tempo typically
achieves a speedup between 2 and 20, depending on the
interpreter and program interpreted. To see how our system
compares, we took a bytecode interpreter available in the
Tempo distribution and ran it through our system.
Preliminary results show that Cyclone achieves a speedup
of over 3. This is encouraging, since this is roughly the
speedup Tempo achieves on similar programs. A more precise
comparison of the two systems still needs to be done,
however. On the other hand, in our initial implementation,
the cost of generating code is higher than in Tempo. One
possible reason is that for safety, we allocate our code generation
regions at run time, and perform bounds checks as we
dump templates. The approach taken by Tempo, choosing a
maximum buffer size at compile time and allocating a buffer
of that size, is faster but not safe.
5 Related Work
Propagating types through all stages of a compiler, from
the front end to the back end, has been shown to aid robust
compiler construction: checking type safety after each
stage quickly identifies compiler bugs [23, 24]. Additionally,
Necula and Lee have shown that proving properties at the
assembly language level is useful for safe execution of untrusted
mobile code [18]. So far, this approach has been
taken only for statically generated code. Our system is intended
to achieve these same goals for dynamically generated
code.
Many of the ideas in Cyclone were derived from the
Tempo run-time specializer [7, 12, 13]. We designed Cyclone
and TAL/T with a template-based approach in mind, and
we use the Tempo front end for automatic template identi-
fication. Another run-time specializer, DyC, shares some of
the same features, such as static analyses and a template-like
back end [5, 9, 10]. There are, however, some important
differences between Cyclone and these systems. We have
tried to make our compiler more robust than Tempo and
DyC, by making Cyclone type safe, and by using types to
verify the safety of compiled code. Like Tempo and DyC,
Cyclone can automatically construct specializers, but in ad-
dition, Cyclone also gives the programmer explicit control
over run-time code generation, via the codegen, cut, splice,
and fill constructs. It is even possible for us to hand-tweak
the specializers produced by the Tempo front end with complete
type safety. Like DyC, we can perform optimizations
such as inter-template code motion, since we are writing our
own compiler. Tempo's strategy of using an unmodified, existing
compiler limits the optimizations that it can perform.
ML-box, Meta-ML, and 'C are all systems that add explicit
code generation constructs to existing languages. ML-
box and Meta-ML are type safe dialects of ML [15, 25, 22],
while 'C is an unsafe dialect of C [20, 21]. All three systems
have features for combining code fragments that go beyond
what we provide in Cyclone. For example, in 'C it is possible
to generate functions that have n arguments, where n is a
value computed at run time; this is not possible in Cyclone,
ML-box, or Meta-ML. On the other hand, 'C cannot generate
a function that generates a function; this can be done in
Cyclone (using nested codegens), and also in ML-box and
Meta-ML. An advantage we gain from not having sophisticated
features for manipulating code fragments is simplicity:
for example, the Cyclone type system does not need a new
type for code fragments. The most fundamental difference,
however, is that the overall system we present will provide
type safety not only at the source level, but also at the object
level. This makes our system more robust and makes it
usable in a proof carrying code system.
6 Future Work
In this paper we presented a framework for performing safe
and robust run-time code generation. Our compiler is based
on a simple, stack-based, certifying compiler written by Morrisett
et al. They are extending the compiler with register
allocation and other standard optimizations, and we expect
to merge Cyclone with their improvements.
We are interested in studying template-specific optimiza-
tions. For example, because templates appear explicitly
in TAL/T, we plan to study inter-template optimizations,
such as code motion between templates. Performing inter-
template optimizations is more difficult in a system, like
Tempo, based on an existing compiler that is not aware of
templates.
We are also interested in analyses that could statically
bound the size of the dynamic code generation region. This
would let us allocate exactly the right amount of space when
we begin generating code for a function, and would let us
eliminate bounds checks during template dumps.
We would like to extend the front end of Tempo so that
it takes Cyclone, and not just C, as input. This would mean
extending the analyses of Tempo to handle Cyclone, which is
an n-level language like ML-box. Additionally, we may implement
the analysis of Glück and Jørgensen [8] to produce
n-level Cyclone from C or Cyclone.
7 Conclusion
We have designed a programming language and compiler
that combines dynamic code generation with certified com-
pilation. Our system, Cyclone, has the following features.
Robust dynamic code generation Existing dynamic code
generation systems only prove safety at the source level. Our
approach extends this to object code. This means that bugs
in the compiler that produce unsafe run-time specializers
can be caught at compile time, before the specializer itself
is run. This is extremely helpful because of the complexity of
the analyses and transformations involved in dynamic code
generation.
Flexibility and Safety Cyclone produces dynamic code generators
that exploit run-time invariants to produce optimized
programs. The user interface is flexible, since the
final executable can be generated from a C program, a Cyclone
program, or TAL/T assembly code. Type safety is
statically verified in all three cases.
Safe execution of untrusted, dynamic, mobile code generators
This approach can be used to extend a proof-carrying
code system to include dynamic code generation. Since verification
occurs prior to run time, there is no run-time cost
incurred for the safety guarantees. Sophisticated optimization
techniques can be employed in the certifying compiler.
The resulting system could produce mobile code that is not
only safe, but potentially extremely fast.
Acknowledgements
We were able to implement Cyclone
quickly because we worked from the existing Tempo and
TAL implementations. We'd like to thank Charles Consel
and the Tempo group, and Greg Morrisett and the TAL
group, for making this possible. The paper was improved
by feedback from Julia Lawall and the anonymous referees.
--R
Partial evaluation in aircraft crew planning
A general approach for run-time specialization and its application to C
Fast binding-time analysis for multi-level specialization
DyC: An expressive annotation-directed dynamic compiler for C
Specializing shaders.
Static Analyses for the Effective Specialization of Realistic Applications.
Accurate binding-time analysis for imperative languages: Flow
A case for runtime code generation.
Lightweight run-time code generation
From System F to typed assembly language.
Fast, optimized Sun RPC using automatic program specialization.
The design and implementation of a certifying compiler.
tcc: A system for fast
Design and Implementation of Code Optimizations for a Type-Directed Compiler for Standard ML
--TR
--CTR
George C. Necula , Peter Lee, The design and implementation of a certifying compiler, ACM SIGPLAN Notices, v.39 n.4, April 2004
Christopher Colby , Peter Lee , George C. Necula , Fred Blau , Mark Plesko , Kenneth Cline, A certifying compiler for Java, ACM SIGPLAN Notices, v.35 n.5, p.95-107, May 2000
Ingo Stürmer, Integration Of The Code Generation Approach In The Model-Based Development Process By Means Of Tool Certification, Journal of Integrated Design & Process Science, v.8 n.2, p.1-11, April 2004
Hongxu Cai , Zhong Shao , Alexander Vaynberg, Certified self-modifying code, ACM SIGPLAN Notices, v.42 n.6, June 2007
Scott Thibault , Charles Consel , Julia L. Lawall , Renaud Marlet , Gilles Muller, Static and Dynamic Program Compilation by Interpreter Specialization, Higher-Order and Symbolic Computation, v.13 n.3, p.161-178, Sept. 2000
Cristiano Calcagno , Walid Taha , Liwen Huang , Xavier Leroy, Implementing multi-stage languages using ASTs, Gensym, and reflection, Proceedings of the second international conference on Generative programming and component engineering, p.57-76, September 22-25, 2003, Erfurt, Germany
Simon Helsen, Bisimilarity for the Region Calculus, Higher-Order and Symbolic Computation, v.17 n.4, p.347-394, December 2004 | program specialization;run-time code generation;partial evaluation;certifying compilation;typed assembly language;proof-carrying code |
609226 | A Generic Reification Technique for Object-Oriented Reflective Languages. | Computational reflection is gaining interest in practical applications as witnessed by the use of reflection in the Java programming environment and recent work on reflective middleware. Reflective systems offer many different reflection programming interfaces, the so-called Meta-Object Protocols (MOPs). Their design is subject to a number of constraints relating to, among others, expressive power, efficiency and security properties. Since these constraints are different from one application to another, it would be desirable to easily provide specially-tailored MOPs.In this paper, we present a generic reification technique based on program transformation. It enables the selective reification of arbitrary parts of object-oriented meta-circular interpreters. The reification process is of fine granularity: individual objects of the run-time system can be reified independently. Furthermore, the program transformation can be applied to different interpreter definitions. Each resulting reflective implementation provides a different MOP directly derived from the original interpreter definition. | Introduction
Computational reflection, that is, the possibility of a software system to inspect and modify itself
at runtime, is gaining interest in practical applications: modern software frequently requires strong
adaptability conditions to be met in order to fit a heterogenous and evolving computing environment.
Reflection allows, for instance, host services to be determined dynamically and enables the modification
of interaction protocols at runtime. Concretely, the JAVA programming environment [java] relies
heavily on the use of reflection for the implementation of the JAVABEANS component model and its
remote method invocation mechanism. Furthermore, adaptability is a prime requirement of middle-ware
systems and several groups are therefore doing research on reflective middleware [coi99][bc00].
Reflective systems offer many different reflection programming interfaces, the so-called Meta-Object
Protocols (MOPs) 1 . The design of such a MOP is subject to a number of constraints relating
to, among others, expressive power, efficiency and security properties. For instance, using reflection
(Extended version. © 2001 Kluwer Academic Publishers. Higher-Order and Symbolic Computation, 14(1), 2001, to appear.)
We use the term "MOP" in the sense of Kiczales et al. [kic91] (page 1): "Metaobject protocols are interfaces to the
language that give users the ability to incrementally modify the language's behavior and implementation, as well as the
ability to write programs within the language."
for debugging purposes may require the MOP to provide access to the execution stack. However,
because of security concerns stack access must frequently be restricted: in JAVA, for example, it is not
allowed to modify the (untyped) stack because security properties essentially rely on type information.
Since these constraints are different from one application to another, we should be able to provide
a specially-tailored MOP for a particular set of constraints. Moreover, the constraints may change during
the overall software life cycle. Hence, the development of such specially-tailored MOPs should
be a lightweight process. Traditional approaches to the development of MOPs do not meet this goal
instead each of them only provides a specific MOP which can hardly be modified (see the discussion
of related work in Section 9). Consider, for instance, a single-processor application which is to be
distributed. In this case, distinct tasks have to be performed on the message sending side and the
receiving side: for example, on the sender side local calls are replaced by remote ones (instead of
relying on proxies) and on the receiver side incoming messages can be synchronized. Many existing
MOPs do not allow the behavior of message senders to be modified. Hence, such a distribution strategy
cannot be implemented using reflection in these systems. Some systems (see, for instance CODA
[aff95]) provide access to senders right from the start. Therefore, they can introduce an overhead for
local applications.
In this paper, we present a reification mechanism for object-oriented interpreters based on program
transformation techniques. We use a generic transformation which can be applied at compile time to
any class of a non-reflective interpreter definition. This mechanism can be used to transform different
subsets of a metacircular interpreter in order to generate increasingly reflective interpreters. It can
also be applied to different interpreter definitions in order to automatically get different reflective
interpreters. Each resulting reflective implementation provides a different MOP directly derived from
the original interpreter definition.
The paper is structured as follows: in Section 2, we briefly introduce Smith's seminal reflective
towers upon which our work is based and we sketch the architecture of our transformational system.
Section 3 provides an overview of a metacircular interpreter for JAVA. Our generic reification technique
is formally defined and its application to the non-reflective interpreter is exemplified in Section
4. Section 5 is devoted to reflective programming: it details our reification technique at work by
presenting several applications. Section 6 complements Section 4 by presenting a few technicalities
postponed for the sake of readability. Section 7 discusses the correctness of the transformation and
sketches a formal correctness proof. Section 8 illustrates how a refined definition of the non-reflective
interpreter produces a more expressive reflective interpreter. Section 9 discusses related work. Fi-
nally, Section 10 concludes and discusses future work. Code occurring in the paper refers to a freely
available prototype implementation, called METAJ [metaj], which enables execution of the reflective
programming examples we present and provides a platform for experimentation with our technique.
2 Overview of the reification process
In our opinion, Smith's definition of reflection [smi84] remains a key reference because of its clean
semantic foundation and generality. This paper proposes one method to transpose his technique into
the domain of object-oriented languages. In this section, we first introduce Smith-like reflection before
presenting the architecture of our reification method.
Figure
1: Smith-like reflective towers
2.1 Smith-like Reflection
Smith's seminal work on reflective 3-Lisp defines reflection with the notion of reflective towers. In
Figure
1, the left hand side tower shows a user-written (i.e. level 0) Program in a double-square box
and its Interpreter, which defines its operational semantics. A simple classic example of reflective programming
deals with the introduction of debugging traces. Trace generation requires the interpreter
to be modified, that is, two steps have to be performed at runtime: provide an accessible representation
of the current interpreter and change this representation. Such a computation creates an extra
interpretation layer by means of a reification operator "reify," so that the level 1 Interpreter becomes
now part of the program: in the illustration, it is included in the double square box. We get a second
tower with three levels. The Program can now modify the standard semantics of the language defined
by the level 1 Interpreter to get Interpreter' which generates traces during execution (see the third
tower). Finally, when a non-standard semantics Interpreter" of Interpreter' is required, a further extra
interpretation level can be introduced as illustrated by the fourth tower. The fourth tower would be
required, for example, to trace Interpreter'.
On a more abstract level, Smith's reflection model - as well as our reification technique - has
two essential properties: there is a potentially infinite tower of reflective interpreters and the interpreter
at level n interprets the actual code of the interpreter at level n - 1.
2.2 Making object-oriented interpreters reflective
In order to get a first intuition of our reification technique, consider the following simple example of
how we intend reflection to be used: color information (represented by the class Color) should be
added to pairs at runtime. Using reflection, we could dynamically modify the inheritance graph such
that Pair inherits from Color. This can be achieved by
// Pair extends Object
(4Pair).extendsLink = Color; // Pair extends Color
where 4 is a reification operator. The application of the reification operator to an expression yields
an accessible representation of the value denoted by the expression. In this example, the expression
Pair denotes the corresponding Class object, say c, in the interpreter's memory (see Figure 7).
4Pair returns an instance (say i), i.e. an object of type Instance, in the interpreter's memory
which represents c and which can be inspected and modified. The default superclass of Pair is
replaced by Color by assigning the field extendsLink. From now on, newly instantiated pairs
contain color information.
It is crucial to our approach that the reified representation i is based on the definition of c. This
(The figure shows the non-reflective interpreter, consisting of a Parser (Java.jjt, Java2ExpVisitor.java) and a Runtime System (ExpAssign.java, ., Instance.java, Class.java), evaluating Prog.java, and the reflective interpreter generated from it by the program transformation (with BaseClass.java and the transformed Class.java) evaluating Reflective_Prog.java.)
Figure
2: System architecture
is achieved by the system architecture shown in Figure 2. Our non-reflective JAVA interpreter (repre-
sented by the box at the top) takes a non-reflective program Prog.java as input. This program is
parsed into a syntax tree and evaluated. According to the required reflective capabilities, the language
designer 2 transforms a subset of the classes of the non-reflective interpreter. Basically, this transformation
generates two classes for each original class. In our example, the file Class.java, which
represents classes in the non-reflective interpreter, becomes BaseClass.java and a different version
of Class.java in the reflective one.
The reflective interpreter relies on the non-reflective interpreter in order to build levels of the
reflective tower. This is the core issue of our approach: the tower levels shown in Figure 1 are
effectively built at runtime on the basis of the (verbatim) definition of the non-reflective interpreter as
in Smith's model. This is why the original definition of Class.java is an input of the reflective
interpreter in Figure 2. So, the behavior of the reflective interpreter is derived from the non-reflective
one. Furthermore, our approach is selective and complete because the transformation is applicable to
any class of the non-reflective interpreter definition.
METAJ implements one version of this system architecture. Its parser has been implemented
by means of JAVACC and JJTREE (versions 0.8pre2 and 0.3pre6, respectively). METAJ itself is
operational with the JDK versions 1.1.6 and 1.2.
3 A simple non-reflective interpreter
We have implemented a non-reflective metacircular interpreter for a subset of JAVA, which provides
support for all essential object-oriented and imperative features, such as classes, objects, fields, meth-
ods, local variables and assignment statements. (We did not implement features such as some primitive
2 Note that building a reflective interpreter by transformation and writing reflective programs are two different tasks: the
former is performed by language designers and the latter by application programmers.
class ExpId extends Exp {
private String id;
ExpId(String id) {
Data eval(Environment localE) {
return localE.lookup(this.id);
Figure
3: Class ExpId
class ExpAssign extends Exp {
private Exp lhs;
private Exp rhs;
ExpAssign(Exp lhs, Exp rhs) {
Data eval(Environment localE) {
Data
Data
return d2;
Figure
4: Class ExpAssign
types or loop constructs; all of these could be integrated and reified similarly.)
JAVA programs are represented as abstract syntax trees the nodes of which denote JAVA's syntactic
constructs and are implemented by corresponding classes. For example, variables, assignment
statement, method call, and class instantiation expressions are respectively encoded by the classes
ExpId, ExpAssign, ExpMethod and ExpNew. All of these classes define an evaluation method
Data eval(Environment localE) that takes the values of local variables in localE and
returns the value of the expression (wrapped in a Data object).
In particular, ExpId (see Figure 3) holds the name of a variable and its evaluation method yields
the value currently associated to the variable in the local environment. An ExpAssign node (see
Figure
stores the two subexpressions of an assignment. Its evaluation method evaluates the location
of the right-hand side expression, followed by the value represented by the left-hand side expression
and finally performs the assignment. ExpMethod (see Figure 5) represents a method call with a receiver
expression (exp), a method name (methodId) and its argument expressions (args). Method
call evaluation proceeds by evaluating the receiver, constructing an environment from the argument
values, looking up the method definition and applying it. ExpNew (see Figure 6) encodes the class
name (classId) and constructor argument expressions. Its evaluation fetches the class definition
from the global environment, instantiates it and possibly calls the constructor.
As suggested before, the interpreter defines a few other classes to provide a runtime system and
implement an operational semantics. For example, the class Class (see Figure 7) represents classes
by a reference to a superclass (extendsLink), a list of fields (dataList) and a list of methods
(methodList). It provides methods for instantiating the class (instantiate()), accessing the
list of methods including those in super classes (methodList()), etc. Methods are represented by
class ExpMethod extends Exp {
private Exp exp; // receiver
private String methodId; // method name
private ExpList args; // arguments
ExpMethod(Exp exp, String methodId, ExpList args) {
Data eval(Environment localE) {
// evaluate the lhs (receiver)
Instance
// evaluate the arguments to get a new local environment
Environment
// lookup and apply method
return m.apply(argsE, i);
Figure
5: Class ExpMethod
class ExpNew extends Exp {
private String classId; // class name
private ExpList args; // constructor arguments
ExpNew(String classId, ExpList args) {
Data eval(Environment localE) {
// get the Class and create an Instance
Instance
// call non default constructor if it exists
if (i.getInstanceLink().methodList().member(this.classId).booleanValue()) {
Environment
// lookup and apply method
return new
Figure
class Class {
Class extendsLink; // superclass
DataList dataList; // field list
MethodList methodList;
Class(Class eL, DataList dL, MethodList mL) {
Class getExtendsLink() { return this.extendsLink; }
// implementation of Java's 'new' operator
Instance instantiate() { . }
// compute complete method list (incl. superclasses)
MethodList methodList() { . }
Figure
7: Class Class
class Method {
private StringList args; // parameter names
private Exp body; // method body
Method(StringList args, Exp body) {
Data apply(Environment argsE, Instance i) {
// name each argument
argsE.add("this", new
// eval the body definition of the method
return this.body.eval(argsE);
Figure
8: Class Method
the class Method (see Figure 8) by means of a list of argument names (args) and a body expression
(body). Its method apply() binds argument names to values including this and evaluates the
body. Other classes include Instance (contains a reference instanceLink to its class and a
list of field values; provides field lookup and method lookup), MethodList, Data (implements
mutable memory cells such as fields), DataList, Environment (maps identifiers to values), etc.
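To fix ideas, one plausible shape for two of these helper classes is sketched below; it is only a guess consistent with the calls used in the figures (lookup, add, read), not METAJ's actual source.
import java.util.HashMap;
import java.util.Map;

// A mutable memory cell, as used for fields and evaluation results.
class Data {
    private Object value;
    Data(Object value) { this.value = value; }
    Object read() { return this.value; }
    void write(Object value) { this.value = value; }
}

// Maps identifiers to Data cells; used for local variables and arguments.
class Environment {
    private final Map<String, Data> bindings = new HashMap<>();
    void add(String id, Data d) { this.bindings.put(id, d); }
    Data lookup(String id) { return this.bindings.get(id); }
}
With such definitions, the assignment of Figure 4 presumably amounts to evaluating both sides and then calling write() on the cell obtained from the left-hand side.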
The architecture of the interpreter follows the standard design for object-oriented interpreters as
presented in Gamma et al. [ghjv95] by the Interpreter design pattern. Instantiating this design pattern,
the following correspondences hold: their 'Client' is our interpreter's main() method, their methods
'Interpret(Context)' is our eval(Environment). The reification technique described in this
paper is applicable to other interpreters having such an architecture. Note that these interpreters may
implement many different runtime systems.
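As a usage illustration (again only a sketch: the ExpList constructor shown here is an assumption, and Pair is the running example), evaluating the assignment p = new Pair(x, y) amounts to building and evaluating the corresponding syntax tree with the classes of Figures 3-6:
// Hypothetical driver code: build the AST for "p = new Pair(x, y)" and evaluate it.
class EvalExample {
    static Data run(Environment localE) {
        Exp rhs = new ExpNew("Pair",
                new ExpList(new ExpId("x"), new ExpId("y"))); // constructor arguments
        Exp assignment = new ExpAssign(new ExpId("p"), rhs);
        return assignment.eval(localE); // returns a Data holding the assigned value
    }
}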
4 Generic reification by code transformation
In this section, we give an overview of our generic reification scheme for the class Class and formally
define the underlying program transformation. (For the sake of readability, we postpone the
discussion of a few technicalities to Section 6.) Then, we apply it in detail to the class Instance.
4.1 Overview of the generic reification scheme
Reification of an object should not change the semantics of that object but change its representation
and provide access to the changed representation. For example, it is not possible to modify the superclass
of a class at runtime in our non-reflective interpreter (although a reference representing the
inheritance relation exists in the memory of the underlying implementation). The reified representation
of a class provides access to this reference. Once the internal representation has been exposed,
access to this structure allows the semantics of the program to be changed (e.g. by means of dynamic
class changes). Note that this form of structural reification of the interpreter memory subsumes the
traditional notions of structural and behavioral reflection.
For illustration purposes, consider a class Pair with two fields fst and snd which is implemented
in the interpreter memory by a Class (from here on, C denotes an instance of the class C).
In order to reify Pair, we choose Class to be reifiable. Basically, a reifiable entity can have two
different representations as exemplified in Figure 9: either a base representation or a reified represen-
tation. Since reification of any object does not change its behavior, the object should provide the same
method interface in both representations. This common interface is implemented using a dispatch
object 3 : the Class denoted by Pair.
The dispatch object points to the currently active representation: either the base representation
(BaseClass in Figure 9a) or the reified representation (Instance denoted by 4Pair in Figure
9b). The dispatch object provides a method reify() (triggered by 4) to switch from the base
representation to the reified one: a call to reify() creates a new tower level. The dispatch object
executes incoming method calls according to the active representation: when the base representation
is active, the dispatch simply delegates incoming method calls to it. When the reified representation
is active, the dispatch object interprets the method call.
Whether an object is accessed through its dispatch object or through its reified representation is
irrelevant, that is, modification of the object through the access path Pair is visible through the other
access path 4Pair. (This property is commonly referred to as the causal connection between levels.)
3 The dispatch technique is close to the bridge and state patterns introduced in Gamma et al. [ghjv95].
Figure
9: Before and after reification of the class Pair
Obviously, the two paths provide different interfaces. Consider, for example, the problem of keeping
track of the number of Pair instances using a static field countInstances: this field could be accessed
either by Pair.countInstances or by (4(4Pair).staticDataList).lookup("countInstances") 4 .
In the last expression, the outermost reification operation is necessary in
order to call lookup() on a data list object (cf. the fourth item below).
In order to conclude this overview, we briefly mention other important properties of our reification
scheme:
Since reflection provides objects representing internal structure for use in user-level programs,
every reification operation returns an Instance (e.g. the one in Figure 9b). This implies that
reification of reified entities requires that Instances are reifiable.
4Exp yields an accessible representation of the value denoted by Exp (i.e. an object in the
interpreter's memory, such as Class, Instance, Method) 5 .
References from dispatch objects to their active representations cannot be accessed by user
programs. Only a call to the reification operator may modify these references. This ensures that
the tower structure cannot be messed up by user programs.
The scope of the reification process is limited to individual objects in the interpreter's memory.
For example, the reification of a class does not reify its list of methods methodList nor its
superclass. So, three categories of objects coexist at runtime: reified objects, non-reified (but
4 METAJ does not allow static fields but could be extended easily to deal with such examples.
5 4 is a strict operator. A syntax extension would be necessary to reify the expression (e.g. the AST representing 1+4)
rather than the value denoted by the expression (e.g. the integer 5).
class Name {
Type f1 field f1 ;
Type fn field fn ;
Name(Type f1 arg f1 , ., Type fn arg fn ) { body }
Type m1 method m1 (Type m11 arg m11 , ., Type m 1k arg m 1k ) { body m1 }
Figure
10: Original class definition
reifiable) ones and non-reifiable ones. If a program accesses an object o through a reified one,
the use of restricted exactly as in the non-reflective case. 4Pair.extendsLink, for
example, references a Class representing the superclass of Pair. Therefore, the only valid
operations on this reference are new (4Pair.extendsLink)() 6 as well as accesses to
static fields and members of this class. If the structure or behavior of the superclass is to be
changed, it must be reified first. This implies that accesses to non-reifiable objects through
reified ones are safe.
4.2 Formal definition of the generic reification scheme
Based on the implementation technique outlined above, our generic reification scheme is an automatic
program transformation which can be applied to an arbitrary class, called Name in the following
definition, of the original interpreter. As shown in Figure 10, such classes consist of a number of fields
and methods and must have a constructor with arguments for all of their fields. The transformation of
a set of classes has time and space complexity linear in the number of classes.
The transformation consists of two main steps:
1. Introduce the class BaseName (see Figure 11) which defines the base representation of the
original class Name. This class is very similar to the original class Name.
2. Redefine the class Name (see Figure 12) such that it implements the corresponding dispatch
object. This class provides the same method interface as the original class Name and implements
a method reify() which creates the reified representation and switches from the base
representation to the reified one.
Figure
11 shows the generated base class. (In the figures of this section, we use different style
conventions for verbatim text, schema variables and [text substitutions].) Ba-
sically, the original class is renamed and a field referent is added. Remember that a reifiable entity
is implemented by a dispatch object that points to the current representation. The referent field,
which is initialized in the constructor and points back from the representation to the dispatch object,
is mandatory to distinguish the dispatch object and the representation: if this is not used to access
6 The current parser of METAJ does not allow such an expression: new requires a class identifier. However, the parser
could be easily extended to deal with such expressions and we allow this notation in this paper.
class BaseName {
  Type_f1 field_f1;
  ...
  Type_fn field_fn;
  Name referent;
  BaseName(Type_f1 arg_f1, ..., Type_fn arg_fn, Name referent) {
    body [this(~.) replaced by this.referent]
  }
  Type_m1 method_m1(Type_m11 arg_m11, ..., Type_m1k arg_m1k) {
    body_m1 [this(~.) replaced by this.referent]
  }
  ...
}
Figure 11: Generated base class
fields or methods in the base class it should denote the dispatch object 7 . In the transformation, this
is implemented by substituting this(~.) (matching the keyword this followed by anything but a
dot) by this.referent.
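As a small illustration (hypothetical code, not taken from the METAJ sources), the substitution only rewrites occurrences of this that are not followed by a dot:

  // before the transformation, in some reifiable class C
  C self() { return this; }               // 'this' not followed by a dot
  String first() { return this.field_f1; } // 'this.' is left untouched

  // after the transformation, in the generated class BaseC
  C self() { return this.referent; }
  String first() { return this.field_f1; }

This way, code that hands out the object's identity keeps denoting the dispatch object rather than the internal representation.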
The generated dispatch class, shown in Figure 12, has two fields: representation that points
to either the base representation or the reified representation, and a boolean field isReified that
discriminates the active representation. Its constructor creates a base representation for the object.
The methods method m i have the same signature as their original version. When the base representation
is active (i.e. isReified is false), the method call is delegated to the base representation.
When the reified representation is active (i.e. isReified is true), the method call is interpreted:
the corresponding call expression is parsed (Parser.java2Exp()), a local environment is built
(argsE.add()) from the method arguments and the field representation of the dispatch object
and the method call is evaluated (eval()). Note that for the sake of clarity, this code is intentionally
naive. The actual implemented version could be optimized: for example, the call to the parser
could be replaced by the corresponding syntax tree.
The method reify() builds a reified representation of the base representation by evaluating a
new-expression. The corresponding class is cloned in order to build a new tower level. So, every
reified object has its own copy of a Class. This way, the behavior of each reified object can be specialized
independently. If sharing is required the application programmer can achieve it by explicitly
manipulating references. Finally, the reified representation is installed as the current representation
and a reference to it is returned. A series of experiments led us to this sharing strategy. A previous
version of the transformation did not clone the class. This sharing led to cyclic dependency relationships
and reflective overlap after reification: in particular, reification of the class Class introduced
7 This is a typical problem of wrapper-based techniques that introduce two different identities for an object.
class Name {
  Object representation;
  boolean isReified;
  Name(Type_f1 arg_f1, ..., Type_fn arg_fn) {
    this.representation = new BaseName(arg_f1, ..., arg_fn, this);
  }
  Type_m1 method_m1(Type_m11 arg_m11, ..., Type_m1k arg_m1k) {
    if (this.isReified) {
      // interpret the call on the reified representation
      Exp call = Parser.java2Exp("reifiedRep.method_m1(arg_m11, ..., arg_m1k)");
      Environment argsE = new Environment();
      argsE.add("reifiedRep", this.representation);
      argsE.add("arg_m11", arg_m11);
      ...
      Data result = call.eval(argsE);
      return result.read();
    } else
      return ((BaseName)this.representation).method_m1(arg_m11, ..., arg_m1k);
  }
  ...
  Instance reify() {
    if (!this.isReified) {
      Class aClass = [clone of the corresponding Class object];
      Exp newExp = Parser.java2Exp("new Name(baseRep_field_f1, ..., baseRep_field_fn)");
      Environment argsE = new Environment();
      argsE.add("baseRep_field_f1", this.representation.field_f1);
      ...
      argsE.add("baseRep_field_fn", this.representation.field_fn);
      argsE.add("aClass", aClass);
      [this.representation := the Instance produced by evaluating newExp; this.isReified := true]
    }
    return (Instance)this.representation;
  }
}
Figure 12: Generated dispatch class
non-termination. Alternatively, we experimented with one copy of each class per level but in this case
the reification (without modification) of an object could already change its behavior.
This generic reification technique is based on only two assumptions:
1. Each syntactic construct is represented by an appropriate expression during interpreter execu-
tion. We assume that all of these expressions can be evaluated using the method eval(argsE)
where argsE contains the current environment, i.e. the values of the free variables in the current
expression.
2. We assume that the textual definitions of all reifiable classes have been parsed at interpreter creation
time and that they are stored as Class objects in the global environment Main.globalE.
These objects have to be cloneable.
This way, reify() creates an extra interpreter layer based on the actual interpreter definition.
Note that these simple assumptions and the formal definition enable the transformation to be performed
automatically.
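To make the second assumption concrete, the reified representation can be built from a private copy of the stored class definition. The following lines are a hypothetical sketch (the exact lookup and cloning API around Main.globalE is an assumption, not code taken from METAJ):

  // inside reify(): fetch the Class object parsed at interpreter creation time
  Class original = (Class) Main.globalE.lookup("Instance").read();
  Class aClass = (Class) original.clone();  // one copy per tower level
  argsE.add("aClass", aClass);

Cloning at this point is what gives every reified object its own Class and hence its own independently specializable behavior.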
In Java, the operator new returns an object (i.e. an Instance). Therefore, in order to let the user
build other runtime entities than Instances, such as Classes and Methods, we provide a family
of deification 8 operators, one for each of these entities. These operators are the inverse of the generic
reification operator. For example, in a reflective program, an assignment whose right-hand side has
the form r Class (new .), where r Class denotes the deification operator for classes,
returns a Class dispatch object in front of the Instance created by
new. Note that the deification operators - while functionally inverting the reification operation -
do not change the representation of an object "back" to its unreified structure (e.g. to a BaseClass
in the case of classes).
The dispatch objects engender the structure of the reflective tower; their implementation is not
accessible to the user. In particular, the reification operator and the deification operators encapsulate
the fields representation and isReified of dispatch objects as well as the field referent
from the base class. So, user programs cannot arbitrarily change the tower structure. However, the
user or a type system to be developed should avoid the creation of meaningless structures, such as
r Class (new Method(.)).
4.3 Example: making the class Instance reifiable
To illustrate the definition of the transformation, we apply it to the class Instance (see Figure 14),
which is used in the examples of reflective programming in the next section. This class implements
objects in the interpreter. For example, a pair object with two fields fst and snd is implemented
by an Instance the field dataList of which contains two memory cells labelled fst and snd.
Its field instanceLink points to a Class containing the methods of the class Pair. The method
lookupData() is called whenever a field of pair is accessed. (For the sake of conciseness, we
did not show the other methods of Instance, such as lookupMethod().)
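For instance, constructing this representation directly in the interpreter might look as follows (a hypothetical sketch: the exact constructors of DataList and Data are assumptions made for illustration):

  // build the memory cells for the two fields of pair
  DataList cells = new DataList();
  cells.add("fst", new Data("1"));
  cells.add("snd", new Data("2"));
  // classPair is the Class object representing the user class Pair
  Instance pair = new Instance(classPair, cells);

A field access such as pair.fst is then answered by lookupData("fst") on this Instance.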
The application of the transformation defined above to Instance yields the two classes
BaseInstance (see Figure 15) and the dispatch class Instance (see Figure 16). Now, pair is
implemented by a dispatching Instance as shown in Figure 13. Its default unreified representation
is a BaseInstance (say b1) whose dataList field contains the fields labelled fst and snd
(see Figure 13a).
8 We prefer the term 'deification' [iyl95] to the equivalent terms `reflection' [wf88] and 'absorption' [meu98].
[Figure 13 is a diagram. Panel a), before reification: the dispatch object (an Instance) denotes its active representation, a BaseInstance holding dataList and instanceLink. Panel b), after reification: the dispatch object denotes a reified representation, itself fronted by a second dispatching Instance, so the object now has two different representations.]
Figure 13: Before and after reification of the object pair
class Instance {
  public Class instanceLink;  // ref. to Class
  public DataList dataList;   // field list
  Instance(Class instanceLink, DataList dataList) {
    this.instanceLink = instanceLink;
    this.dataList = dataList;
  }
  // field access
  Data lookupData(String name) {
    return this.dataList.lookup(name);
  }
}
Figure 14: Original class Instance
class BaseInstance {
  Class instanceLink;
  DataList dataList;
  Instance referent;
  BaseInstance(Class instanceLink, DataList dataList,
               Instance referent) {
    this.instanceLink = instanceLink;
    this.dataList = dataList;
    this.referent = referent;
  }
  Data lookupData(String name) {
    return this.dataList.lookup(name);
  }
}
Figure 15: Class BaseInstance
Once pair has been reified (see Figure 13b), it is represented by an Instance which
points to a BaseInstance (say b2). Note that in contrast to the reification of classes shown in
Figure 9, the reified representation of an instance is reifiable (because it is an instance itself; hence the
second dispatching Instance in Figure 13b). Since the reification is based on the actual definition of
the original Instance, the dataList of b2 contains the three fields instanceLink, dataList
(itself containing fst and snd) and referent. The definition of the method lookupData() in
the dispatch object calls the method lookupData() of b1 as long as pair is not reified. Once it is
reified, the definition of lookupData() of Instance is interpreted.
In order to prove the feasibility of our approach, we applied this reification technique to different
classes defining object-oriented features of our JAVA interpreter resulting in the prototype METAJ. The
imperative features of the non-reflective interpreter can be tackled analogously. This way we could,
for example, redefine the sequentialization operator ';' in order to count the number of execution steps
in a given method (say m). One way to achieve this is by reification of the occurrences of ExpS in reified
m and dynamically changing their class to a class that performs profiling within its eval() method.
Another solution would be to replace ExpS nodes in reified m by nodes including profiling.
5 Reflective Programming
In this section, we express several classic examples of reflective programming in our framework.
These detailed examples of our reflective interpreter at work should help the reader's understanding
of the system's working.
The examples highlight an important feature of our design: since our reification scheme relies on
the original interpreter definition, the meta-object protocol of the corresponding reflective interpreter
(i.e. the interface of a reflective system) is quite easy to apprehend. It consists of a few classes which
are reifiable in METAJ, the reification operator 4 and the deification operators r .
In Figure 17 the class Pair is defined, and in main() a new instance pair is created. In the
interpreter, the object pair is represented by an Instance (see Figure 13a). Our generic reification
method provides access to a representation of this Instance which we name metaPair (denoted
by 4pair in Figure 13b). The most basic use of reflection in object-oriented languages consists in
class Instance {
  Object representation;
  boolean isReified;
  Instance(Class instanceLink, DataList dataList) {
    this.representation = new BaseInstance(instanceLink, dataList, this);
  }
  Data lookupData(String name) {
    if (this.isReified) {
      // interpret lookup method call
      // pass already evaluated values
      Exp call = Parser.java2Exp("reifiedRep.lookupData(name)");
      Environment argsE = new Environment();
      argsE.add("name", name);
      argsE.add("reifiedRep", this.representation);
      Data result = call.eval(argsE);
      // unpack result
      return (Data)result.read();
    } else
      return ((BaseInstance)this.representation).lookupData(name);
  }
  Instance reify() {
    if (!this.isReified) {
      // copy the base class BaseInstance
      Class aClass = [clone of the corresponding Class object];
      // create and initialize new representation
      Exp newExp = Parser.java2Exp("new Instance(baseRep_instanceLink, baseRep_dataList)");
      Environment argsE = new Environment();
      argsE.add("baseRep_instanceLink", this.representation.instanceLink);
      argsE.add("baseRep_dataList", this.representation.dataList);
      argsE.add("aClass", aClass);
      [this.representation := the Instance produced by evaluating newExp; this.isReified := true]
    }
    return (Instance)this.representation;
  }
}
Figure 16: Dispatch class Instance
class Pair {
  String fst;
  String snd;
  Pair(String fst, String snd) {
    this.fst = fst;
    this.snd = snd;
  }
}
class PrintablePair extends Pair {
  String toString() {
    return "(" + this.fst + "," + this.snd + ")";
  }
}
class InstanceWithTrace extends Instance {
  Method lookupMethod(String name) {
    // trace method call
    System.out.println("method " + name);
    return this.instanceLink.methodList().lookup(name);
  }
}
class Main {
  void main() {
    Pair pair = new Pair("1", "2");
    // example 1: invariance under reification
    Instance metaPair = 4pair;
    // example 2: test existence of a super class
    Class metaClass = 4Pair;
    if (metaClass.getExtendsLink() == null)
      System.out.println("Class Pair has no superclass");
    // example 3: class change
    metaPair.setInstanceLink(PrintablePair);
    // example 4: method-call semantics
    Instance metaMetaPair = 4metaPair;
    metaMetaPair.setInstanceLink(InstanceWithTrace);
    pair.toString();
    // example 5: instance and class deification
    System.out.println((r Instance metaPair).fst);
    metaPair.setInstanceLink(r Class metaClass);
  }
}
Figure 17: Examples of Reflective Programming
reifying an object: changing the internal representation without modifying its behavior (see Example
1). Another simple use is introspection. Let us consider the problem of testing the existence of a super
class of a given class. In Example 2, the class Pair (represented by a Class in the interpreter) is
reified which enables its method getExtendsLink() to be called.
In METAJ, reflective programming is not limited to introspection, but the internal state of the
interpreter can also be modified (aka intercession). The third example in main() shows how the
behavior of an instance can be modified by changing its class dynamically. Imagine that we would
like to print pairs using a method called toString(). We define a class PrintablePair which
extends the original class Pair and implements a method toString(). A pair can then be made
printable by dynamically changing its class from Pair to PrintablePair (remember that the
field instanceLink of Instance holds the class of the represented instance, see Figure 15).
Afterwards the object pair understands the method toString().
The fourth example deals with method call tracing for debugging purposes. The class Instance
of the interpreter defines the method Method lookupMethod(String name) that returns the
effective method to be called within the inheritance hierarchy. In our interpreter each lookupMethod()
is followed by an apply(). Thus, method call tracing can be introduced by defining
a class InstanceWithTrace which specializes the class Instance of the interpreter such that
its method lookupMethod() prints the name of its parameter. In order to install the tracing of
method calls of the instance pair, its standard behavior defined in the interpreter by the class Instance
(note that this class can be accessed because the interpreter definition is an integral part of
the reflective system built on top of the reflective interpreter) is replaced by InstanceWithTrace.
Reification of pair provides access to an Instance whose field instanceLink denotes the class
Pair. A sequence of two reification operations on pair provides access to an Instance whose
instanceLink denotes the class Instance. This link can then be set to the class Instance-
WithTrace. A method call of the object pair then prints the name of the method. Therefore,
"toString" is printed by our third example. Finally, note that our tower-based reflection scheme
makes it easy to trace the tracing code if required because any number of levels may be created by a
sequence of calls to 4.
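For instance (hypothetical user code in the style of Figure 17), tracing the tracer amounts to climbing one more level:

  Instance metaMetaMetaPair = 4metaMetaPair;
  metaMetaMetaPair.setInstanceLink(InstanceWithTrace);

Each additional application of 4 adds one interpretation layer, so the lookups performed by the tracing level are themselves traced.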
The fifth (rather artificial) example illustrates deification by deifying metaPair and metaClass
in order to create an instance and a class at the base level. After deification of the reified
representation metaPair we show that base-level operations can be performed on the resulting ob-
ject. In the case of class deification, we restore the original class of pair.
More advanced examples that illustrate our approach rely on the capacity to reify arbitrary parts
of the underlying interpreter. As discussed in Section 4.3, the reification of ExpS allows the behavior
of the sequence operator ';' to be changed. This way, we could, for instance, stop program execution
at every statement for debugging purposes or handle numeric overflow exceptions by re-executing
the current statement block with higher-precision data representations. Furthermore, reification of the
control stack would allow Java's try/catch mechanism for exception handling to be extended by a retry
variant.
6 The nuts and bolts of generic reification
Section 4 presents the essential parts of the generic reification mechanism. However, the actual implementation
of a full-fledged reflective system requires several intricacies to be handled. In the current
section, we motivate the problems which must be handled and sketch the solution we developed. For
an in-depth understanding of these technicalities we refer the reader to the METAJ source code.
class ExpId extends Exp {
  // same fields and constructor as in the non-reflective interpreter
  Data eval_original(Environment localE) {
    // same definition as eval in the non-reflective interpreter
  }
  Data eval(Environment localE) {
    if (!localE.member("#meta_level").booleanValue())
      return this.eval_original(localE);
    else {
      if (this.id.equals("this"))
        return new Data(((Instance) localE.lookup("this").read()).referent);
      else return eval_original(localE);
    }
  }
}
Figure 18: Class ExpId
class ExpMethod extends Exp {
  Data eval(Environment localE) {
    Instance i;
    if (!localE.member("#meta_level").booleanValue())
      return this.eval_original(localE);
    else {
      // evaluate the lhs (object part)
      Object o = [value of the object part];
      if (o instanceof Reifiable && ((Reifiable) o).getIsReified()) {
        // evaluate the receiver
        i = [the receiver as an Instance];
        // evaluate the arguments to get a new local environment
        Environment argsE = new Environment(null, null, null);
        argsE.add("#meta_level", [true]);
        [add the evaluated arguments to argsE]
        // lookup the method and apply it
        Method m = [lookup of this.methodId in i];
        return m.apply(argsE, i);
      } else {
        if ((o instanceof DataList) && this.methodId.equals("lookup")) {
          Environment argsE = [environment of the evaluated arguments];
          return new Data(((DataList) o).lookup([the argument]));
        } else . // other delegation cases
      }
    }
  }
}
Figure 19: Class ExpMethod
First, in the reflective interpreter a reified object is represented by a dispatch object and a reified
representation. So, basically a reified object has two different identities. With our technique, this is
bound to the representation rather than the dispatch object by parsing the expression
"reifiedRep.method_m1(arg_m11, ..., arg_m1k)" in the dispatch object (see Figure 12). However, if a statement
return this is to be interpreted, this should denote the dispatch object. Otherwise, user-level
programs could expose the reified representations. The interpreter class ExpId is in charge
of identifier evaluation (including this) and has therefore to be modified to account for this be-
havior. In Figure 18 the method eval() distinguishes two cases by means of the environment-tag
#meta_level. 9 First, interpretation has been initiated by the interpreter's entry point and non-reflective
evaluation is necessary. Second, interpretation has been initiated by a dispatch object and
reflective interpretation is required. In the first case eval_original() is called: this method has
the same definition as eval() in the non-reflective interpreter. In the second case if the identifier is
this, the dispatch object of the current representation is returned. Remember that the field referent
points back from the base representation to the dispatch object; the same mechanism is used to
link the reified representation to the dispatch object. This field must be set by the methods reify(),
so the class Instance has to provide such a field 10 .
Second, remember that the scope of reification is limited to a single object in the interpreter
memory. This means interpretation involves reified and non-reified objects. For example, the reification
of an Instance reifies neither its field list dataList nor its class denoted by
instanceLink. In particular, once an Instance has been reified, the interpretation of its method
lookupData (repeated from Figure 14):
Data lookupData(String name){return this.dataList.lookup(name);}
requires this.dataList to be interpreted and the call lookup(name) to be delegated because
this.dataList denotes a non-reifiable object. In abstract terms, a dispatch object introduces an
interpretation layer (a call to eval()) and this layer has to be eliminated when the scope of the
current (reified) object is left. This scheme is implemented in ExpMethod.eval() (see Figure
19).
Because of these two problems, the methods ExpData.eval() and ExpNew.eval() have
to be modified similarly. This means that our reification scheme cannot be applied to the four classes
ExpId, ExpMethod, ExpData, ExpNew 11 . However, our method provides much expressive
power: these restrictions fix the relationship between certain syntactic constructs and the runtime
system, but the runtime mechanisms themselves can still be modified as exemplified in Section 5. In
order to weaken this restriction, we designed and implemented a variant 12 of our reification scheme
that does not require ExpId and ExpData to be modified. Unfortunately, this advantage comes at a
price: the field referent can be exposed and modified by reification in this case.
7 Discussion of the correctness of the transformation
A complete treatment of the correctness of our technique is beyond the scope of this paper. However,
in this section we discuss very briefly work related to semantics of reflective systems and sketch a few
essential properties constituting a skeleton for a formal correctness proof of our technique.
9 The dispatch objects insert this tag into the local environment.
10 For the sake of simplicity, the code shown in Figures 12 and 16 does not mention the field referent.
11 The restriction that not all parts of a reflective system can be reified seems to be inherent to reflection [wf88].
12 This variant is also bundled in the METAJ distribution.
Semantics of reflective programming systems is a complex research domain. Almost all of the
existing body of research work in this domain is about reflection in functional programming languages
[wf88][dm88][mul92][mf93]. Even in this context, foundational problems still exist. For example, it
seems impossible to give a clean semantics which avoids introducing non-reifiable components [wf88]
and logics of programming languages must be considerably weakened in order to obtain a consistent
theory of reification [mul92]. One of the very few formal studies of reflection in a non-functional
setting has been done by Malenfant et al. [mdc96]. This work deals with reflection in prototype-based
languages and focuses on the lookup() ; apply() MOP formalized by means of rewriting
systems. This approach is thus too restricted to serve as a basis for our correctness concerns. In
general, semantic accounts of imperative languages are more difficult to define than in the functional
case. In particular, the transposition of the results obtained in the functional case to our approach
requires further work. We anticipate that this should be simpler in a transformational setting such as
ours than for arbitrary reflective imperative systems.
In order to prove the correctness of our scheme, the basic property to satisfy would be equivalence
between a non-reflective interpreter Inr and the reflective interpreter generated by applying our
transformation Tr to Inr, i.e. the requirement that every program yields the same result under Inr and under Tr(Inr).
Since the transformation Tr is operating on individual classes, this property can be tackled by
establishing an equivalence between an arbitrary class (say c) of the non-reflective interpreter and its
transformed counterpart. Essentially, the transformation introduces an extra interpretation layer into
the evaluation of the methods of c. Programs and their interpretations introduced by the transformation
satisfy the property that interpreting the parsed text of a program fragment p in an environment binding its
free variables (i.e. Parser.java2Exp("p").eval(argsE)) yields the same result as executing p directly.
This property can be proven by induction on the structure of the AST representation of p. (Note
that the formulation of this property is intentionally simplistic and should be parameterized with
contextual information, such as a global environment and a store.) It can be applied to the dispatch
classes (see Figure 12) to fold interpreting code into delegating code. When the then-branches of
dispatching methods are rewritten using the property from left to right, the then-branches equal the
corresponding else-branches. Henceforth, the conditionals become useless and the dispatch objects
become simple indirections that can be suppressed. In the case of the method reify(), the rewriting
leads to the expression new Name(.) that creates a copy of the non-reified representation.
Finally, we strongly believe our transformation is type-safe (although we did not formally prove it):
every well-typed interpreter is transformed into a well-typed reflective interpreter. Obviously,
wrongly-typed user programs may crash the non-reflective interpreter. In the same way, some reflective
programs may crash the reflective interpreter, for instance by confusing reflective levels or trying
to access a field which has been previously suppressed using intercession. Specialized type systems
and static analysis methods for safe reflective programming should be developed.
8 Generating alternative metaobject protocols
We have already mentioned that each set of reified classes along with their definitions determines a
MOP of its own. We think that this is a key property of our approach because it provides a basis for the
systematic development of specially-tailored MOPs. In this section, we modify the message-sending
part of the non-reflective interpreter in order to provide a finer-grained MOP which distinguishes the
sender and the receiver of a message.
class Instance {
  // add two new methods
  Data send(Msg msg) {
    return msg.to.receive(msg);
  }
  Data receive(Msg msg) {
    return msg.to.lookupMethod(msg.methodId).apply(msg.argsE, msg.to);
  }
}
class ExpMethod extends Exp {
  Data eval(Environment localE) {
    // as before evaluate receiver and arguments: o, argsE
    // new code: determine sender, build and send message
    Instance self = [the sending Instance, taken from localE];
    Msg msg = new Msg(self, o, this.methodId, argsE);
    return self.send(msg);
  }
}
Figure 20: Alternative original interpreter
class InstanceWithSenderTrace extends Instance {
  Data send(Msg msg) {
    System.out.println("method called: " + msg.methodId);
    return msg.to.receive(msg);
  }
}
Figure 21: (User-defined) extension of Instance
In the original interpreter, ExpMethod.eval() evaluates a method call by implementing the
composition lookupMethod();apply(). So, the behavior of the receiver of a method call can
be modified easily by changing the definition of lookupMethod() (as illustrated by trace insertion
in Section 5). However, a modification concerning the sender of the method call (see CODA
[aff95] for a motivation of making the sender explicit in the context of distributed programming) is
much more difficult to implement. Such a change would require the modification of all instances of
ExpMethod in the abstract syntax tree, i.e. all occurrences of the method-call operator '.'. Indeed, we have to
check whether the object this in such contexts has a non-standard behavior.
A solution to this problem is to modify the non-reflective interpreter, such that its reflective version
provides a MOP enabling explicit access to the sender in a method call. Intuitively, we split message
sending in two parts: the sender side and the receiver side. First, we introduce a new class Msg which
is a four-tuple. For each method call, it contains the sender from, the receiver to, the method name
methodId and the corresponding argument values argsE. Then, two methods dealing with messages
are added to the definition of Instance in the original interpreter: send() and receive()
(see
Figure
20). Finally, ExpMethod.eval() is redefined such that it creates and sends a message
to the receiver.
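A minimal definition of the message class could look as follows (our sketch; the field types are assumptions based on the description above):

  class Msg {
    Instance from;      // the sender
    Instance to;        // the receiver
    String methodId;    // the method name
    Environment argsE;  // the argument values
    Msg(Instance from, Instance to, String methodId, Environment argsE) {
      this.from = from;
      this.to = to;
      this.methodId = methodId;
      this.argsE = argsE;
    }
  }

In the reflective version of this interpreter, specializing send() and receive() then gives user programs a handle on both ends of a call.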
This new version of the non-reflective interpreter is made reflective by applying our program
transformation. Then, the user can, for example, introduce tracing for message senders (see Figure
21), the same way traces have been introduced in the previous section.
This example highlights three advantages of our approach: MOPs are precisely defined, application
programmers are provided with the minimal MOPs tailored to their needs and language designers
can extend MOPs at compile time without anticipation of these changes.
9 Related work
A comparison between reflective systems is inherently difficult because of the wide variety and the
conceptual complexity of reflective models and implementations. For example, the detailed definition
of the CLOS MOP requires a book [kic91] and a thorough comparison between CLOS and
another reflective model already fills a book chapter [coi93].
Consequently, we restrict our comparison to the three basic properties our reflection model obeys
(the first and second characterizing Smith-like approaches, the third being fundamental to our goal of
the construction of specially-tailored MOPs):
1. (tower) There is a potentially infinite tower of reflective interpreters.
2. (interpreter) The interpreter at level n interprets the code of the interpreter at level n - 1.
3. (selectivity & completeness) Any part of the runtime system and almost all of the syntax tree
(see Section 6) of an interpreter at level n can be reified and has an accessible representation at
level n + 1.
First, most reflective systems are based on some notion of reflective towers and provide a potentially
infinite number of levels. A notable exception to this are OPEN-C++ [chi95] and IGUANA [gc96]
whose MOPs only provide one metalevel.
Second, our approach is semantics-based following Smith's seminal work on reflective 3-LISP
[smi84] for functional languages. This is also the case for the prototype-based languages 3-KRS
[mae87] and AGORA [meu98]. The other object-oriented approaches to reflection (including OBJ-
VLISP [coi87], SMALLTALK [bri89] [riv96], CLASSTALK [bri89], CLOS [kic91], MetaXa [gol97])
are not semantics-based (in the sense of the second property cited above) because they do not feed
higher-level interpreters with the code of lower-level interpreters. Instead, different levels are represented
by appropriate pointer structures. This proceeding allows more efficient implementations but
has no semantic foundation. Moreover, these reflective languages are monolithic entities while our
modular approach consists of three simple parts: a non-reflective interpreter, the operator 4 and the
operators r .
Third, our approach enables language designers to precisely select which mechanisms of the language
are reflective. With the exception of IGUANA and OPEN-C++, all the reflective systems cited
above do not have this characteristic. Finally, note that our approach shares a general notion of completeness
with 3-LISP, 3-KRS and AGORA: the programming model is defined by the interpreter
and almost all of its features can be made reifiable ("up" and "down" are primitives in 3-LISP and
cannot be reified, for instance). Asai et al. [amy96] also start from such a complete model but this
interesting approach to reflection in functional languages restricts reifiable entities in order to allow
optimization by partial evaluation. In contrast, the remaining reflective systems described above do
not base reflection on features of an underlying interpreter but implement an ad hoc MOP. The notion
of completeness therefore does not make sense for them.
10 Conclusion and future work
In this paper we have presented a program transformation technique to generate reflective object
oriented interpreters from non-reflective ones. This technique allows specially-tailored MOPs to be
produced quickly. New MOPs can be developed from scratch or by refinement from existing ones
as exemplified in Section 8. Compared to general MOPs, specially-tailored ones could be tuned, for
instance, towards better efficiency and security properties.
To the best of our knowledge, the resulting framework for reflective object-oriented languages is
the first one satisfying the three basic properties mentioned in Section 9. Consequently, our approach
cleanly distinguishes between reifiable and non-reifiable entities, thus helping the understanding of
reflective programs.
A prototype implementation, called METAJ [metaj], is available.
Future work. We presented a generic reification technique for object-oriented reflective languages,
which provides a basis for the exploration of the metaprogramming design space, optimization techniques
and the formalization of reflective systems.
First, at the system level the design space of MOPs should be explored by defining and refining
different non-reflective interpreters as exemplified in Section 8, yielding a taxonomy of reflective
mechanisms. At the user level, the proliferation of reflective dialects requires appropriate design and
programming tools, including libraries of user-friendly reflective operators, program analyses and
type systems.
Second, reflection is deeply related to interpretation. Each dispatch object introduces a new interpretation
layer by calling the method eval(). So, specialization techniques like partial evaluation
[bn00] are prime candidates for efficiency improvements. Furthermore, user-written reflective programs
may not use all reflective capabilities provided by a reflective interpreter (e.g. only make use
of a bounded number of reflective levels). In this case, optimization techniques such as that presented
by Asai et al. [amy96] could be used to merge interpretation levels.
Third, since reflective programming is a rather complex task, it should be based on a formal
semantics, e.g. to define and ensure security properties. We believe that our transformation could be
used to generate specially-tailored reflective semantics from a non-reflective one.
Finally, we firmly believe that our reification technique can also be applied to (parts of) applications
instead of an interpreter in order to make them reflective (preliminary results can be found in a
related paper by the authors [ds00]).
Acknowledgements
. We thank the anonymous referees for their numerous constructive comments
and the editor Olivier Danvy. The work reported here has also benefited from remarks by Kris de
Volder, Shigeru Chiba and Jan Vitek. It has been improved through many discussions with our colleagues
Noury Bouraqadi, Mathias Braux and Thomas Ledoux.
--R
Duplication and Partial Evaluation - For a Better Understanding of Reflective Languages
Programming with Explicit Metaclasses in SMALLTALK.
A Metaobject Protocol for C++.
Metaclasses are First Class Objects: the OBJVLISP Model.
"Object-Oriented Programming: The CLOS perspectives?"
Intensions and Extensions in a Reflective Tower.
On the lightweight and selective introduction of reflective capabilities in applications.
Design Patterns.
Design and Implementation of a Meta Architecture for Java.
Using Meta-Objects to Support Optimisation in the Apertos Operating System
Sun Microsystems
The Art of the Metaobject Protocol.
Concepts and Experiments in Computational Reflection.
A Semantics of Introspection in a Reflective Prototype-Based Language
Towards a Theory of Reflective Programming Languages.
http://www.
"Prototype-based Programming"
M-LISP: A Representation-Independent Dialect of LISP with Reduction Semantics.
SMALLTALK: a Reflective Language.
Reflection and Semantics in LISP.
The Mystery of the Tower Revealed: A Non-Reflective Description of the Reflective Tower
--TR
--CTR
Gregory T. Sullivan, Aspect-oriented programming using reflection and metaobject protocols, Communications of the ACM, v.44 n.10, p.95-97, Oct. 2001
Manuel Clavel , Jos Meseguer , Miguel Palomino, Reflection in membership equational logic, many-sorted equational logic, Horn logic with equality, and rewriting logic, Theoretical Computer Science, v.373 n.1-2, p.70-91, March, 2007 | reflection;language implementation;OO languages;program transformation |
609228 | A Per Model of Secure Information Flow in Sequential Programs. | This paper proposes an extensional semantics-based formal specification of secure information-flow properties in sequential programs based on representing degrees of security by partial equivalence relations (pers). The specification clarifies and unifies a number of specific correctness arguments in the literature and connections to other forms of program analysis. The approach is inspired by (and in the deterministic case equivalent to) the use of partial equivalence relations in specifying binding-time analysis, and is thus able to specify security properties of higher-order functions and partially confidential data. We also show how the per approach can handle nondeterminism for a first-order language, by using powerdomain semantics and show how probabilistic security properties can be formalised by using probabilistic powerdomain semantics. We illustrate the usefulness of the compositional nature of the security specifications by presenting a straightforward correctness proof for a simple type-based security analysis. | Introduction
1.1 Motivation
You have received a program from an untrusted
source. Let us call it company M. M promises to
help you to optimise your personal financial invest-
ments, information about which you have stored in
a database on your home computer. The software
is offered to you free of charge (for a limited time), under the condition that
you permit a log-file containing a summary of your
usage of the program to be automatically emailed
back to the developers of the program (who claim
they wish to determine the most commonly used
features of their tool). Is such a program safe
to use? The program must be allowed access to
your personal investment information, and is allowed
to send information, via the log-file, back to
M. But how can you be sure that M is not obtaining
your sensitive private financial information
by cunningly encoding it in the contents of the
innocent-looking log-file? This is an example of
the problem of determining that the program has
secure information flow. Information about your
sensitive "high-security" data should not be able
to propagate to the "low-security" output (the log-
file). Traditional methods of access control are of
limited use here since the program has legitimate
access to the database.
This paper proposes an extensional semantics-based
formal specification of secure information-flow
properties in sequential programs based on
representing degrees of security by partial equiv-
Department of Computer Science, Chalmers University
of Technology and the University of Göteborg,
{andrei,dave}@cs.chalmers.se
alence relations 1 . The specification clarifies and
unifies a number of specific correctness arguments
in the literature, and connections to other forms
of program analysis. The approach is inspired
by (and equivalent to) the use of partial equivalence
relations in specifying binding-time analysis
[HS91], and is thus able to specify security properties
of higher order functions and "partially confidential
data" (e.g. one's financial database could
be deemed to be partially confidential if the number
of entries is not deemed to be confidential even
though the entries themselves are). We show how
the approach can be extended to handle nondeter-
minism, and illustrate how the various choices of
powerdomain semantics affects the kinds of security
properties that can be expressed, ranging from
termination-insensitive properties (corresponding
to the use of the Hoare (partial correctness) pow-
erdomain) to probabilistic security properties, obtained
when one uses a probabilistic powerdomain.
1.2 Background
The study of information flow in the context of systems
with multiple levels of confidentiality was pioneered
by Denning [Den76, DD77] in an extension
of Bell and LaPadula's early work [BL76]. Den-
ning's approach is to apply a static analysis suitable
for inclusion into a compiler. The basic idea
is that security levels are represented as a lattice
(for example the two point lattice PublicDomain -
TopSecret ). The aim of the static analysis is to
ensure that information from inputs, variables or
processes of a given security level only flows to out-
1 A partial equivalence relation is symmetric and transitive
but not necessarily reflexive.
puts, variables or processes which have been assigned
a higher or equal security level.
Semantic Foundations of Information
Flow Analysis
In order to verify a program analysis or a specific
proof a program's security one must have a formal
specification of what constitutes secure information
flow. The value of a semantics-based specification
for secure information flow is that it contributes
significantly to the reliability of and the
confidence in such activities, and can be used in
the systematic design of such analyses. Many approaches
to Denning-style analyses (including the
original articles) contain a fair degree of formalism
but arguably are lacking a rigorous soundness
proof. Volpano et al [VSI96] claim to give the
first satisfactory treatment of soundness of Den-
ning's analysis. Such a claim rests on the dissatisfaction
with soundness arguments based on an
instrumented operational (e.g., [Ørb95]) or denotational
semantics e.g., [MS92], or on "axiomatic"
approaches which define security in terms of a program
logic [AR80] without any models to relate
the logic to the semantics of the programming lan-
guage. The problem here is that an "instrumented
semantics" or a "security logic" is just a definition,
not subject to any further mathematical justifica-
tion. McLean points out [McL90] in a related discussion
about the (non language-specific) Bell and
LaPadula model:
One problem is that
LaPadula security properties] constitute
a possible implementation of security,
rather than an abstract specification
of what all secure systems must satisfy.
By concerning themselves with particular
controls over files inside the computer,
rather than limiting themselves to the relation
between input and output, they
make it harder to reason about the requirements.
This criticism points to more abstract, extensional
notions of soundness, based on, for example, the
idea of noninterference introduced in [GM82].
Semantics-based models of Information
Flow
The problem of secure information flow, or "non-
interference" is now quite mature, and very many
specifications exist in the literature - see [McL94]
for a tutorial overview. Many approaches have
been phrased in terms of abstract, and sometimes
rather ad hoc models of computation. Only more
recently have attempts been made to rephrase and
compare various security conditions in terms of
well-known semantic models, e.g. the use of labelled
transition systems and bisimulation semantics
in [FG45]. In this paper we consider the
problem of information-flow properties of sequential
systems, and use the framework of denotational
semantics as our formal model of compu-
tation. Along the way we consider some relations
to specific static analyses, such as the Security
Lambda Calculus [HR98] and an alternative semantic
condition for secure information flow proposed
by Leino and Joshi [LJ98].
1.3
Overview
The rest of the paper is organised as follows.
Section 2 shows how the per-based condition for
soundness of binding times analysis is also a
model of secure information flow. We show
how this provides insight into the treatment
of higher-order functions and structured data.
Section 3 shows how the approach can be
adapted to the setting of a nondeterministic
imperative language by appropriate use of a
powerdomain-based semantics. We show how
the choice of powerdomain (upper, lower or
convex) affects the nature of the security condition
Section 4 focuses on an alternative semantic
specification due to Leino and Joshi. Modulo
some technicalities we show that Leino's condition
- and a family of similar conditions -
are in agreement with, and can be represented
using our form of specification.
Section 5 considers the problem of preventing
unwanted probabilistic information flows in
programs. We show how this can be solved
in the same framework by utilising a probabilistic
semantics based on the probabilistic
powerdomain [JP89].
2 A Per Model of Information Flow
In this section we introduce the way that partial
equivalence relations (pers) can be used to model
dependencies in programs. The basic idea comes
from Hunt's use of pers to model and construct abstract
interpretations for strictness properties in
higher-order functional programs [Hun90, Hun91],
and in particular its use to model dependencies
in binding-time analysis [HS91]. Related ideas already
occur in the denotational formulation of live-
variable analysis [Nie90].
2.1 Binding Time Analysis as Dependency
Analysis
Given a description of the parameters in a program
that will be known at partial evaluation time
(called the static arguments), a binding-time analysis
must determine which parts of the program
are dependent solely on these known parts
(and therefore also known at partial evaluation
time). The safety condition for binding time analysis
must ensure that there is no dependency between
the dynamic (i.e., non-static) arguments and
the parts of the program that are deemed to be
static. Viewed in this way, binding time analysis
is purely an analysis of dependencies. 2
Dependencies in Security In the security
field, the property of absence of unwanted dependencies
is often called noninterference, after
[GM82]. Many problems in security come down
to forms of dependency analysis. For example, in
the case of confidentiality, the aim is to show that
the outputs of a program which are deemed to be
of low confidentiality do not have any dependence
2 Unfortunately, from the perspective of a partial evalua-
tor, BTA is not purely a matter of dependencies; in [HS95]
it was shown that the pure dependency models of [Lau89]
and [HS91] are not adequate to ensure the safety of partial
evaluation.
on inputs of a higher degree of confidentiality. In
the case of integrity (trust ), one must ensure that
the value of some trusted data does not depend on
some untrusted source.
Some intuitions about information flow Let
us consider a program modelled as a function from
some input domain to an output domain. Now
consider the following simple functions mapping inputs
to outputs: snd : D × E → E (for some sets or
domains D and E), defined by snd(x, y) = y, together with
two further functions, shift and test, on pairs of natural numbers.
Now suppose that (h; l) is a pair where h is some
high security information, and l is low, "public do-
main", information. Without knowing about what
the actual values h and l might be, we know about
the result of applying function snd will be a low
value, and, in the case that we have a pair of num-
bers, the result of applying shift will be a pair with
a high second component and a low first component
Note that the function test does not enjoy the
same security property that snd does, since although
it produces a value which is constructed
from purely low-security components, the actual
value is dependent on the first component of the
input. This is what is known as an indirect information
flow [Den76].
It is rather natural to think of these properties
as "security types":
snd : high × low → low
shift : high × low → low × high
test : high × low → high
But what notion of "type", and what interpretation
of "high" and "low" can formalise these more
intuitive type statements? Interpreting types as
sets of values is not adequate to model "high" and
"low". To track degrees of dependence between inputs
and outputs we need a more dynamic view of a
type as a degree of variation. We must vary (parts
of) the input and observe which (parts of) the output
vary. For the application to confidentiality we
want to determine if there is possible information
leakage from a high level input to the parts of an
output which are intended to be visible to a low
security observer. We can detect this by observing
whether the "low" parts of the output vary in any
way as we vary the high input.
The simple properties of the functions snd and
shift described above can be captured formally
by the following formulae:
for all h, h′ in D and l in E, snd(h, l) = snd(h′, l)    (1)
for all h, h′, l in N, fst(shift(h, l)) = fst(shift(h′, l))    (2)
(where fst projects the first component of a pair).
Indeed, this kind of formula forms the core of
the correctness arguments for the security analyses
proposed by Volpano and Smith et al [VSI96,
SV98], and also for the extensional correctness
proofs in core of the Slam-calculus [HR98].
High and Low as Equivalence Relations We
show how we can interpret "security types" in general
as partial equivalence relations. We will interpret
high(for values in D) as the equivalence relation
All_D, and low as the relation Id_D, given by

x All_D x′  for all x, x′ ∈ D    (3)
x Id_D x′  iff  x = x′

For a function f : D → E and binary relations P on D and Q on E, we write f : P ⇒ Q
iff x P x′ implies (f x) Q (f x′) for all x, x′ ∈ D.
For binary relations P, Q we define the relation P ⇒ Q on functions by
f (P ⇒ Q) g  iff  x P x′ implies (f x) Q (g x′) for all x, x′
(and on pairs, (x, y) (P × Q) (x′, y′) iff x P x′ and y Q y′).
Now the security property of snd described by (1)
can be captured by snd : All_D × Id_E ⇒ Id_E,
and (2) is given by shift : All_N × Id_N ⇒ Id_N × All_N.
2.2 From Equivalence Relations to Pers
We have seen how the equivalence relations All and
Id may be used to describe security "properties"
high and low . It turns out that these are exactly
the same as the interpretations given to the notions
"dynamic" and "static" given in [HS91]. This
means that the binding-time analysis for a higher-order
functional language can also be read as a security
information-flow analysis. This connection
between security and binding time analysis is already
e.g. [Thi97] for a comparison of
a particular security type system and a particular
binding-time analysis, and [DRH95] which shows
how the incorporation of indirect information flows
from Dennings security analysis can improve binding
time analyses).
It is worth highlighting a few of the pertinent
ideas from [HS91]. Beginning with the equivalence
relations All and Id to describe high and
low respectively, there are two important extensions
to the basic idea in order to handle
structured data types and higher-order functions.
Both of these ideas are handled by the analysis
of [HS91] which rather straightforwardly extends
Launchbury's projection-based binding-time analysis
[Lau89] to higher types. To some extent [HS91]
anticipates the treatment of partially-secure data
types in the SLam calculus [HR98], and the use of
logical relations in their proof of noninterference.
For structured data it is useful to have more refined
notions of security than just high and low ;
we would like to be able to model various degrees
of security. For example, we may have a list of
records containing name-password pairs. Assuming
passwords are considered high , we might like
to express the fact that although the whole list
cannot be considered low , it can be considered as a
(low \Theta high)list. Constructing equivalence relations
which represent such properties is straightforward
see [HS91] for examples (which are adapted directly
from Launchbury's work), and [Hun91] for a
more general treatment of finite lattices of "bind-
ing times" for recursive types.
To represent security properties of higher-order
functions we use a less restricted class of relations
than the equivalence relations. A partial equivalence
relation (per) on a set D is a binary relation
on D which is symmetric and transitive. If P is
such a per let jP j denote the domain of P , given
by
Note that the domain and range of a per P are
both equal to jP j (so for any x; y 2 D, if x P y
then x P x and y P y), and that the restriction
of P to jP j is an equivalence relation. Clearly, an
equivalence relation is just a per which is reflexive
equivalence relations over
various applicative structures have been used to
construct models of the polymorphic lambda calculus
(see, for example, [AP90]). As far as we are
aware, the first use of pers in static program analysis
is that presented in [Hun90].
For a given set D let Per(D) denote the partial
equivalence relations over D. Per(D) is a meet
semi-lattice, with meets given by set-intersection,
and top element All .
Given pers P ∈ Per(D) and Q ∈ Per(E), we
may construct a new per (P ⇒ Q) ∈ Per(D → E)
defined by:
f (P ⇒ Q) g  iff  x P x′ implies (f x) Q (g x′) for all x, x′ ∈ D.
If P is a per, we will write x : P to mean x ∈ |P|.
This notation and the above definition of P ⇒ Q
are consistent with the notation used previously,
since now f : P ⇒ Q holds exactly when f (P ⇒ Q) f.
Note that even if P and Q are both total (i.e.,
equivalence relations), P ⇒ Q may be partial. A
simple example is All ⇒ Id: if f : All ⇒ Id then
we know that given a high input, f returns a low
output. A constant function λx.42 has this prop-
erty, but clearly not all functions satisfy this.
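As a quick check of these two claims: x All x′ implies (λx.42) x = 42 = (λx.42) x′, so λx.42 : All ⇒ Id; whereas for the identity function, x All x′ does not imply id x = id x′ (take any x ≠ x′), so id is not in |All ⇒ Id|. In security terms, a program that ignores its high input can be given the type high → low, while one that copies it cannot.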
2.3 Observations on Strictness and Termination
Properties
We are interested in the security properties of functions
which are the denotations of programs (in a
Scott-style denotational semantics), and so there
are some termination issues which we should address.
The formulation of security properties given above
is sensitive to termination. Consider, for example,
the following function, which diverges exactly when its argument is zero:
f = λx. if x = 0 then loop else x.
Clearly, if the argument is high then the result
must be high. Now consider the security properties
of the function g ∘ f where g is a constant function, λy.42 say.
We might like to consider that g has type high → low.
However, if function application
is considered to be strict (as in ML) then g is not in the interpretation of high → low.
Hence the function g ∘ f does not have security type
high → low (in our semantic interpretation). This
is correct, since on termination of an application
of this function, the low observer will have learned
that the value of the high argument was non-zero.
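Concretely, if f diverges precisely on the high input 0, then (g ∘ f)(0) = g(⊥) = ⊥ while (g ∘ f)(1) = g(1) = 42, so two high inputs related by All lead to observably different outcomes (divergence versus a result); no per on the outputs that distinguishes these runs can witness a high → low typing for g ∘ f under the strict interpretation.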
The specific security analysis of e.g. Smith and
Volpano [SV98] is termination sensitive - and this
is enforced by a rather sweeping measure: no
branching condition may involve a high variable.
On the other hand, the type system of the SLam
calculus [HR98] is not termination sensitive in gen-
eral. This is due to the fact that it is based on a
call-by-value semantics, and indeed the composition
could be considered to have a security
type corresponding to "high ! low ". The correctness
proof for noninterference carefully avoids
saying anything about nonterminating executions.
What is perhaps worth noting here is that had they
chosen a non-strict semantics for application then
the same type-system would yield termination sensitive
security properties! So we might say that
lazy programs are intrinsically more secure than
strict ones. This phenomenon is closely related to
properties of parametrically polymorphic functions
[Rey83] 3 . From the type of a polymorphic function
one can predict certain properties about its behaviour
- the so-called "free theorems" of the type
[Wad89]. However, in a strict language one must
add an additional condition in order that the theorems
hold: the functions must be bottom-reflecting
3 Not forgetting that the use of Pers in static analysis
was inspired, in part, by Abadi and Plotkin's Per model of
polymorphic types [AP90]
?). The same side condition
can be added to make the e.g. the type system of
the Slam-calculus termination-sensitive.
To make this observation precise we introduce
one further constructor for pers. If R 2 Per(D)
then we will also let R denote the corresponding
per on D? without explicit injection of elements
from D into elements in D? . We will write R?
to denote the relation in Per(D? ) which naturally
extends R by relating ⊥ to ⊥ (so ⊥ R⊥ ⊥).
Now we can be more precise about the properties
of g under a strict (call-by-value) interpretation: g : All ⇒ Id⊥, which expresses that g
is a constant function, modulo strictness. More informatively
we can say that g : All ⇒ Id,
which expresses that g is a non-bottom constant
function.
It is straightforward to express per properties in
a subtype system of compositional rules (although
we don't claim that such a system would be in
any sense complete). Pleasantly, all the expected
subtyping rules are sound when types are interpreted
as pers and the subtyping relation is interpreted
as subset inclusion of relations. For the abstract
interpretation presented in [HS91] this has
already been undertaken by e.g. Jensen [Jen92] and
Hankin and Le Métayer [HL94].
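One concrete instance of such a rule (our example): since Id ⊆ All, contravariance in the argument and covariance in the result give (All ⇒ Id) ⊆ (Id ⇒ Id) ⊆ (Id ⇒ All). Read as security types, a function usable at high → low is also usable at low → low and at low → high, which is exactly the subtyping induced by low ⊑ high.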
3 Nondeterministic Information
Flow
In this section we show how the per model of security
can be extended to describe nondeterministic
computations. We see nondeterminism as an important
feature as it arises naturally when considering
the semantics of a concurrent language (al-
though the treatment of a concurrent language remains
outside the scope of the present paper.)
In order to focus on the essence of the problem
we consider a very simplified setting - the analysis
of commands in some simple imperative language
containing a nondeterministic choice operator. We
assume that there is some discrete (i.e., unordered)
domain St of states (which might be viewed as
finite maps from variables to discrete values, or
simply just a tuple of values).
3.1 Secure Commands in a Deterministic
Setting
In the deterministic setting we can take the denotation
of a command C, written ⟦C⟧, to be a function
in [St⊥ → St⊥], where by [D⊥ → E⊥] we mean the
set of strict and continuous maps between domains
D⊥ and E⊥. Note that we could equally well take
the set of all functions in St → St⊥, which is isomorphic to it.
Now suppose that the state is just a simple partition
into a high-security half and a low-security
half, so the set of states is the product St_high ×
St_low. Then we might define a command C to be
secure if no information from the high part of the
state can leak into the low part:
C is secure  iff  ⟦C⟧ : (All × Id)⊥ ⇒ (All × Id)⊥.
Which is equivalent to saying that ⟦C⟧ : (All ×
Id) ⇒ (All × Id)⊥, since we only consider strict
functions. Note that this does not imply that ⟦C⟧
terminates, but what it does imply is that the termination
behaviour is not influenced by the values
of the high part of the state. It is easy to see that
the sequential composition of secure commands is
a secure command, since firstly, the denotation of
the sequential composition of commands is just
the function-composition of denotations, and sec-
ondly, in general for functions f : D → E and g : E → F, and pers P ∈ Per(D), Q ∈ Per(E)
and R ∈ Per(F), it is easy to verify the soundness
of the inference rule: from f : P ⇒ Q and g : Q ⇒ R conclude g ∘ f : P ⇒ R
(if x P x′ then (f x) Q (f x′) and hence (g (f x)) R (g (f x′))).
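To connect the relational security condition above with something executable, here is a small brute-force check over a toy finite state space (our illustration only; representing states as pairs of small integers and divergence as null are assumptions of the sketch, not part of the development):

  import java.util.function.UnaryOperator;

  class SecureCheck {
    record State(int high, int low) {}

    // true iff varying only the high input never changes the low result
    // nor whether a result is produced at all
    static boolean isSecure(UnaryOperator<State> c, int range) {
      for (int h1 = 0; h1 < range; h1++)
        for (int h2 = 0; h2 < range; h2++)
          for (int l = 0; l < range; l++) {
            State r1 = c.apply(new State(h1, l));
            State r2 = c.apply(new State(h2, l));
            if ((r1 == null) != (r2 == null)) return false;      // termination leak
            if (r1 != null && r1.low() != r2.low()) return false; // value leak
          }
      return true;
    }

    public static void main(String[] args) {
      UnaryOperator<State> copyHighToLow = s -> new State(s.high(), s.high());
      UnaryOperator<State> clearLow      = s -> new State(s.high(), 0);
      System.out.println(isSecure(copyHighToLow, 4)); // false
      System.out.println(isSecure(clearLow, 4));      // true
    }
  }

The two calls report false and true respectively, matching the intuition that copying high data into the low half of the state is insecure, while overwriting the low half with a constant is not.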
3.2 Powerdomain Semantics for Nondeterminism
A standard approach to giving meaning to a non-deterministic
language - for example Dijkstra's
guarded command language - is to interpret a
command as a mapping which yields a set of re-
sults. However, when defining an ordering on the
results in order to obtain a domain, there is a
tension between the internal order of State ? and
the subset order of the powerset. This is resolved
by considering a suitable powerdomain structure
[Plo76, Smy78]. The idea is to define a preorder
on the finitely generated subsets 4 of St⊥ in terms of
the order on their elements. By quotienting "equiv-
alent sets" one obtains a partial ordering; there are three
standard choices, each depending on a different view of what sets of values
should be considered equivalent. Consider the following
three programs (an example from [Plo81]): (1) x := 0, (2) x := 0 ⊓ loop,
and (3) loop, where ⊓ denotes nondeterministic choice and loop never terminates.
In the "Hoare" or partial correctness interpretation
the first two programs are considered to be equal
since, ignoring nontermination, they yield the same
sets of outcomes. This view motivates the definition
of the Hoare or lower powerdomain, P L [St ? ].
In the "Smyth" or total correctness interpreta-
tion, programs (2) and (3) are considered equal
(equally bad!) because neither of them can guarantee
an outcome. In the general case this view motivates
the Smyth or upper powerdomain, P U [St ?
[Smy78].
In the "Egli-Milner" interpretation (leading to
the convex or Plotkin powerdomain in the general
case) all three programs are considered to have distinct
denotations.
The three powerdomains are built from a domain D by starting with the finitely generated (f.g.) subsets of D⊥ (those non-empty subsets which are either finite or contain ⊥), together with a preorder on these sets. Quotienting the f.g. sets by the associated equivalence relation yields the corresponding domain. We give each construction in turn, together with an idea of the corresponding discrete powerdomain P[St⊥].
• Upper powerdomain. The upper ordering on f.g. sets u, v is given by: u ⊑_U v iff for every y ∈ v there is some x ∈ u with x ⊑ y. In this case the induced discrete powerdomain P_U[St⊥] is isomorphic to the set of finite non-empty subsets of St together with St⊥ itself, ordered by superset inclusion.
• Lower powerdomain. The lower ordering is given by: u ⊑_L v iff for every x ∈ u there is some y ∈ v with x ⊑ y. Here the induced discrete powerdomain P_L[St⊥] is isomorphic to the powerset of St ordered by subset inclusion. This means that the domain [St⊥ → P_L[St⊥]] is isomorphic to the set of all subsets of St × St, i.e. the relational semantics.
• Convex powerdomain. The convex ordering is given by: u ⊑_C v iff u ⊑_U v and u ⊑_L v. This is also known as the Egli-Milner ordering. The resulting discrete powerdomain P_C[St⊥] is isomorphic to the f.g. subsets of St⊥ under the Egli-Milner order induced by the flat order on St⊥.
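For the discrete state domain the three preorders admit a direct computational reading; the following sketch (ours) represents an f.g. subset of St⊥ as a finite set of proper states plus a flag recording whether ⊥ is a member.

import java.util.Set;

// An f.g. subset of St_bot for a discrete state type S:
// a finite set of proper states plus a flag for membership of bottom.
record FgSet<S>(Set<S> states, boolean hasBottom) {

    // Lower (Hoare) order: every element of u is below some element of v.
    // On a flat domain this ignores bottom and reduces to inclusion of proper states.
    static <S> boolean lowerLeq(FgSet<S> u, FgSet<S> v) {
        return v.states().containsAll(u.states());
    }

    // Upper (Smyth) order: every element of v is above some element of u.
    // If u contains bottom, every v is above u; otherwise v must be a
    // bottom-free subset of u.
    static <S> boolean upperLeq(FgSet<S> u, FgSet<S> v) {
        return u.hasBottom() || (!v.hasBottom() && u.states().containsAll(v.states()));
    }

    // Convex (Egli-Milner) order: both of the above.
    static <S> boolean convexLeq(FgSet<S> u, FgSet<S> v) {
        return lowerLeq(u, v) && upperLeq(u, v);
    }
}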
A few basic properties and definitions on powerdomains will be needed. For each powerdomain constructor P[-] define the order-preserving unit {|-|} : D⊥ → P[D⊥] which takes each element a into (the powerdomain equivalence class of) the singleton set {a}. For each function f : D⊥ → P[E⊥] there exists a unique extension of f, denoted f*, with f* : P[D⊥] → P[E⊥], which is the unique structure-preserving mapping such that f* {|a|} = f a.
In the particular setting of the denotations of commands, it is worth noting that ⟦C1; C2⟧ would be given by ⟦C2⟧* ∘ ⟦C1⟧.
3.3 Pers on Powerdomains
Given one of the discrete powerdomains P[St⊥], we will need a "logical" way to lift a per on St⊥ to a per in Per(P[St⊥]).
Definition 1. For each R ∈ Per(D⊥) and each choice of powerdomain P[-], let P[R] denote the relation on P[D⊥] which relates A and B just when every element of A is R-related to some element of B and every element of B is R-related to some element of A.
It is easy to check that P[R] is a per, and in particular that P[Id_{D⊥}] = Id_{P[D⊥]}.
Henceforth we shall restrict our attention to the semantics of simple commands, and hence to the three discrete powerdomains P[St⊥].
Proposition 1. For any f ∈ [St⊥ → P[St⊥]] and any R, S ∈ Per(St⊥), if f : R⊥ → P[S] then f* : P[R] → P[S].
From this it easily follows that the following inference rule is sound:

⟦C1⟧ : R⊥ → P[S]    ⟦C2⟧ : S⊥ → P[T]
------------------------------------
        ⟦C1; C2⟧ : R⊥ → P[T]
3.4 The Security Condition
We will investigate the implications of the security condition under each of the powerdomain interpretations. Let us suppose that, as before, the state is partitioned into a high part and a low part: St = St_high × St_low. With respect to a particular choice of powerdomain, let the security "type" high × low → high × low denote the property ⟦C⟧ : (All × Id)⊥ → P[(All × Id)⊥]. In this case we say that C is secure. Now we explore the implications of this definition for each of the possible choices of powerdomain:
1. In the lower powerdomain, the security condition describes a weak, termination-insensitive notion of information flow. For example, a program whose set of possible proper outcomes is independent of h, but which can fail to terminate only for one particular value of h (h is the high part of the state; see the sketch after this list), is considered secure under this interpretation even though its termination behaviour is influenced by h.
2. In the upper powerdomain nontermination is considered catastrophic. This interpretation seems completely unsuitable for security unless one only considers programs which are "totally correct", i.e. which must terminate on their intended domain. Otherwise, a possibly nonterminating computation path will mask any other insecure behaviours a term might exhibit. This means that for any program C, the program which nondeterministically chooses between C and a diverging command is secure!
3. The convex powerdomain gives the appropriate
generalisation of the deterministic case in
the sense that it is termination sensitive, and
does not have the shortcomings of the upper
powerdomain interpretation.
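The following sketch (ours, not the program elided from item 1 above) illustrates the termination channel that the lower powerdomain ignores: for every h the set of possible proper outcomes is the same, yet the program can diverge only when h is 0.

// Ours: choose() models nondeterministic choice, h is high, the result is low.
final class TerminationChannel {
    static boolean choose() { return Math.random() < 0.5; }

    // Possible proper outcomes: l = 0, for every h.
    // Divergence is possible only when h == 0, so termination leaks h.
    static int run(int h) {
        if (choose() && h == 0) {
            while (true) { }   // loop
        }
        return 0;              // l := 0
    }
}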
4 Relation to an Equational Characterisation
In this section we relate the per-based security condition to a proposal by Leino and Joshi [LJ98]. Following their approach, assume for simplicity that we have programs with just two variables: h and l, of high and low secrecy respectively. Assume that the state is simply a pair, where h refers to the first projection and l to the second projection.
In [LJ98] the security condition for a program C is defined by

HH; C; HH = C; HH

where "=" stands for semantic equality (the style of semantic specification is left unfixed), and HH is the program that "assigns to h arbitrary values", a.k.a. "Havoc on H". We will refer to this equation as the equational security condition. Intuitively, the equation says that we cannot learn anything about the initial values of the high variables by variation of the low security variables. The postfix occurrences of HH on each side mean that we are only interested in the final value of l. The prefix HH on the left-hand side means that the two programs are equal if the final value of l does not depend on the initial value of h.
In relating the equational security condition to pers we must first decide upon the denotation of HH. Here we run into some potential problems, since it is necessary in [LJ98] that HH always terminates, but nevertheless exhibits unbounded nondeterminism. Although this appears to pose no problems in [LJ98] (in fact it goes without mention), to handle this we would need to work with non-continuous semantics and powerdomains for unbounded nondeterminism. Instead, we side-step the issue by assuming that the domain of h, St_high, is finite.
Secondly we must find common ground for our semantic interpretation. It is not the style of semantic definition that is important (viz. operational vs denotational vs axiomatic), but rather the interpretation of nondeterminism itself. Leino and Joshi consider two styles of interpretation with different treatments of nondeterminism: the relational interpretation (corresponding to the choice of the lower powerdomain) and the wlp/wp semantics, which corresponds to the convex powerdomain interpretation. Leino and Joshi claim that, under a relational semantics, the security condition is equivalent to a notion used elsewhere in the literature. As we shall see, the relational-semantics interpretation of the security condition allows programs to leak information via their termination behaviour, so this observation is not entirely correct.
4.1 Equational Security and Projection Analysis
A first observation is that the equational security condition is strikingly similar to a well-known form of static analysis for functional programs known as projection analysis [WH87]. Given a function f, a projection analysis aims to find projections (continuous lower closure operators on the domain) α and β such that

β ∘ f = β ∘ f ∘ α.

For (generalised) strictness analysis and dead-variable analysis one is given β, and α is to be determined; for binding-time analysis [Lau89] it is a forwards analysis problem: given α one must determine some β.
For strict functions (e.g., the denotations of commands) projection analysis is not so readily applicable. However, in the convex powerdomain HH is rather projection-like, since it effectively hides all information about the high variable; in fact it is an embedding (an upper closure operator), so the connection is rather close.
4.2 The equational security condition is subsumed by the per security condition
Hunt [Hun90] showed that projection properties of the form β ∘ f = β ∘ f ∘ α could be expressed naturally as a per property of the form f : R_α → R_β, for equivalence relations R_α and R_β derived from α and β by relating elements which get mapped to the same point by the corresponding projection.
Using the same idea we can show that the per-based security condition subsumes the equational specification in a similar manner.
We will establish the following:
Theorem 1. For any command C,
HH; C; HH = C; HH  iff  ⟦C⟧ : high × low → high × low.
The idea will be to associate an equivalence relation to the function ⟦HH⟧. More generally, for any command C let ker(C), the kernel of C, denote the relation on P[St⊥] given by
A ker(C) B  ⟺  ⟦C⟧* A = ⟦C⟧* B.
Recall the per interpretation of the type signature of C:
⟦C⟧ : high × low → high × low means ⟦C⟧ : (All × Id)⊥ → P[(All × Id)⊥].
Observe that ker(HH) contains P[(All × Id)⊥], since for any two sets of outcomes that differ only in their high components, applying HH yields the same set of results.
The proof of the theorem is based on this observation and on the following two facts, established as Theorems 3 and 2 below, respectively.
Let us first prove the latter fact by proving a more general statement, similar to Proposition 3.1.5 from [Hun91] (the correspondence between projections and per analysis). Note that we do not use the specifics of the convex powerdomain semantics here, so the proof is valid for any of the three choices of powerdomain.
Theorem 2. Let us say that a command B is idempotent iff ⟦B; B⟧ = ⟦B⟧. For any commands C and D, and any idempotent command B, ⟦B; C; D⟧ = ⟦C; D⟧ iff ker(B) ⊆ ker(C; D).
Proof. For the main direction, suppose ker(B) ⊆ ker(C; D) and take any state s. By idempotence ⟦B⟧*(⟦B⟧ s) = ⟦B; B⟧ s = ⟦B⟧ s, so {|s|} and ⟦B⟧ s are ker(B)-related, hence ⟦C; D⟧ s = ⟦C; D⟧*(⟦B⟧ s) = ⟦B; C; D⟧ s. The converse direction follows directly from the law (⟦C; D⟧* ∘ ⟦B⟧)* = ⟦C; D⟧* ∘ ⟦B⟧*.
Corollary. Since ⟦HH⟧ is idempotent, we can conclude that HH; C; HH = C; HH iff ker(HH) ⊆ ker(C; HH).
It remains to establish the first fact.
It remains to establish the first fact.
Theorem 3. P[(All × Id)⊥] = ker(HH).
Proof. (⊆): Suppose A P[(All × Id)⊥] B; we need to show ⟦HH⟧* A = ⟦HH⟧* B, which holds because HH (being bottom-reflecting) maps any two states with the same low part to the same set of results.
(⊇): For the other direction assume ⟦HH⟧* A = ⟦HH⟧* B; then A and B can differ only in their high components (and in ⊥), i.e. A P[(All × Id)⊥] B.
Thus, the equational and per security conditions in this simple case are equivalent.
5 A Probabilistic Security Condition
There are still some weaknesses in the security condition, when interpreted in the convex powerdomain, as soon as nondeterministic programs are considered. In the usual terminology of information flow, we have so far considered possibilistic information flows. The probabilistic nature of an implementation may allow probabilistic information flows for "secure" programs. Consider the program discussed below (a rendering of it is sketched after this paragraph). This program is secure in the convex powerdomain interpretation since, regardless of the value of h, the value of l can be any value in the range {0, ..., 99}. But with a reasonably fair implementation of the nondeterministic choice and of the randomised assignment, it is clear that a few runs of the program, for a fixed input value of h, could yield a rather clear indication of its value by observing only the final values of l: if one value, say 2, turns up in a disproportionately large fraction of the runs, we might reasonably conclude that the value of h was 2.
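The program at issue is rendered here in Java (ours; the original syntax did not survive in this copy): a fair choice between copying h into l and assigning l a uniformly random value below 100.

import java.util.Random;

// Ours: a rendering of the kind of program discussed above.
// With a fair coin, roughly half of all runs satisfy l == h, so repeated
// observation of l reveals h, even though the *set* of possible final
// values of l is {0,...,99} for every h.
final class ProbabilisticLeak {
    private static final Random RND = new Random();

    static int run(int h) {
        if (RND.nextBoolean()) {
            return h % 100;          // l := h
        } else {
            return RND.nextInt(100); // l := rand(99)
        }
    }

    public static void main(String[] args) {
        int h = 2;
        int[] freq = new int[100];
        for (int i = 0; i < 10_000; i++) freq[run(h)]++;
        // freq[2] will be close to 5050, every other entry close to 50.
        System.out.println("freq[2] = " + freq[2]);
    }
}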
To counter this problem we consider probabilistic powerdomains [JP89], which allow the probabilistic nature of choice to be reflected in the semantics of programs, and hence enable us to capture the fact that varying the value of h causes a change in the probability distribution of the values of l.
In the "possibilistic" setting the denotation of a command C was a function in [St⊥ → P[St⊥]]. In the probabilistic case, given an input to C we keep track not only of the possible outputs, but also of the probabilities with which they appear. Thus, we need to consider a domain E[St⊥] of distributions, or evaluations, over St⊥. The denotation of C is going to be a function in [St⊥ → E[St⊥]].
Let us first present the general construction of the probabilistic powerdomain. If D is an inductively complete partial order (ipo for short: it has lubs of directed subsets, and it is countable), then the probabilistic powerdomain of evaluations E[D] is built as follows. An evaluation on D, μ, is a function from D to [0, 1] such that Σ_{d∈D} μ d ≤ 1. Define E[D] to be the set of evaluations on D, partially ordered by μ ⊑ ν iff ∀d ≠ ⊥. μ d ≤ ν d.
Define the point-mass evaluation η_D(x) for x ∈ D by η_D(x)(d) = 1 if d = x, and 0 otherwise.
A series of theorems from [JP89] proves that E[D] is an ipo with directed lubs defined pointwise, and with least element η_D(⊥). To lift a function f : D → E[E'] we define the extension of f by
f* μ e = Σ_{d∈D} μ d · (f d) e.
The structure (E[D], η_D, *) is a Kleisli triple, and thus we have a canonical way of composing the probabilistic semantics of any two given programs. Suppose f : D → E[E'] and g : E' → E[F] are such. Then the lifted composition of g and f can be computed, by one of the Kleisli triple laws, as g* ∘ f.
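As an illustration (ours), discrete sub-probability evaluations, the point-mass η and the extension f* can be coded directly; divergence is represented implicitly by the missing probability mass.

import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// A discrete (sub-)probability evaluation over outcomes of type D:
// a finite map from outcomes to probabilities summing to at most 1.
final class Eval<D> {
    final Map<D, Double> mass = new HashMap<>();

    // Point-mass evaluation eta(x): all mass on the single outcome x.
    static <D> Eval<D> eta(D x) {
        Eval<D> e = new Eval<>();
        e.mass.put(x, 1.0);
        return e;
    }

    // Extension f*: (f*)(mu)(e) = sum over d of mu(d) * f(d)(e).
    static <D, E> Eval<E> extend(Function<D, Eval<E>> f, Eval<D> mu) {
        Eval<E> out = new Eval<>();
        mu.mass.forEach((d, p) ->
            f.apply(d).mass.forEach((e, q) -> out.mass.merge(e, p * q, Double::sum)));
        return out;
    }

    // Kleisli composition of two probabilistic denotations: g after f.
    static <D, E, F> Function<D, Eval<F>> compose(Function<D, Eval<E>> f,
                                                  Function<E, Eval<F>> g) {
        return d -> extend(g, f.apply(d));
    }
}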
The next step towards the security condition is to define how pers act on probabilistic powerdomains. Recall the definition for pers on powerdomains introduced in Section 3, which relates two sets elementwise. To lift pers to E[D] we need a definition which takes into consideration the whole of each R-equivalence class in one go.
Define the per relation E[R] on E[D], for μ, ν ∈ E[D], by
μ E[R] ν  iff  Σ_{e∈[d]_R} μ e = Σ_{e∈[d]_R} ν e for every d in the domain of R,
where [d]_R, as usual, stands for the R-equivalence class which contains d. Naturally, E[R] is again a per.
As an example, consider E[(All × Id)⊥]. Two distributions μ and ν are related iff the probability of any given low value l in the left-hand distribution, given by Σ_h μ(h, l), is equal to the corresponding probability in the right-hand distribution, namely Σ_h ν(h, l).
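Concretely (ours, with hypothetical names), relatedness by E[(All × Id)⊥] amounts to comparing the low marginals of the two distributions:

import java.util.HashMap;
import java.util.Map;

// Ours: checking the per E[(All x Id)_bot] on two discrete distributions
// over (high, low) states by comparing their low marginals.
final class LowMarginals {
    record HL(int high, int low) {}

    static Map<Integer, Double> lowMarginal(Map<HL, Double> mu) {
        Map<Integer, Double> m = new HashMap<>();
        mu.forEach((s, p) -> m.merge(s.low(), p, Double::sum));
        return m;
    }

    // mu E[(All x Id)_bot] nu  iff  for every low value l,
    // sum_h mu(h, l) == sum_h nu(h, l)  (up to rounding).
    static boolean related(Map<HL, Double> mu, Map<HL, Double> nu) {
        Map<Integer, Double> a = lowMarginal(mu), b = lowMarginal(nu);
        java.util.Set<Integer> keys = new java.util.HashSet<>(a.keySet());
        keys.addAll(b.keySet());
        return keys.stream().allMatch(l ->
            Math.abs(a.getOrDefault(l, 0.0) - b.getOrDefault(l, 0.0)) < 1e-9);
    }
}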
To make sure we have built a stronger security model, let us prove that the Egli-Milner powerdomain security condition follows from the probabilistic powerdomain one. In other words,
Theorem 4. Suppose R and S are equivalence relations on D. For any command C we have that ⟦C⟧_E : R → E[S] implies ⟦C⟧_C : R → P_C[S], where ⟦C⟧_E and ⟦C⟧_C denote the probabilistic and the convex powerdomain semantics of C respectively.
Proof. For some a, b ∈ D let μ = ⟦C⟧_E a and ν = ⟦C⟧_E b. From the assumption we can deduce that a R b implies Σ_{e∈[x]_S} μ e = Σ_{e∈[x]_S} ν e for every x. What we need to prove is that a R b implies ⟦C⟧_C a P_C[S] ⟦C⟧_C b. So assume a R b and let us show that the two result sets are P_C[S]-related. Take any x ∈ ⟦C⟧_C a. Observe that μ x > 0, since if x is a possible output of program C run on data a, then the probability of getting this output must be greater than 0. Therefore Σ_{e∈[x]_S} ν e = Σ_{e∈[x]_S} μ e > 0, so there must exist a y ∈ [x]_S such that ν y > 0. Thus y is a possible output of program C run on data b, i.e. y ∈ ⟦C⟧_C b, and y ∈ [x]_S implies x S y. The symmetric argument completes the proof.
Let us derive the probabilistic powerdomain security condition for the case of two variables h and l and the domain St = St_high × St_low. Unfolding the definitions, a command C is secure iff, for all i_h, i_h' and i_l, the probability that C diverges is the same from (i_h, i_l) as from (i_h', i_l), and for every low output value o_l,
Σ_{h∈St_high} ⟦C⟧(i_h, i_l)(h, o_l) = Σ_{h∈St_high} ⟦C⟧(i_h', i_l)(h, o_l).
Intuitively the equation means that if you vary i_h, the distribution of the low variables (the sums provide the "forgetting" of the highs) does not change.
Let us introduce probabilistic powerdomain semantics for some language constructs; here we omit the E-subscripts and simply mean the probabilistic semantics. Given two programs C1 and C2 such that ⟦C1⟧, ⟦C2⟧ : St⊥ → E[St⊥], the composition of the two program semantics is defined by ⟦C1; C2⟧ = ⟦C2⟧* ∘ ⟦C1⟧. The semantics of the uniformly distributed nondeterministic choice between C1 and C2 is defined pointwise as (1/2)·⟦C1⟧ s + (1/2)·⟦C2⟧ s. Consult [JP89] for a full account of how to define the semantics of other language constructs.
Example. Recall the program from the beginning of this section and let us check the security condition on it. Take two initial states that agree on l but have different values of h, say h = 2 and h = 3, and take o_l = 2. For the rendering sketched above, the left-hand side of the equation is 1/2 + 1/200, whereas the right-hand side is only 1/200. So the security condition does not hold and the program must be rejected.
We recently became aware of a probabilistic security type system due to Volpano and Smith [VS98], with a soundness proof based on a probabilistic operational semantics. Although the security condition that they use in their correctness argument is not directly comparable - due to the fact that they consider parallel deterministic threads and a non-compositional semantics - we can easily turn their examples into nondeterministic sequential programs with the same probabilistic behaviours, and it seems (although we have not checked all of the details) that their examples can all be verified using our security condition.
6 Conclusions
There are many possible extensions to the ideas
we have sketched, and also many limitations. We
consider a few:
Multi-level security There is no problem with
handling lattices of security levels rather than the
simple high-low distinction. But one cannot expect
to assign any intrinsic semantic meaning to
such lattices of security levels, since they represent
a "social phenomenon" which is external to the
programming language semantics. In the presence
of multiple security levels one must simply formulate
conditions for security by considering information
flows between levels in a pairwise fashion
(although of course a specific static analysis is able
to do something much more efficient).
Downgrading and Trusting There are operations which are natural to consider but which cannot be modelled in an obvious way in an extensional framework. One such operation is the downgrading of information from high to low without losing information - for example representing the secure encryption of high-level information. This seems impossible since an encryption operation does not lose information about a value and yet should have type high → low - but the only functions of type high → low are the constant functions. An analogous problem arises with Ørbæk and Palsberg's trust primitive if we try to use pers to model their integrity analysis [ØP97].
Operational Semantics We are not particularly married to the denotational perspective on programming language semantics. There are also interesting operational formulations of pers on a higher-order language, based on partial bisimulations. We hope to investigate these further.
Constructing Program Analyses Although the model seems useful for comparing other formalisms, further work is needed to show that it can assist in the systematic design of program analyses.
Concurrency Handling nondeterminism can be
viewed as the main stepping stone to formulating
a language-based security condition for concurrent
languages, but this remains a topic for further
work.
--R
A per model of polymorphism and recursive types.
An axiomatic approach to information flow in programs.
Secure Computer Systems: Unified Exposition and Multics Interpretation.
A lattice model of secure information flow.
Semantic foundations of binding-time analysis for imperative programs
A classification of security properties for process algebra.
Security policies and security models.
The SLam calculus: Programming with secrecy and integrity.
Binding Time Analysis: A New PERspective.
A semantic model of binding times for safe partial evaluation.
PERs generalise projections for strictness analysis.
Abstract Interpretation of Functional Languages: From Theory to Practice.
Abstract Interpretation in Logical Form.
A probabilistic powerdomain of evaluations.
Projection Factorisations in Partial Evaluation.
A semantic approach to secure information flow.
The specification and modeling of computer security.
A security flow control algorithm and its denotational semantics correctness proof.
A powerdomain construction.
"Pisa Notes"
Types, abstraction and parametric polymorphism.
Journal of Computer and Systems Sciences
Secure information flow in a multi-threaded imperative language
University of Nottingham (submitted for publication)
Probabilistic noninterference in a concurrent language.
A sound type system for secure flow anal- ysis
Theorems for free.
Projections for strictness analysis.
--TR
--CTR
Kyung Goo Doh , Seung Cheol Shin, Detection of information leak by data flow analysis, ACM SIGPLAN Notices, v.37 n.8, August 2002
Pablo Giambiagi , Mads Dam, On the secure implementation of security protocols, Science of Computer Programming, v.50 n.1-3, p.73-99, March 2004
Stephen Tse , Steve Zdancewic, Translating dependency into parametricity, ACM SIGPLAN Notices, v.39 n.9, September 2004
Sebastian Hunt , David Sands, On flow-sensitive security types, ACM SIGPLAN Notices, v.41 n.1, p.79-90, January 2006
Nick Benton , Peter Buchlovsky, Semantics of an effect analysis for exceptions, Proceedings of the 2007 ACM SIGPLAN international workshop on Types in languages design and implementation, January 16-16, 2007, Nice, Nice, France
Aslan Askarov , Andrei Sabelfeld, Localized delimited release: combining the what and where dimensions of information release, Proceedings of the 2007 workshop on Programming languages and analysis for security, June 14-14, 2007, San Diego, California, USA
Anindya Banerjee , Roberto Giacobazzi , Isabella Mastroeni, What You Lose is What You Leak: Information Leakage in Declassification Policies, Electronic Notes in Theoretical Computer Science (ENTCS), 173, p.47-66, April, 2007
Gilles Barthe , Leonor Prensa Nieto, Formally verifying information flow type systems for concurrent and thread systems, Proceedings of the 2004 ACM workshop on Formal methods in security engineering, October 29-29, 2004, Washington DC, USA
Nick Benton, Simple relational correctness proofs for static analyses and program transformations, ACM SIGPLAN Notices, v.39 n.1, p.14-25, January 2004
Roberto Giacobazzi , Isabella Mastroeni, Abstract non-interference: parameterizing non-interference by abstract interpretation, ACM SIGPLAN Notices, v.39 n.1, p.186-197, January 2004
Mads Dam, Decidability and proof systems for language-based noninterference relations, ACM SIGPLAN Notices, v.41 n.1, p.67-78, January 2006
Torben Amtoft , Anindya Banerjee, A logic for information flow analysis with an application to forward slicing of simple imperative programs, Science of Computer Programming, v.64 n.1, p.3-28, January, 2007
Steve Zdancewic , Andrew C. Myers, Secure Information Flow via Linear Continuations, Higher-Order and Symbolic Computation, v.15 n.2-3, p.209-234, September 2002
Heiko Mantel , Andrei Sabelfeld, A unifying approach to the security of distributed and multi-threaded programs, Journal of Computer Security, v.11 n.4, p.615-676, 01/01/2004
Andrew C. Myers , Andrei Sabelfeld , Steve Zdancewic, Enforcing robust declassification and qualified robustness, Journal of Computer Security, v.14 n.2, p.157-196, January 2006
Anindya Banerjee , David A. Naumann, Stack-based access control and secure information flow, Journal of Functional Programming, v.15 n.2, p.131-177, March 2005 | powerdomains;semantics;noninterference;confidentiality;partial equivalence relations;security;probabilistic covert channels |
609230 | Formalization and Analysis of Class Loading in Java. | Since Java security relies on the type-safety of the JVM, many formal approaches have been taken in order to prove the soundness of the JVM. This paper presents a new formalization of the JVM and proves its soundness. It is the first model to employ dynamic linking and bytecode verification to analyze the loading constraint scheme of Java2. The key concept required for proving the soundness of the new model is augmented value typing, which is defined from ordinary value typing combined with the loading constraint scheme. In proving the soundness of the model, it is shown that there are some problems inside the current reference implementation of the JVM with respect to our model. We also analyze the findClass scheme, newly introduced in Java2. The same analysis also shows why applets cannot exploit the type-spoofing vulnerability reported by Saraswat, which led to the introduction of the loading constraint scheme. | Introduction
Unlike its predecessor, C++, Java supports platform-independent bytecodes
which are compiled from source programs written in Java, sent over the network
as mobile codes, and executed by the Java Virtual Machine running within a
local application such as a Web browser.
The JVM links bytecodes sent over the network in a type-safe manner, whose
meaning is as follows.
It is guaranteed that linking bytecodes, with their type information consistent
with themselves, does not destroy the consistency of the JVM state,
which has its own type information.
The following requirement should also be satised.
If the current JVM state is consistent with its own type information, then
at its next execution step, it is still consistent.
Java is in this way a type-safe language, if these two requirements are satised.
Note that if the above consistency is broken, then the JVM incorrectly interprets
the contents pointed to by its inner pointer references. The type safety of the
JVM guarantees the memory safety of the JVM, and thus it plays the primary
role in the Java Security. In order to show the type safety property of the JVM,
Akihiko Tozawa and Masami Hagiya are at the Graduate School of Science, University
of Tokyo, Japan. Any suggestions or comments to this article are appreciated. Please send
email: fmiles, hagiyag@is.s.u-tokyo.ac.jp
a number of studies have been made so far. They are brie
y summaried in the
next subsection. This paper gives another formalization of the JVM and proves
its soundness.
Bytecodes running inside the JVM are structures into classes, each of which
is separatedly loaded and linked by the JVM. Objects called class loaders have
the responsibility of loading and linking a class. By supporting a variety of
class loaders, the JVM achieves the
exibility of class loading. However, this
exibility has been causing problems with respect to the above mentioned type-safety
of Java.
The rst contribution of our work is that of developing the new model of
the JVM. It is described in Section 2 of this paper in detail. Our model has
several improvements over those dened in the previous studies. First of all, it
includes class loaders. Java class loaders are instances of a user-dened class,
whose primary function is a map from class names to class objects. They are,
however, closely related to how the JVM internally builds its class environments.
In other words, the class environments of the JVM are not statically given, but
they are lazily built by dynamic evaluations of class loaders. Our model gives
dynamic environments which represent internal heaps of the JVM. It seems to
be the best approach to model the lazy linking semantics of the JVM by its
class loaders. Since we do not consider any static language from which these
environments can be constructed, our model is a practical one that faithfully
re
ects JVM implementations.
In 1998, Sun Microsystems released the version 1.2 of Java Development
Kit (JDK1.2), which is based on the newly proposed design principle called
Java2. This comes with the rewritten specication of the JVM, The Java Virtual
Machine Specication (2nd Edition) [9]. The most important feature introduced
to the new specication is the loading constraint scheme, which is originally
introduced by Liang and Bracha [8]. The scheme is the x of Saraswat's problem
[13] related to the unique design of Java class loaders. Our formalization also
includes this scheme. The second contribution of our work is that we have
found three problems inside the current o-cial implementation of the JVM
with respect to this scheme. These problems are not trivial ones because they
require a careful analysis of the scheme, which is done through our work on the
formalization. They are described in Section 3.
The third contribution of our work is that of proving the soundness of our
model, given in Section 2. The key notion required for the soundness proof is
the augmented value typing, which is dened from the ordinary value typing
combined with the loading constraint scheme. This new typing is shown to be
consistent with the subtype relation under the existence of the loading constraint
scheme. Note that this consistency is crucial for the soundness of the model, and
the problems we found inside the current o-cial implementation of the JVM
are due to its violation against the consistency.
Another new feature of Java2 is the findClass implementation of the class
loading. Technically speaking, both the loading constraint scheme and the
findClass scheme give constraints to the dynamic class loading of Java. Such
constraints restrict what will happen in the future, so that they further complicate
the lazy linking semantics of Java. For example, the problem we found
in the loading constraint scheme is sensitive to the timing of introducing con-
straints. This fact re
ects the subtleties of the semantics of such constraints.
The analysis of the findClass scheme in Section 4 is the fourth contribution.
By the analysis, we can also answer the old question why applets cannot cause
Saraswat's problem [13].
1.1 Related Work
Since the Java security deeply relies on the JVM and its type-safety, giving
formal models to the JVM is recently one of the major research issues in network
security.
Stata and Abadi [15] gave the JVM model including its bytecode verication.
They try to grasp the JVM as a type system, and its bytecode verication as
typing rules, whereby the correctness of the JVM is proved in the form of a
soundness theorem. Our work lays its concept for modeling the bytecode verier
on Stata and Abadi's work. In other words, we extended their model to cope
with class loaders.
Freund and Mitchell worked on a specic problem related to object initialization
[4]. Their work is also based on Stata and Abadi's.
Qian rst succeeded in modeling a large part of the JVM [12]. Formalizations
covering a wider range of the JVM (but without proofs) were given by Goldberg
[5] and Jensen, et al. [6]. In particular, Jensen, et al. dealt with Saraswat's
bug by modeling both class loading and operational semantics. Saraswat [13]
himself gave formal explanations to his problem. Dean also gave a formal model
of class loading [2]. However, it cannot explain the type spoong problems.
The lazy semantics of class loaders has not yet been fully modeled as far as
we know. We think that it has something to do with the study of modularity
as in the work by Cardelli [1].
Machine verications are also being applied to prove the type soundness of
Java [10][11].
1.2 Organization
The rest of the paper is structured as follows. Section 2 gives a basic formal
model of the JVM and also a soundness proof of the model. Section 3 mainly
explains problems with an implementation of the JVM. In Section 4, we give
further discussions related to the findClass scheme. We formally answer the
old question why applets cannot cause Saraswat's problem [13] in Section 4. We
also have discovered a new method to implement the loading constraint scheme
e-ciently.
Model and its Soundness
The formal model of the JVM presented in this paper has the following new
features compared to those in previous studies.
Modeling lazy class loading of the JVM.
Dening loading constraints.
Modifying the value typing statement.
Giving a rigorous denition of an environment, which is enough to model
all the above features.
In particular, lazy class loading of the JVM was rst modeled in this study.
Note also that for the last feature, it is necessary to dene the well-formedness
type
type option := None
Some of
type Env := f
!(StringString listT Class )
cl
(iface : String list)
class
I : Instruction list
lvars : TValue list
list
type VerifyRecord := f
lvars : String list
stack : String list
invokevirtual of String String String
areturn
::::
Figure
1: Type denitions
of an environment, which in turn is dened in terms of the well-formedness of
its various components.
In this section, we rst explain several basic denitions. Section 2.4 gives
the denition of the loading constraint scheme. Our main soundness theorem is
described in Section 2.5.
2.1 Environments
2.1.1 Definition of Environments
An environment, which represents an internal heap of the JVM, is the most
basic data structure in our formalization. Of course, every component of such
a heap cannot be modeled. Instead we give a well-dened set of components to
which some mathematical examinations are possible. It is represented by type
Env.
See the denition of types in Figure 1. An environment, i.e., an element of
Env, is a record consisting of four subsets, T Class , T Loader , TValue and T Method , of
the set, Loc, of locations, and ve maps, C, W , R. The meaning
of each subset should be clear from its index. As we will note at the end of
this section, type Env can be considered as a dependent type, since each map
and R) has an arrow type, which is constructed by using the
subsets (T Class , T Loader , TValue and T Method ) as types. Members M, C and V map
locations to appropriate denitions, i.e., they return denitions stored in E from
their references. Member W represents class loading, while R represents method
resolution. These two maps are explained later. We use the notation, E:m, to
extract member m from environment E.
Figure
2: Abbreviations inside judgments
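As a reading aid (ours, not the paper's notation), the environment and state records of Figure 1 can be pictured as plain Java records; the member names follow the text (M, C, V for the definition maps, W for class loading, R for method resolution), while every type and field name below is our own choice.

import java.util.List;
import java.util.Map;
import java.util.Optional;

// A plain-Java picture of the environment E and of JVM states.
// Loc stands for a heap location; classes, methods and values are
// identified by such references.
record Loc(int id) {}

record ClassDef(String name, String superName, Loc definingLoader,
                List<Loc> methodTable, List<String> interfaces) {}

record MethodDef(Loc definingClass, String name, List<String> descriptor,
                 List<String> instructions) {}

record ValueDef(Loc clazz) {}

record Env(Map<Loc, ClassDef> classes,                        // C
           Map<Loc, MethodDef> methods,                       // M
           Map<Loc, ValueDef> values,                         // V
           Map<Loc, Map<String, Optional<Loc>>> loading,      // W : loader -> name -> class
           Map<Loc, Map<List<String>, Loc>> resolution) {}    // R : loader -> reference -> method

record State(int pc, Loc method, List<Loc> lvars, List<Loc> stack) {}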
For any type , option denotes the type composed of constructor None
and values of wrapped with constructor Some. For example, the value of
E:W(l)(n) is either None or of the form Some c. For any type , list denotes
the type of nite lists of . For list, the i-th element of x is denotes by
x[i], and its length by len(x).
Note that in Classdef(T ),
T denotes the tuple consisting of T Class , T Loader , TValue and T Method . We dene
T Class and Classdef(T ) separately, so that there are two kinds of identity between
classes. The same situation happens for values and methods. We usually
identify an object by its reference.
A state, i.e., an element of State(T ), is either exeption , which denotes an
error state, or a record consisting of four elements, pc, method, lvars and stack.
Elements of VerifyRecord give types to the local variables and the stack of a
state. They are used in the bytecode verication. The type of instructions,
Instruction, is not dened completely. For the sake of this study, we only assume
the invokevirtual instruction and the areturn instruction.
Type theories use a -type for representing such types as Env. Tuple T is
considered as an element of the index set, Index, dened as follows.
We use a reference, i.e., an element of Loc, to access a component of an en-
vironment. Of course, there are dierent ways to dene and identify these
components. In some studies a class is identied by a tuple of a class name and
a class loader, because there should not be two classes with the same name and
loader inside a single JVM.
Our model gives a low level abstraction which re
ects practical JVM im-
plementations. In any such implementation, the class identity property is only
one of the various properties implicitly satised by the internal structure of the
heap. In practice, there is no two classes with the same name and loader only
when class loaders are synchronized. If two threads start loading the same class
by the same loader at the same time, the property would easily be violated.
Here we are not going into how to implement the JVM which always satises
such properties. Instead we have extracted a set of properties, which are
essential for the soundness of the model, called the well-formedness of the en-
vironment, and given an assumption that any modication or update of the
environment must preserve this well-formedness. The well-formedness of the
environment, denoted by wf (E), is composed of the following properties.
The property related to the classes inside an environment.
The property related to the methods inside an environment.
The property that states that all methods are statically veriable, i.e.,
have gone through the bytecode verication.
The Bridge Safety property related to the loading constraint scheme.
The above properties are denoted by wf class(E), wf method (E), veried(E)
and bridge safe(E), and are described in Sections 2.2.4, 2.3.5, 2.5.1 and 2.4.3,
respectively.
2.2 Objects, Classes and Loaders
This section gives basic denitions of objects, classes and class loaders. Each
class of Java is actually an object instance of java.lang.Class and each class
loader is of java.lang.ClassLoader, but unlike other objects, they play essential
roles in the JVM architecture, which refers to them automatically and
implicitly. Therefore, it is appropriate to dene objects, classes and class loaders
separately.
2.2.1 Classes
Let us rst explain the denition of classes in Figure 1. A class denition,
i.e., an element of Classdef(T ), is a tuple consisting of a class name, a direct
super class name and a class loader. As we have noted, if we assume c is a
class reference, then the components of c's denition are simply written inside a
judgement as c:name, c:super and c:cl, respectively. The two other components
of c, the method dispatch table, c:mt, and the implemented interfaces of the
class, c:iface, will be discussed later.
2.2.2 Class Loaders
Each Java object instantiating java.lang.ClassLoader has a private member
declared as a hashtable which maps names of classes to class objects themselves.
We model this mapping with E:W as follows.
Denition 2.1 (Subtyping)
Denition 2.2 (Widening)
cl
Denition 2.3 (Normal Value Typing)
Denition 2.4 (Well-Formedness)
Figure
3: Predicates related to classes
The class name resolution n l (= E:W(l)(n)) results in Some c if name n is
associated with a certain class c by class loader l, and None if not.
We also say that n is resolved into c by l.
In the above sense, each class loader has its own name space. Therefore, a
class loader is also called a context. Each class c has its dening class loader
c:cl, whereby every class name resolution related to c is done. In this sense, c:cl
is the context of c. Throughout this paper, we use the word context and class
loader as synonyms.
As
Figure
shows, each value
uniquely refers to a class, and thus to a class loader. We say that value v is
created in context v:class:cl or method m is executed in context m:cc:cl.
2.2.3 Subtyping
Subtyping between classes is dened in Figure 3. In our model, predicate sub
represents the relation.
For class c, if its parent class name, c:super, is resolved into c 0 in context
c:cl, then c is a direct subtype of c 0 . The (indirect) subtyping is dened
as the re
exive and transitive closure of the direct subtyping.
2.2.4 Well-Formedness of Classes
An actual class loader of the JVM resolve classes in one of the following two
ways.
When a class loader, l, denes class c whose name is n, it sets c:cl to l,
and resolves n into c.
When a class loader, l, delegates class loading to another class loader, it
resolves n into c whose c:cl is not equal to l.
Actual class loaders therefore satisfy the following statement, the well-formedness
of classes dened in Figure 1, since each class should be once dened by the
above rst process which sets n l to that class.
For each c in E:T class , c:name is resolved into c itself in context c:cl.
Note that this statement immediately implies the class identity property described
in Section 2.1.3.
2.2.5 Object Values and States
If a JVM state, : State(E:T ), is not an error state, exception , it has the following
components.
:method denotes the method that the JVM is now processing.
:pc, the program counter, points to a certain instruction of :method:I.
Both :lvars and :stack are represented by a list of values which stores
contents of the local variables and the local stack, respectively.
While both the state and the environment represent internal data structures of
the JVM, their transitions, ! and , respectively, are distinct and independent
In our model, all values found inside :lvars or :stack are objects. An object
value, v, is a record with just one member, v:class, which represents a type (class)
of that value 1 . In practice, a Java object value may also have a reference to an
array of values, i.e., its eld values. There is no di-culty to extend our model to
represent elds, since it already has a value heap. We can just add v:eld, which
represents a list of value references. Our model omits elds of Java, because the
get=puteld instructions are in large part similar to invokevirtual and also easier
to handle.
2.2.6 Predicates with Respect to Subtyping and Value Typing
We introduce two predicates with respect to subtyping, one of which represents
the widening conversion, and the other represents the typing judgment of values.
Their formalizations are given in Figure 3.
The widening conversion denes a quasi-order 2 between class names. The
order between two names is dened only if both of them are resolved in cl ,
except for the case that two names are equal. The ordering itself follows
the subclass relation between the classes into which names are resolved.
The normal value typing is a typing judgment which denotes that the type
of value v is a subtype of n in context cl . It is also dened only if n is
already resolved in cl .
Note that both predicates depend on a context, cl .
2.3 Methods
This section simply describes the specication of invokevirtual [9], while the practical
method invocation of the JVM involves further di-culties and problems.
Such remaining problems will be explained in Section 3.
2.3.1 Instructions
As dened in Figure 1, our JVM model includes only two instructions, invokevirtual
and areturn. Instruction invokevirtual is the most fundamental but the most non-
1 Java has a null value which represents an uninitialized object value. We do not model this
for simplicity.
2 According to our denition, both sub and cl
satisfy transitivity but not anti-symmetry.
Of course, they should do so in practice.
trivial instruction of the JVM. It is used with three arguments as follows.
The above judgement means that the current JVM state, , is now about to
process invokevirtual, which calls a certain instance method. String name is the
name of the method, and string list desc is the descriptor representing the type
signature of the method, where
are argument types (i.e., class names) of the method, and desc[0] is its return
value type. String classname is the name of the symbolically referenced class [9].
Instruction areturn is one of the return instructions of the JVM, which returns
to the caller, holding one object value as a return value.
2.3.2 Method Invocation
Execution of invokevirtual consists of the following three processes.
The method resolution looks up a method according to the three arguments
of invokevirtual. The method found by this process is called the
symbolically referenced method and denoted by SRmethod.
The method selection looks up a method accessible from the class of the
object on top of the local stack.
The method invocation calls the method found by the method selection
process.
Assume that invokevirtual is inside a method whose class is c and whose
context is l(= c:cl). The method resolution process is modeled as follows.
We just look up the method, SRmethod, which is already registered to the en-
vironment, E:R (cf. Section 2.1), by the given name and descriptor, name
and desc. SRclass, the symbolically referenced class, is the result of resolving
classname by l. Remember that (name; desc; SRclass) l abbreviates E:R(l)(name; desc; SRclass).
Its further denition is given at the end of this section and also in Section 3.
Practically, the JVM remembers whether a symbolic reference is already re-solved
or not, so that it never resolves the same reference twice 3 . Our model
re
ects this behavior.
The method selection process will be dened in Section 3. In this section,
we just assume that the process results in a unique value, denoted by method.
When we write a judgement having E and in its left hand side, we assume
that SRmethod and method inside the right hand side denote the symbolically
referenced method and the selected method, respectively.
' SRmethod method
Although the dention of the value, method , is left blank until we discuss
the implementation later, we can examine the soundness of the JVM, i.e., the
3 This is implemented by the internal class representation which attaches a
ag to each
resolvable constant pool entry, which remembers whether the entry is resolved or not. It also
remembers the result of resolution.
correctness of the new loading constraint scheme (cf. Section 2.4), provided that
we assume Lemmas 1, 1' and 1", described in Sections 2.3.4 and 2.3.5.
The value, SRmethod , is identical to the value of the same variable, appearing
in the predicate, invv OK (E; ), dened below.
Denition 2.5 (invokevirtual OK)
where Some
The predicate means that the JVM is now about to execute invokevirtual, and
the method resolution has normally succeeded, i.e., it has not raised exceptions.
Unlike what is specied about the method resolution, once invv OK (E; ) holds,
i.e., the method resolution succeeds, our JVM model ensures that the method
selection and method invocation will also succeed. This is because method is
always dened in such a case.
Formally, name, desc, classname, SRclass, SRmethod and objectref in the
denition of invv OK (E; ) should be existentially quantied. However, in a
situation where we assume invv OK (E; ), we refer to them in a judgement as
if they leak out from the denition of invv OK (E; ), i.e., those variables are
assumed to satisfy invv OK (E; ).
2.3.3 Operational Semantics
Denition 2.6 describes the state transition semantics of invokevirtual, which is
dened in accordance with the specication of the three processes of invokevir-
tual. (Note that @ denotes list concatenation.)
It rst tries to resolve its method. If it cannot be resolved,
If the method can be resolved, it generates a new state, in , whose method
is the selected one. Its local variables should store the proper value of
values of arguments, each of which is originally
stored in :stack.
There must be a transition of the form
where out should point at instruction areturn, the return instruction of
the JVM. This means that the invoked method can be safely executed.
At last, out :stack has a return value on its top, which will be pushed onto
denotes the transitive closure of the one-step transition, !.
Transitions for other instructions are not given in this paper (See [15] or [7]).
exception .
Denition 2.6 (Operational Semantics of invokevirtual)
exeception
in :lvars = [objectref ]@arg ;
in
in
out :method:I[ out
[retval
[retval]@rest ;
State <
references to
method :cc
Figure
4: Subtyping relations related to invokevirtual
2.3.4 Invocation Correctness
With respect to SRmethod and method , the following lemma should be con-
sidered, while the lemma itself will be examined in Section 3. In the following
several sections, the lemma will be assumed to hold.
(Correctness of invokevirtual)
Let E be an environment, and be a JVM state.
method overrides SRmethod
In other words, Lemma 1 (Correctness of invokevirtual) guarantees that if
the JVM is about to execute invokevirtual, then
if the JVM is in a safe state, a method whose dening class is a superclass
of the class of objectref should be invoked.
Furthermore, if the JVM is in a safe state, then a method that overrides
the symbolically referenced method should be invoked.
The second premise of the lemma, the subtyping relation between objectref :class
and SRclass , is a property that should be invariantly satised by a safe execution
of the JVM (cf. Section 2.4.1). The rst consequence of the lemma, the
subtyping relation between objectref :class and method :cc, is derived from the
method selection algorithm described in Section 3.1.1. See Figure 4.
The method overriding relation, overrides, has not yet been dened for certain
reasons. One reason is that although the term, override, is often used in the
specication of the JVM, its accurate meaning is not dened, so that its interpretation
depends on implementations. In this paper, we will dene predicate
overrides in Section 3.1.1 in conjunction with predicate select.
The predicate, overrides, should satisfy the following conditions.
LEMMA 1'
The implementation of Sun's JDK1.2 is inconsistent with Lemma 1' and exposes
a new
aw, as we will explain in Section 3. The relation, , between
environments is dened in Section 2.5.3.
2.3.5 Well-Formedness of Methods
The following well-formedness property of methods is required mainly for the
discussions in Section 4.
selects SRmethod
Predicate selects models the method selection process and its denition can be
found in Denition 3.1 in Section 3.1.1.
class RT {
    ...
    RR rr = new RR();
    ...
}
Figure
5: Saraswat's bug code
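Since only a fragment of Figure 5 survives in this copy, the following is a hedged reconstruction (ours, assembled from the description in Section 2.4.1 rather than copied from Saraswat's report): class RT runs in context l1, obtains an R value from an RR object whose class is loaded by delegation in context l2, and then invokes speakUp() on it.

// Context l1 defines a class named "R"; context l2 defines an unrelated
// class with the same name.
class R {
    public void speakUp() { }
}

// Loaded by delegation, so RR is defined in context l2 and getR() returns
// an instance of l2's class "R".
class RR {
    public R getR() { return new R(); }
}

// Runs in context l1.  The compiled call r.speakUp() becomes
//   invokevirtual R.speakUp()V
// where "R" is resolved in l1, while the receiver object was created
// from the class "R" of context l2.
class RT {
    public void run() {
        RR rr = new RR();
        R r = rr.getR();
        r.speakUp();     // type-spoofing in JDK1.1 when R^l1 != R^l2
    }
}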
With predicate selects, the equivalence of descriptors should also hold.
LEMMA 1"
2.4 Bridge Safety
Bridge safety of the JVM is a notion originally introduced by Saraswat in his
report on type-spoong in JDK1.1 [13]. He insisted that applet loaders never
suers from his bug because they never break this property. Sheng and Bracha
have devised a x of Saraswat's bug, implemented in JDK1.2 [8][9], which forces
the JVM to check the bridge safety at runtime.
2.4.1 Type-Spoofing in JDK1.1
See the source code in Figure 5, which revealed the bug of type-spoong in
JDK1.1. The code itself has nothing suspicious, but if there exist two contexts
l 1 and l 2 (the code itself runs under l 1 ) and delegation of class loading
RR
is dened, the invocation of r.speakup() would result in a serious violation of
the type system of the JVM.
Expression r.speakUp() is complied into the following instruction of the
JVM.
The above ()V is dierent from our notation of method descriptors. It represents
a method which takes no argument and returns nothing.
With respect to this invokevirtual, its objectref , the value on top of the
local stack, is equal to the value of r, which comes from context l 2 via the
invocation of rr.getR().
l 2
The symbolically referenced class, SRclass, is the class into which the
current context l 1 resolves R (the third string of invokevirtual).
l 1
The code causes a problem when R
l 1 is dierent from R
l 2 . For any invokevirtual
to be correctly executed, the subtyping relation
Denition 2.7 (Loading Constraints)
(method overriding)
(method resolution)
cl m:desc[i] m:cc:cl
(re
exivity)
cl n
cl
cl n
cl
cl 00
cl n
cl 00
cl n
cl 0
cl 0 n
cl
is absolutely necessary (of course, objectref :class = SRclass su-ces). Recall
that this subtyping relation is the second premise of Lemma 1. In other words,
method in Section 2.3 is completely unrelated to SRmethod .
More accurately, the bug results in the incompatibility of the dispatch tables
of two classes, by applying a method index obtained from SRclass onto
a completely unrelated method table of objectref :class. JDK1.1 would thus
either invoke method with argument values of incompatible types or just core-
dump. The method dispatch table, which is excluded from Sun's specication
and therefore from our model, is an implementation technique of the method
selection algorithm (cf. Section 3.1.3).
2.4.2 Loading Constraints
Go back to the example of Section 2.4.1. Suppose that the JVM has already
noticed at the invocation of rr.getR() that the method brought a value of type
R from context l 2 to l 1 . In this case we can check beforehand if this
ow of the
value is acceptable or not. We may simply try to check it as follows.
l 2
But this attempt should fail, since before the method resolution of speakUp(),
R
l 1 cannot be evaluated, i.e., under the environment, E ' R
we must consider an alternative, i.e., the loading constraint scheme. In this case,
the new scheme introduces the following loading constraint.
R
If the JVM remembers the above constraint, R
l 1 is not allowed to be resolved
into a dierent class from R
l 2 in the method resolution process.
How the JVM can notice the possibility of a value
ow is not easy to un-
derstand. It is related to Lemma 3 (Existence of Constraints) described later in
this section. Here, we only give rules to introduce the loading constraints.
For any method m, which overrides m 0 , the following constraint is introduced
with respect to each class name n appearing in their descriptor.
For any method resolution at which nds SRmethod , the following constraint
is introduced (for each class name n appearing in the descriptor of
SRmethod ).
SRmethod :cl
Relation n
is dened to be transitive, re
exive and symmetric, so that it is an
equivalence relation. It should also be the minimal relation among all satisfying
the above conditions. Denition 2.7 formally describes the predicate by
inductive denition.
As shown later in Section 2.5, the relation is -invariant, i.e., relation
between the environments subsumes n
. This enables the JVM to incrementally
construct the relation, n
, as it resolves a method reference of invokevirtual, or
it links a class whose method overrides another method. This is the reason
why the loading constraint scheme is light-weight, and why it has been actually
adopted among many other solutions. The history of Saraswat's problem and
its solutions are described in [8][13].
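The incremental nature of the scheme lends itself to a very small implementation. The following sketch (ours, not Sun's data structure; all names are hypothetical) keeps the equivalence classes of (class name, loader) pairs generated by the two rules above and rejects a class resolution that would put two constrained loaders in disagreement.

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// A minimal loading-constraint table.  Loaders and classes are identified
// by opaque keys; a union-find structure keeps the equivalence classes
// N^L1 = N^L2 generated during method resolution and overriding checks.
final class ConstraintTable {
    record Key(String className, Object loader) {}

    private final Map<Key, Key> parent = new HashMap<>();
    private final Map<Key, Object> resolved = new HashMap<>(); // class actually loaded

    private Key find(Key k) {
        Key p = parent.getOrDefault(k, k);
        return p.equals(k) ? k : find(p);
    }

    // Record the constraint  name^l1 = name^l2.  Fails (returns false)
    // if both sides are already resolved to different classes.
    boolean addConstraint(String name, Object l1, Object l2) {
        Key a = find(new Key(name, l1)), b = find(new Key(name, l2));
        Object ca = resolved.get(a), cb = resolved.get(b);
        if (ca != null && cb != null && !Objects.equals(ca, cb)) return false;
        parent.put(a, b);
        if (ca != null) resolved.put(b, ca);
        return true;
    }

    // Record that loader l resolved 'name' to class c.  Fails if the
    // equivalence class of (name, l) is already bound to a different class.
    boolean recordResolution(String name, Object l, Object c) {
        Key root = find(new Key(name, l));
        Object bound = resolved.get(root);
        if (bound != null && !Objects.equals(bound, c)) return false;
        resolved.put(root, c);
        return true;
    }
}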
2.4.3 Bridge Safety Predicate
Denition 2.8 denes predicate bridge safe(E) as follows.
For environment E, any class loader in E has made no resolutions of
classes which contradict with loading constraints existing inside E.
This predicate forbids any environment modication, i.e., class loading and
method linking, which violates constraints. It is the same as what JDK1.2 is
doing.
2.4.4 Augmented Value Typing
The augmented value typing,
cl n, is dened as follows.
There is another context cl 0 , in which v has type n in the normal sense,
cl 0 n, and there also exists the following constraint.
cl n
cl 0
The intent of the denition may not obvious. Roughly, it means that value v
itself has been created, i.e., instantiated from n, in context cl 0 , and it has been
transfered to the current context, cl.
This predicate was called dynamic conformity in our previous paper [16]. As
we noted in the introduction, it is crucial for proving the type soundness of the
loading constraint scheme, and its denition is one of the contributions of this
study.
With respect to this predicate, the following lemma is important.
cl -invariance of Augmented Typing)
Denition 2.8 (Bridge Safety)
bridge safe(E) def
cl
cl n
cl cl cl 0
cl 0
Denition 2.9 (Augmented Value Typing)
cl
cl n
cl cl 0 n
list
cl (ns : String list)
cl ns[i]
Let E be an environment, n and n 0 be class names, and cl be any class loader,
bridge cl n 0
cl
cl
Proof
The following fact is proved by examining Def.2.3 Def.2.9 and Def.2.8.
bridge
cl n =)
(n cl 6= None =) cl n)
Assume the rst line of the lemma. From Def.2.2 Def.2.3 and Def.2.1,
we have
cl n =) v :: cl
By applying E ' cl n
cl to Def.2.9, the lemma is proved.
The lemma states that the augmented typing is invariantly satised against the
widening conversion described in Section 2.2.6. This is an important statement
which relates runtime typing to static typing, i.e., the bytecode verication.
The proof of the lemma requires that the denition of the widening conversion,
cl n 0 , force additional loadings of n cl and n 0cl in case of n 6= n 0 .
We have found two inconsistencies between our denition of cl and the
bytecode verication of JDK1.2. By exploiting each of these, we can still entirely
escape additional checks newly imposed. See Section 3 or [16] for detail.
Note that it is relatively easy to show that the augmented typing is also
-invariant, which will be used later in the soundness proof.
2.4.5 Constraint Existence
Lemma 3 (Existence of Constraints) states that if the JVM is in a safe state
and about to execute invokevirtual, there already exist loading constraints for
each class name appearing in the descriptor of invokevirtual between the current
context and the context of the invoked method.
We now assume Lemma 1 (Correctness of invokevirtual). Note that E '
cl
desc[]
cl 0 abbreviates cl
desc[i]
cl 0 .
Denition 2.10 (Verication Rules)
(rule of
(rule of invokevirtual)6 6 6 6 6 6 4
(i):stack[0] m:cl cname
(rule of areturn)4
LEMMA 3 (Existence of Constraints)
Let E be an environment, and be a JVM state.
method:desc[] method :cl
Assume the rst line of the lemma. From Def.2.7(method overriding
and symmetry) and Lemma 1, we have
' SRmethod :cl method:desc[] method :cl:
From Def.2.7(method rosolution) and Def.2.5, we have
Applying Lemma 1' and Def.2.7(transitivity) yields the lemma.
2.5 Soundness
This section follows the framework of the soundness proof by Stata and Abadi [15].
One dierence is in the introduction of environments. Another is in the treatment
of invokevirtual, the instruction that does not exist in their model.
2.5.1 Bytecode Verification
One uniqueness underlying the language design of Java is found in its bytecode
verication. The idea is to guarantee the runtime well-typedness by the static
verication, which allows minimum type checks at runtime.
See Denition 2.10. Type VerifyRecord represents an imaginary store for the
static class name information about local variables and local stacks. A map, ,
stores an element of VerifyRecord for each instruction of :method. The bytecode
verication is a problem of nding such that is consistent with the veried
method, m.
Denition 2.11 (k-transition)
exeception
out
kn
Denition 2.12 (well-typedness)
veries :method
:cl (:pc):lvars
:cl (:pc):stack
It should be veried that contents of satisfy all m:cl relations imposed
by the verication rules.
In our model, the consistency of is represented by predicate veries. Predicate
veried(E) states that each method in E has already been veried.
The well-typedness predicate, E ' wt(), means that state is safe inside
environment E.
Both :lvars and :stack should have their values typed by the class names
recorded in which veries :method.
State exception representing an error state is always well-typed.
Note that :lvars ::
:cl (:pc):lvars abbreviates 8i
:cl
(:pc):lvars[i].
The interesting work by Yelland [17] implements the bytecode verier based
on the type inference of Haskell.
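To make the flavour of the verification rules concrete, the following toy sketch (ours; it covers only the two instructions of the model, ignores local variables and contexts, and reduces the widening check to plain name equality) checks a single invokevirtual or areturn step against a stack of class names in the style of a VerifyRecord.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// A toy abstract interpreter in the spirit of the verification rules:
// the "types" tracked for stack entries are just class names.
final class ToyVerifier {
    // invokevirtual name desc classname: pops the argument types
    // desc[1..n-1] and the receiver (classname), pushes desc[0].
    static boolean stepInvokevirtual(Deque<String> stack, String classname,
                                     List<String> desc) {
        for (int i = desc.size() - 1; i >= 1; i--) {
            if (stack.isEmpty() || !stack.pop().equals(desc.get(i))) return false;
        }
        if (stack.isEmpty() || !stack.pop().equals(classname)) return false;
        stack.push(desc.get(0));   // return type
        return true;
    }

    // areturn: the value on top must match the return type recorded in
    // the current method's descriptor.
    static boolean stepAreturn(Deque<String> stack, List<String> methodDesc) {
        return !stack.isEmpty() && stack.peek().equals(methodDesc.get(0));
    }

    public static void main(String[] args) {
        Deque<String> stack = new ArrayDeque<>(List.of("A", "Receiver"));
        // invokevirtual m (A)B on class Receiver: pops A and Receiver, pushes B.
        System.out.println(stepInvokevirtual(stack, "Receiver", List.of("B", "A")));
    }
}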
2.5.2 Soundness Theorem
Here is our main theorem.
THEOREM (Soundness)
Let E and E′ be environments, and let σ and σ′ be states.
This theorem states that any execution step of the JVM preserves the well-typedness
invariant. While the intuitive meaning of the invariant
is not so trivial, it is sufficient to guarantee the correctness of invokevirtual: if
the invariant holds, then we have the second premise of Lemma 1. Any instruction that is not modeled
in this paper also preserves this invariant, whereby the correct behavior of the
instruction should be guaranteed.
The theorem can be divided into two lemmas in the following sections.
Definition 2.13 (Sub-Environment relation)
2.5.3 Soundness of Environment Updates
Definition 2.13 describes how environments can be updated as the JVM dynamically
links objects, classes, etc. It defines the sub-environment relation
between two environments.
Following is the first lemma needed to prove the main soundness theorem.
This lemma states that any modification of environments preserves the well-typedness,
and its proof is as follows. Remember that we are assuming Lemmas
1, 1' and 1".
We first assume the premise of the lemma, which guarantees that any
components of E are compatible with those of E′. Therefore, from Def.2.13
and Def.2.1, the subtyping relation is preserved, and from Def.2.13 and
Def.2.3, so is the typing of values by class names.
We say that the relations sub and :: are invariant under environment
extension. Furthermore, Lemma 1' and Def.2.13 guarantee two further
facts, respectively. They imply that the loading constraint relation is
also invariant under environment extension, because it is minimal among
those that satisfy Definition 2.7. Therefore, the augmented typing is also
invariant under environment extension, because it is defined in Def.2.9
in terms of these relations.
From Def.2.12, the lemma is proved.
2.5.4 Soundness of State Transitions
Following is the second lemma for the soundness theorem.
This lemma states that every state transition under a fixed environment, E,
preserves well-typedness.
Before examining the lemma, we redefine state transitions as described in
Definition 2.11, which adds a depth k to each transition. Obviously, the original
transition relation is the union of the k-transitions over all k.
Note that a 0-transition denotes a state transition to exception or a transition by an instruction
other than invokevirtual, which we do not define in this paper.
The proof of the lemma is as follows:
We first note that the augmented typing is invariant under the addition
of loading constraints; this can easily be proved by examining Def.2.9 and
Def.2.7 (transitivity). By the transitivity of the transition relation, the
following fact is a sufficient condition of the lemma: every k-transition
starting from a well-typed state ends in a well-typed state.
This is proved by induction on k. The base case of the induction concerns
the 0-transitions; its proof is omitted here. See [15] or [7] for details.
The remaining subgoal is to show the case of k+1 from the induction
hypothesis.
Assume the first line of [iii]. From the only state transition rule that
defines a transition of depth k+1, the transition goes through the states
in and out, where in and out are defined as in Def.2.6. From the induction
hypothesis, we have [iv]. Since well-typedness holds here, there exists a
verification record map that satisfies the typing of the current local
variables and stack against the class names recorded at the current pc.
Applying Def.2.10 (rule of invokevirtual) and Lemma 2, we obtain the typing
of the stack prefix stack[0, .., n-1], where x[0, .., n-1] denotes a sublist,
[x[0], .., x[n-1]], and n is the length of desc. Assuming invv_OK(E, ·),
Lemma 1 can be used here. It implies the typing of the receiver against the
selected method's class, together with wf_class(E). On the other hand,
Lemmas 1, 1' and 1" imply the equivalence of the descriptors, together with
wf_method(E). We can then use Lemma 3 and [i] to obtain the typing of the
arguments against the selected method's desc[1, .., n-1] in the selected
method's context.
Therefore, Lemma 2 and Def.2.10 (rule of invokevirtual) imply the typing of
the entry local variables against the class names recorded at pc 0, for some
verification record map that verifies the selected method (= in.method).
Finally, Def.2.6 implies in.lvars = stack[0, .., n-1], and therefore in is
well-typed.
From the induction hypothesis, [iv], we have that out is well-typed.
Def.2.10 (rule of areturn) and Lemma 2 lead to the typing of the return
value against the selected method's desc[0]. Similarly as above, we can use
Lemma 3 and [i] to transfer this typing back to the caller's context.
Finally, Def.2.10 (rule of invokevirtual), Lemma 2, Def.2.6 and Def.2.12
imply that the resulting state is well-typed.
Definition 3.1 (Predicate selects)
3 Analysis of Implementations
Another main topic of this paper is the analysis of Sun's JVM implementation.
The latter half of this section describes several flaws that we have found in
JDK1.2 with respect to Saraswat's bug and the loading constraint scheme. Before
describing the flaws, let us examine Lemma 1 (Correctness of invokevirtual).
3.1 Lemma 1 (Correctness of invokevirtual)
3.1.1 Method Selection
In Section 2.3, we defined the method invocation processes only partially. In this
section, Definition 3.1 formalizes the recursive procedure employed in method
resolution and selection. We define it as predicate selects.
A pair of a key and a class, (key, c), selects method m whose key is equal
to key, if m is found by a lookup over the subtype tree from c to its
superclasses.
In the definition, key denotes the pair (name, desc), and the key of m, m.key,
denotes the pair (m.name, m.desc). If a key with descriptor desc, together with
a class c, selects a method m, then m is found in c or one of its superclasses
and m.desc = desc.
The latter condition is what Lemma 1" requires. The predicate is also invariant
under environment extension from its definition.
In the definition of selects, the selection process terminates once a method,
m, is found inside c, no matter whether another method, m′, can be found
inside some superclass of c or not. However, in such a case, i.e., if m makes m′
invisible from c, we say that m overrides m′.
For any class c and method m such that (m.key, c) selects m, if a direct
superclass of c selects a different method, m′, with its key the same as
m.key, we define that m overrides- m′.
Predicate overrides is then defined as the reflexive and transitive closure of predicate
overrides-.
If we faithfully follow Sun's specification, SRmethod and the selected method are simply
defined as follows.
The method resolution process will find SRmethod, which satisfies the following:
(SRmethod.key, SRclass) selects SRmethod.
The method selection process will find the selected method, which satisfies the following:
(SRmethod.key, objectref.class) selects the selected method.
As we see below, the above two definitions derive Lemma 1 (Correctness of
invokevirtual) by simply applying the premise of the lemma, objectref.class sub SRclass,
to the definition of overrides.
By the definitions and Lemma 1", desc equals the descriptor of the selected method
no matter whether a method overriding exists or not. This is because the
specification explicitly uses desc to select a method. However, the implementation
differs from the specification in that it employs the method dispatch table
of a class to select a method.
As long as the above descriptor equivalence holds, the JVM never falls into
an error state or coredumps, even though the state may be badly-typed. This is
the reason why it is difficult for the specification to explain the type-spoofing
problem. Figure 6 describes the problem graphically.
3.1.2 Proof of Lemma 1
In this paper, we only show Lemma 1 with respect to the above specication,
though it must be and can be proved for existing implementations.
We rst show the existence of method , since in our model the method
selection process should always succeed. Assume invv OK (E; ),
which implies E; ' (SRmethod :key; SRclass) selects SRmethod . Also
assume
from which Denition 3.1 implies the existence of method that sat-
ises
As we already noted, relation select holds between objectref :class
and method :cc, so we obtain the rst consequence of the lemma.
The second consequence is the following.
method overrides SRmethod
This can be proved by induction on sub , because overrides is the reexive
and transitive closure of overrides - . If objectref :class sub SRclass
holds, then either method = SRmethod or method overrides - SRmethod .
3.1.3 Method Dispatch Table
A method dispatch table is a list of methods which satisfies the following conditions.
A method is selected by a class, c, iff the method is inside the method
dispatch table of the class, c.mt.
An overriding method has the same index as the overridden one, i.e.,
c.mt[i] has the same key as c′.mt[i] for any class c, any superclass c′ of c,
and any index i less than the length of c′.mt.
The JVM can incrementally build such a table structure for each class that
satisfies the above conditions, by referring to the table of its direct superclass,
which has already been built. Here, suppose that we already have a method
dispatch table for each class. Since SRclass.mt is a collection of methods selected
by SRclass, the method resolution process of JDK1.2 searches inside SRclass.mt.
In the method resolution, therefore, this difference between the specification and
the implementation by method dispatch tables cannot be seen from outside.
On the other hand, as to the method selection process, the implementation
remarkably differs from the specification. It is an O(1)-time procedure rather
than the recursive procedure represented by predicate selects. The method selection
process is as follows.
There is an index i which satisfies SRclass.mt[i] = SRmethod,
because SRclass selects SRmethod.
The selected method is then objectref.class.mt[i].
The above selection process is also sound, as it also satisfies Lemma 1 (Correctness
of invokevirtual). For the proof of the lemma, the sub relation between
objectref.class and SRclass should necessarily be used.
In the implementation, the equivalence of the descriptors, desc = the selected method's desc,
and even the existence of the selected method, depend on this sub relation. Without such
a relation, index i of SRclass.mt has no meaning inside the dispatch table of the
receiver's class. By exploiting the inconsistency between desc and the selected method's
descriptor, one can falsify an integer value as an object value, and vice versa.
If the selected method does not exist, the JVM coredumps [13].
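A minimal sketch of this table-based selection may help; the Java rendering below is ours (mt mirrors the model's dispatch tables, and the class and method names are hypothetical).

// Hypothetical rendering of per-class dispatch tables and index-based selection.
class MethodInfo { String name; String desc; }

class ClassInfo {
    ClassInfo superClass;
    MethodInfo[] mt;   // dispatch table; indices are inherited from superClass.mt
}

class Dispatch {
    // Resolution: search SRclass.mt for the resolved method and remember its index i.
    static int resolveIndex(ClassInfo srClass, String name, String desc) {
        for (int i = 0; i < srClass.mt.length; i++)
            if (srClass.mt[i].name.equals(name) && srClass.mt[i].desc.equals(desc))
                return i;
        throw new NoSuchMethodError(name + desc);
    }

    // Selection: O(1) lookup of the same index in the receiver's table.
    // Soundness relies on objectref.class being a subtype of SRclass; otherwise
    // index i may point at a method with a different descriptor, or past the end
    // of the table (type confusion or a crash, as described above).
    static MethodInfo select(ClassInfo receiverClass, int i) {
        return receiverClass.mt[i];
    }
}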
3.2 Bytecode Verier and Loading Constraints
3.2.1 Problem with Respect to the Widening Conversion
Now we go back to Saraswat's bug code (Figure 5) in Section 2.4. Suppose that
the modification described below is applied to the original bug code.
(Original code vs. modified code: the only change is that RR.getR returns new R'() instead of new R().)
Figure 6: Type spoofing chart. The chart traces the execution of r.speakUp() when R has
not yet been loaded by L1 and is therefore loaded inside method resolution. If R has
already been loaded by L2, the constraint between R in L1 and R in L2 is checked and the
violation is detected; if R has not been loaded by L2, the constraint is never checked and
execution continues even if the selected method is unexpected. It is possible that the
selected method's descriptor differs from desc, or, in a worse case, the selected method
may not exist: type confusion or coredump.
class RR {
  public R getR() {
    return new R'(); // originally ``new R()''
  }
}
Assume that L2 loads class RR, and L1 loads class RT, which invokes rr.getR()
and also r.speakUp(). It is also assumed that L1 resolves R differently from L2.
The method invocation, r = rr.getR(), which calls a method inside class RR,
returns a value not of type R but of type R'. Inside context L2, R' is a subtype
of R.
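The invoking class could look roughly as follows; this is a hypothetical sketch containing only the two calls mentioned above.

// Hypothetical sketch of class RT, loaded by L1.
class RT {
    void run(RR rr) {
        R r = rr.getR();   // RR (defined by L2) actually returns an R' value
        r.speakUp();       // this is the call at which R gets resolved by L1
    }
}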
Recall that to check this widening conversion, R should already have been
resolved by L2. Therefore, the constraint between R in L1 and R in L2
will be checked at the invocation of r.speakUp(), when R is resolved by L1.
Assume, conversely, that R has not been resolved by L2 yet. Even though
R has been resolved by L1, and though the above constraint indeed exists, the constraint
will never be checked. Figure 6 describes what happens in such a case.
In fact, JDK1.2 sometimes does not resolve R by L2, although its resolution is a
role of the bytecode verifier.
3.2.2 Two Inconsistencies
We have found two inconsistencies in the bytecode verifier of JDK1.2 against
our model, each of which still enables the type spoofing. These inconsistencies
are as follows.
Some widening conversions from a name n to a name n′ are not correctly checked,
and n′ is not resolved.
System classes are not verified at runtime.
These bugs are briefly explained in [16] with example codes.
Let us emphasize the significance of our work. The problem is concerned
with the augmented typing, which is an alternative to the naturally defined
typing. Since the bytecode verification seemed unrelated to Saraswat's bug,
the designer of JDK1.2 did not modify the bytecode verification of JDK1.1.
However, the JVM requires Lemma 2 (the invariance of the augmented typing) as well, which relates
the well-typedness invariant to the bytecode verification based on the widening
conversion. It is our model that makes all of these points clear and visible.
3.3 Interfaces and Loading Constraints
In addition to the above problem with its bytecode verifier, one more flaw inside
the JDK1.2 implementation was found during the analysis of the invokeinterface
instruction, which has been excluded from our model so far.
3.3.1 The invokeinterface Instruction
In order to discuss the problem of invokeinterface, we extend our model to be
able to deal with the interfaces of Java. The only thing to do is to allow a class
to have multiple parent classes. Throughout this section, a class, c, has its list
of the names of implemented interfaces, c.iface.
We redefine the subtyping relation, c sub c′, i.e., c is a direct subtype of c′,
to hold iff c′ is the direct superclass of c or the name of c′ appears in c.iface.
We introduce a new predicate, is_class(c), which is true iff c is a pure class,
i.e., c is not an interface.
Though any class has no more than a single direct supertype in the previous
sections, this fact was not used throughout our soundness proof, which
thus requires no further changes. The invokeinterface instruction, which similarly
resolves and selects a method, will have exactly the same semantics as
invokevirtual. The only modification to be considered concerns predicate selects.
If a pure class c itself does not declare a method with the required key, the
predicate should select a method not inside the implemented interfaces of c, but
inside some pure superclass of c. Therefore, we should redefine the second rule
of Definition 3.1 so that it carries the side condition is_class(c) ⟹ is_class(c′).
3.3.2 Problem with Respect to Constraint Existence
A question may be raised about the code in Figure 7. Is D.getR overriding
I.getR, or not? If we accept our definition of overrides, the answer is yes,
even though there is no subtyping relation between the declaring classes of the two
methods. Since class C selects D.getR for the key representing R getR() and
one of its direct supertypes, I, also selects I.getR, we conclude that D.getR
overrides I.getR from the definition of overrides.
In fact, the following code invokes a method inside class D safely.
interface I {
  R getR();
}
class D {
  R getR() { ... }
}
class C extends D implements I {}
Figure 7: Problem of invokeinterface
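A hypothetical call site that exercises this overriding relation is sketched below.

// Our own usage sketch: invokeinterface I.getR on a C instance selects D.getR.
class Use {
    R demo() {
        I i = new C();    // C implements I and inherits getR from D
        return i.getR();  // dispatches to the method declared in D
    }
}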
Since JDK1.2 fails to recognize such complex method overriding relations,
the code in Figure 7 brings yet another problem with the loading constraint
scheme. The overriding relation between D.getR and I.getR is not recognized,
so that there may be no constraint between the loaders that define D and I with
respect to the name of the return type, R.
4 The findClass Scheme
4.1 Formalization of the findClass Scheme
There is also one more new feature in Java2, i.e., the implementation of class
loading by findClass. Java2 recommends implementing class loaders by
findClass rather than the old loadClass method, though loadClass is also
accepted for backward compatibility.
The findClass scheme defines a tree structure among class loaders. The
delegation of class loadings should follow this tree structure.
In the old version of Java, applet loaders are implemented in a manner similar
to the findClass scheme. It is also known that applets never cause
Saraswat's problem (though it has never been proved completely). This leads
to the following question: can the findClass scheme replace the loading constraint
scheme?
Our last theorem, Theorem (Trusted Environments), which will be proved
in Section 4.2, gives a negative answer: even if we follow the findClass scheme,
class loadings may violate constraints unless no delegations are allowed other
than those to system loaders.
As the above theorem states, and also as Saraswat has correctly mentioned,
applet loaders are safe since they only delegate to system loaders. However, as
[8] describes in its first half, class loaders have recently been used for an increasing
variety of applications. Consider an applet loader which delegates to another applet
loader. Although such loaders seem safe at first glance, the theorem correctly
states that they may violate constraints.
4.1.1 Parent Loaders
The following denitions incorporate the findClass scheme into our model.
Definition 4.1 (Parent Loader)
P gives the direct parent loader of each loader; if l does not have a parent
loader, this is recorded in P as well. The inductively defined predicate, l ⊑P l′,
denotes that l is one of the (indirect) parents of l′.
Definition 4.2 (Correct Delegation to Parent Loaders)
The definition formalizes the following delegation strategy of Java2. If a loader,
l, has a direct parent loader, l′, any class loading by l is first delegated to l′.
The class is loaded by l itself only if l′ cannot resolve the class name; otherwise,
l returns the same class as l′ returns.
In the above definition, the ⟹ direction expresses that there is no loading delegation
that does not follow the parent loader relation, and the ⟸ direction expresses that two
loaders in the parent loader relation correctly delegate class loadings according to the above strategy.
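In Java source, this is the familiar parent-delegation pattern of java.lang.ClassLoader; the sketch below is ours, with the class-file lookup left abstract.

// A findClass-style loader: loadClass (inherited from java.lang.ClassLoader)
// first delegates to the parent and calls findClass only if the parent fails.
class MyLoader extends ClassLoader {
    MyLoader(ClassLoader parent) { super(parent); }

    @Override
    protected Class<?> findClass(String name) throws ClassNotFoundException {
        byte[] b = loadClassData(name);           // fetch the class file bytes
        return defineClass(name, b, 0, b.length); // this loader becomes the defining loader
    }

    private byte[] loadClassData(String name) throws ClassNotFoundException {
        // hypothetical: read the bytes from wherever this loader obtains classes
        throw new ClassNotFoundException(name);
    }
}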
The second condition implies that if c.cl ⊑P l, then l has already resolved
c.name. Therefore, if wf_parent(E, P) holds, each loader in E is considered to
have resolved all the classes it can. In other words, E represents an
environment in which all possible class loadings have been completed. Ordinary
environments are thus considered as sub-environments of such environments.
We therefore use a different font for those environments.
Note that wf_parent(E, P) for such an environment E implies wf_class(E).
4.1.2 Parent Environment
We define a relation between environments that represents an extension of environments
through delegations to parent loaders.
Definition 4.3 (Parent Environment)
The second line of the definition states that a method defined by a
loader in E should already exist in the parent environment. The third line states that delegations are
allowed only to a direct parent loader in the parent environment.
4.2 Trusted Environments
Theorem (Trusted Environments) states that even if we follow the findClass
scheme, the JVM is guaranteed never to violate constraints only if all parent
loaders are system loaders.
We first define system environments.
Definition 4.4 (System Environment)
The above predicate defines a condition that system environments should satisfy.
That is, when all system classes are loaded, any class name appearing in any
method descriptor should have been resolved.
THEOREM (Trusted Environments)
The following proposition is satisfied if and only if i = 1. An environment is
called a trusted environment if it satisfies the consequence of the theorem.
The theorem states that E1 is a trusted environment and E2 is not. For example,
applet loaders are inside a certain E1, and applets never violate constraints.
Prior to the proof of the theorem, we introduce an additional relation. A one-step
constraint between l and l′ is a constraint derived without transitivity; we allow
(method resolution), (method overriding), and (symmetry) in Definition 2.7.
If we ignore the symmetry, there are two cases in which a one-step constraint
between l and l′ holds. Assume wf_method(E) in Section 2.3.5 for now.
There is a method resolution in which l is the context that resolves classname.
This implies that if wf_parent(E, P), then SRclass.cl ⊑P l. Note that the
corresponding subtyping fact also holds, from wf_method(E) and Def.3.1.
Generally, if the one-step subtyping relation holds, then for any P such that
wf_parent(E, P), the corresponding parent relation holds, from Def.2.1 and
Def.4.2. Consequently, the subtyping relation, sub, also implies ⊑P. Therefore,
if wf_parent(E, P), then the two constrained loaders are related by ⊑P.
Otherwise, we have the overrides relation. In this case, according to Section 3,
there are a class, c, which selects m, and a superclass of c, which selects m′.
This fact implies corresponding ⊑P relations between the defining loaders of the
two methods and the loader of c. From Def.4.1, if a loader has two different
(indirect or direct) parent loaders, then one is a parent of the other, so the two
constrained loaders are again related by ⊑P.
In both cases, we have the following result with respect to one-step constraints:
if there is a one-step constraint between l and l′, then l ⊑P l′ ∨ l′ ⊑P l. ..[i]
Furthermore, we can easily show a corresponding fact, [ii], in both cases.
Note that l ⊏P l′ abbreviates l ⊑P l′ ∧ l ≠ l′. From these results, Theorem
(Trusted Environments) can be proved as follows:
(proof)
Assume that E and P are given as in the theorem. Assume that l and l′
are related by a loading constraint and also l ⊏P l′. Def.4.3
implies that l should be a system loader. Therefore [ii] implies the
corresponding fact for l.
Note that a method defined by some system loader should be a
system method (cf. Def.4.3). From Def.4.4, we obtain the resolution of
the relevant class names.
Our next goal is to prove the following fact, [iii].
This is proved by induction on the derivation of the loading constraint
between l and l′. Suppose that a one-step constraint between l′ and l′′ is
appended to the above constraint between l and l′ to generate a constraint
between l and l′′ by the transitivity.
We assume [iii] by the induction hypothesis and show
l′ ⊑P l′′ ∨ l′′ ⊏P l′.
The left case of the disjunction, l′ ⊑P l′′, is easy since ⊑P is transitive.
Therefore, assume l′′ ⊏P l′. In this case, [iii] says that there
exists c′ in E1.TClass which satisfies the required condition.
Together with Def.4.2, this leads to the corresponding fact for l′′, and with
the induction hypothesis it follows, for some c, that the conclusion for l′′
is derived from E1 ⊢ c′.cl ⊑P l′′.
Lemma [iv] naturally leads to the predicate bridge_safe.
It is easy to show that bridge_safe(E) holds if system(E1) holds.
Otherwise, we can make a counterexample which violates constraints.
For example, E1 is as follows.
loader 0.
- Loader 0 defines class C.
  Class C has method M with signature [X].
Since the environment does not satisfy system(E1), we assume that class X is never
resolved by loader 0, i.e., the name X in the descriptor of m remains unresolved,
where m is the method M resolved by loader 0 (= m.cl). Additionally,
loader 1.
- Loader 1 defines class D.
  Class D calls method M of class C with signature [X].
- Loader 1 defines class X.
- Loader 1 delegates to loader 0 for other classes.
loader 2.
- Loader 2 defines class D.
  Class D extends C.
  Class D has method M with signature [X].
- Loader 2 defines class X.
- Loader 2 delegates to loader 0 for other classes.
We can assume that both loader 1 and loader 2 resolve C to c,
where c is the class, C, defined by loader 0. Obviously, M in loader 2
overrides m. Therefore, from the definition of the constraints, we have
a constraint on the name X between loader 1 and loader 2.
However, X in loader 1 and X in loader 2 are different. Therefore, bridge_safe
does not hold.
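Rendered as hypothetical Java source, the classes of the three loaders look roughly as follows; the two versions of D are defined by different loaders and never appear in one compilation unit, and each of loaders 1 and 2 resolves X to its own, incompatible definition.

// Defined by loader 0 (the shared parent); X is never resolved by this loader.
class C { void M(X x) { } }

// Defined by loader 1, which defines its own X and delegates to loader 0 for C.
// The method name "call" is hypothetical; what matters is the call to C.M.
class D {
    void call(C c) { c.M(new X()); }   // imposes a constraint on the name X
}

// Defined by loader 2, which also defines its own X; this D extends C and overrides M.
class D extends C {
    void M(X x) { }
}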
5 Conclusion
We have presented a new model of the JVM, which explains various unique features
of the JVM, and also specifies several conditions on its implementations.
In particular, the model includes the loading constraint scheme and the findClass
scheme, both of which are new features of JDK1.2. Through the formalization,
we could analyze the extremely subtle relationship between the loading constraint
scheme and the bytecode verification. We believe that such an analysis
is possible only through a rigorous formalization and soundness proofs.
However, our model excludes many features of the JVM: its primitive types,
field members, array types, member modifiers, threads, most of its instructions,
etc. We have several ideas for incorporating them into our model. For example,
our model can easily express the object and class finalization of the JVM. The
soundness theorem in Section 2.5 states that when an environment is updated
into a larger environment, the well-typedness invariant is preserved. Therefore,
if we can introduce a reduced environment which also preserves the same
invariant, then the soundness of the finalization is guaranteed.
As for the findClass scheme, we showed that it should work with the loading
constraint scheme. However, we have also obtained a method which allows
some loading constraints to be omitted under cooperation with the findClass
scheme. This result is not included in this paper, as we do not think that it is
the best solution, and we expect that both schemes should be improved in the
future.
--R
Linking and Modularization
Formal Aspects of Mobile Code Security
Web Browsers and Beyond.
A Type System for Object Initialization in the Java Bytecode Language
A specification
Security and Dynamic Class Loading in Java: A Formalisation
On a New Method for Dataflow Analysis of Java Virtual Machine Subroutines
Dynamic Class Loading in the Java Virtual Machine
The Java Virtual Machine Specification
Java light is type-safe - definitely
Proving the Soundness of a Java Bytecode Verifier
A Formal Specification
Java is not type-safe
Nicht verifizierter Code
A Type System for Java Bytecode Subroutines
Careful Analysis of Type Spoofing
A compositional account of the Java virtual machine
--TR
--CTR
Modeling multiple class loaders by a calculus for dynamic linking, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus | class loading;security |
609232 | Dependent Types for Program Termination Verification. | Program termination verification is a challenging research subject of significant practical importance. While there is already a rich body of literature on this subject, it is still undeniably a difficult task to design a termination checker for a realistic programming language that supports general recursion. In this paper, we present an approach to program termination verification that makes use of a form of dependent types developed in Dependent ML (DML), demonstrating a novel application of such dependent types to establishing a liveness property. We design a type system that enables the programmer to supply metrics for verifying program termination and prove that every well-typed program in this type system is terminating. We also provide realistic examples, which are all verified in a prototype implementation, to support the effectiveness of our approach to program termination verification as well as its unobtrusiveness to programming. The main contribution of the paper lies in the design of an approach to program termination verification that smoothly combines types with metrics, yielding a type system capable of guaranteeing program termination that supports a general form of recursion (including mutual recursion), higher-order functions, algebraic datatypes, and polymorphism. | Introduction
Programming is notoriously error-prone. As a conse-
quence, a great number of approaches have been developed
to facilitate program error detection. In practice, the programmer
often knows certain program properties that must
hold in a correct implementation; it is therefore an indication
of program errors if the actual implementation violates some
of these properties. For instance, various type systems have
been designed to detect program errors that cause violations
of the supported type disciplines.
It is common in practice that the programmer often knows
for some reasons that a particular program should terminate
if implemented correctly. This immediately implies
that a termination checker can be of great value for detecting
program errors that cause nonterminating program ex-
Partially supported by NSF grant no. CCR-0092703
ecution. However, termination checking in a realistic programming
language that supports general recursion is often
prohibitively expensive given that (a) program termination
in such a language is in general undecidable, (b) termination
checking often requires interactive theorem proving that can
be too involved for the programmer, (c) a minor change in a
program can readily demand a renewed effort in termination
checking, and (d) a large number of changes are likely to be
made in a program development cycle. In order to design a
termination checker for practical use, these issues must be
properly addressed.
There is already a rich literature on termination verifica-
tion. Most approaches to automated termination proofs for
either programs or term rewriting systems (TRSs) use various
heuristics, some of which can be highly involved, to
synthesize well-founded orderings (e.g., various path orderings
[3], polynomial interpretation [1], etc.). While these
approaches are mainly developed for first-order languages,
the work in higher-order settings can also be found (e.g.,
[7]). When a program, which should be terminating if implemented
correctly, cannot be proven terminating, it is often
difficult for the programmer to determine whether this
is caused by a program error or by the limitation of the
heuristics involved. Therefore, such automated approaches
are likely to offer little help in detecting program errors that
cause nonterminating program execution. In addition, automated
approaches often have difficulty handling realistic (not
necessarily large) programs.
The programmer can also prove program termination in
various (interactive) theorem proving systems such as NuPrl
[2], Coq [4], Isabelle [8] and PVS [9]. This is a viable practice
and various successes have been reported. However, the
main problem with this practice is that the programmer may
often need to spend much more time on proving the termination
of a program than on simply implementing
the program. In addition, a renewed effort may
be required each time when some changes, which are likely
in a program development cycle, are made to the program.
Therefore, the programmer can often feel hesitant to adopt
(interactive) theorem proving for detecting program errors in
general programming.
We are primarily interested in finding a middle ground. In
particular, we are interested in forming a mechanism in a programming
language that allows the programmer to provide
information needed for establishing program termination
fun ack m n =
  if m = 0 then n+1
  else if n = 0 then ack (m-1) 1
       else ack (m-1) (ack m (n-1))
withtype {i:nat,j:nat} <i,j> => int(i) -> int(j) -> [k:nat] int(k)
Figure 1. An implementation of Ackerman function
and then automatically verifies that the provided information
indeed suffices. An analogy would be like allowing the user
to provide induction hypotheses in inductive theorem proving
and then proving theorems with the provided induction
hypotheses. Clearly, the challenging question is how such
information for establishing program termination can be
formalized and then expressed. The main contribution of this
paper lies in our attempt to address the question by presenting
a design that allows the programmer to provide through
dependent types such key information in a (relatively) simple
and clean way.
It is common in practice to prove the termination of recursive
functions with metrics. Roughly speaking, we attach a
metric in a well-founded ordering to a recursive function and
verify that the metric is always decreasing when a recursive
function call is made. In this paper, we present an approach
that uses the dependent types developed in DML [18, 14] to
carry metrics for proving program termination. We form a
type system in which metrics can be encoded into types and
prove that every well-typed program is terminating. It should
be emphasized that we are not here advocating the design
of a programming language in which only terminating programs
can be written. Instead, we are interested in designing
a mechanism in a programming language, which, if the programmer
chooses to use it, can facilitate program termination
verification. This is to be manifested in that the type system
we form can be smoothly embedded into the type system of
DML. We now illustrate the basic idea with a concrete example
before going into further details.
In
Figure
1, an implementation of Ackerman function is
given. The withtype clause is a type annotation, which
states that for natural numbers i and j, this function takes
an argument of type int(i) and another argument of type
and returns a natural number as a result. Note that
we have refined the usual integer type int into infinitely
many singleton types int(a) for each integer a,
such that int(a) is precisely the type for integer expressions
with value equal to a. We write {i:nat,j:nat}
for universally quantifying over index variables i and j of
sort nat, that is, the sort for index expressions whose values
are natural numbers. Also, we write [k:nat] int(k)
for the sum of all types int(k), where k ranges over the natural
numbers. The novelty here is the pair <i,j>
in the type annotation, which indicates that this is the metric
to be used for termination checking. We now informally
explain how termination checking is performed in this case:
assume that i and j are two natural numbers and m and n
have types int(i) and int(j), respectively, and attach the
metric <i,j> to ack m n; note that there are three recursive
function calls to ack in the body of ack; we attach the metric
<i-1,1> to the first ack since m-1 and 1 have types
int(i-1) and int(1), respectively; similarly, we attach the
metric <i-1,k> to the second ack, where k is assumed to
be some natural number, and the metric <i,j-1> to the third
ack; it is obvious that <i-1,1> < <i,j>, <i-1,k> < <i,j>
and <i,j-1> < <i,j> hold, where < is the usual lexicographic
ordering on pairs of natural numbers; we thus claim
that the function ack is terminating (by a theorem proven in
this paper). Note that although this is a simple example, its
termination cannot be proven with (lexicographical) structural
ordering (as the semantic meaning of both addition
and subtraction is needed). 1
More realistic examples are to be presented in Section
5, involving dependent datatypes [15], mutual recursion,
higher-order functions and polymorphism. The reader may
read some of these examples before studying the sections on
technical development so as to get a feel as to what can actually
be handled by our approach.
Combining metrics with the dependent types in DML
poses a number of theoretical and pragmatic questions. We
briefly outline our results and design choices.
The first question that arises is to decide what metrics we
should support. Clearly, the variety of metrics for establishing
program termination is endless in practice. In this pa-
per, we only consider metrics that are tuples of index expressions
of sort nat and use the usual lexicographic ordering
to compare metrics. The main reasons for this decision are
that (a) such metrics are commonly used in practice to establish
termination proofs for a large variety of programs and
(b) constraints generated from comparing such metrics can
be readily handled by the constraint solver already built for
type-checking DML programs. Note that the usual structural
ordering on first-order terms can be obtained by attaching to
the term the number of constructors in the term, which can be
readily accomplished by using the dependent datatype mechanism
in DML. However, we are currently unable to capture
structural ordering on higher-order terms.
The second question is about establishing the soundness
of our approach, that is, proving every well-typed program
in the type system we design is terminating. Though the idea
mentioned in the example of Ackerman function seems intu-
itive, this task is far from being trivial because of the presence
of higher-order functions. The reader may take a look
at the higher-order example in Section 5 to understand this.
We seek a method that can be readily adapted to handle various
common programming features when they are added,
1 There is an implementation of Ackerman function that involves only
primitive recursion and can thus be easily proven terminating, but the point
we drive here is that this particular implementation can be proven terminating
with our approach.
including mutual recursion, datatypes, polymorphism, etc.
This naturally leads us to the reducibility method [12]. We
are to form a notion of reducibility for the dependent types
extended with metrics, in which the novelty lies in the treatment
of general recursion. This formation, which is novel to
our knowledge, constitutes the main technical contribution
of the paper.
The third question is about integrating our termination
checking mechanism with DML. In practice, it is common
to encounter a case where the termination of a function f depends
on the termination of another function g, which, unfor-
tunately, is not proven for various reasons, e.g., it is beyond
the reach of the adopted mechanism for termination checking
or the programmer is simply unwilling to spend the effort
proving it. Our approach is designed in a way that allows the
programmer to provide a metric in this case for verifying the
termination of f conditional on the termination of g, which
can still be useful for detecting program errors.
The presented work builds upon our previous work on the
use of dependent types in practical programming [18, 14].
While the work has its roots in DML, it is largely unclear,
a priori, how dependent types in DML can be used for establishing
program termination. We thus believe that it is a
significant effort to actually design a type system that combines
types with metrics and then prove that the type system
guarantees program termination. This effort is further
strengthened with a prototype implementation and a variety
of verified examples.
The rest of the paper is organized as follows. We form
a language ML ;
0 in Section 2, which essentially extends
the simply typed call-by-value -calculus with a form of dependent
types, developed in DML, and recursion. We then
extend ML ;to ML ;
in Section 3, combining metrics
with types, and prove that every program in ML ;
0; is termi-
nating. In Section 4, we enrich ML ;
with some significant
programming features such as datatypes, mutual recursion
and polymorphism. We present some examples in Section 5,
illustrating how our approach to program termination verification
is applied in practice. We then mention some related
work and conclude.
There is a full paper available on-line [16] in which the
reader can find details omitted here.
We start with a base language, which essentially extends
the simply typed call-by-value λ-calculus with a form
of dependent types, developed in DML, and (general) recursion. The syntax of
the language is given in Figure 2.
2.1 Syntax
We fix an integer domain and restrict type index expres-
sions, namely, the expressions that can be used to index a
type, to this domain. This is a sorted domain and subset sorts
can be formed. For instance, we use nat for the subset sort
{a:int | a >= 0}. We use base types indexed with
a sequence of index expressions, which may be empty. For
instance, bool(0) and bool(1) are types for boolean values
false and true, respectively; for each integer i, int(i) is the
singleton type for integer expressions with value equal to i.
We use satisfaction relation, which means
P holds under , that is, the formula ()P , defined below, is
satisfied in the domain of integers.
For instance, the satisfaction relation
holds since the following formula is true in the integer domain
Note that the decidability of the satisfaction relation depends
on the constraint domain. For the integer constraint domain
we use here, the satisfaction relation is decidable (as we do
not accept nonlinear integer constraints).
We use a :
: for the usual dependent
function and sum types, respectively. A type of form
: is essentially equivalent to a
where we use ~a : ~
for
n . 2 We also introduce
-variables and -variables in ML ;and use x and
f for them, respectively. A lambda-abstraction can only be
formed over a -variable while recursion (via fixed point op-
erator) must be formed over a -variable. A -variable is a
value but a -variable is not.
We use for abstracting over index variables, lam for abstracting
over variables, and fun for forming recursive func-
tions. Note that the body after either or fun must be a
value. We use hi j ei for packing an index i with an expression
e to form an expression of a dependent sum type, and
open for unpacking an expression of a dependent sum type.
2.2 Static Semantics
We write ' : to mean that is a legally formed type
under and omit the standard rules for such judgments.
index substitutions I ::= [] j I [a 7! i]
substitutions ::= [] j [x 7! e] j [f 7! e]
A substitution is a finite mapping and [] represents an empty
mapping. We use I for a substitution mapping index variables
to index expressions and dom( I ) for the domain of
I . Similar notations are used for substitutions on variables.
We write [ I ] ([]) for the result from applying I
() to
, where can be a type, an expression, etc. The standard
In practice, we also have types of form ~a : ~
: , which we omit here
for simplifying the presentation.
index constants c I ::=
index expressions i ::= a j c I j
index propositions P
index sorts
index variable contexts ::=
index constraints ::=
types ::= (~)
contexts
constants c ::= true
expressions e ::= c j x
values
Figure
2. The syntax for ML ;;
Figure
3. Typing Rules for ML ;definition is omitted. The following rules are for judgments
of form ' I : 0 , which roughly means that I has "type"
We write dom() for the domain of , that is, the set of
variables declared in . Given substitutions I
and , we say
We write for the congruent extension of
index expressions to types, determined by
the following rules. It is the application of these rules that
generates constraints during type-checking.
We present the typing rules for ML ;in
Figure
3. Some
of these rules have obvious side conditions, which are omit-
ted. For instance, in the rule (type-ilam), ~a cannot have free
occurrences in the typing context. The following lemma plays a pivotal role
in proving the subject reduction theorem for ML ;, whose
standard proof is available in [14].
Lemma 2.1 Assume ; derivable and
holds. Then we can derive ; '
2.3 Dynamic Semantics
We present the dynamic semantics of ML ;through the
use of evaluation contexts defined below. Certainly, there are
other possibilities for this purpose, which we do not explore
here. 3
evaluation contexts E ::=
We write E[e] for the expression resulting from replacing
the hole [] in E with e. Note that this replacement can never
result in capturing free variables.
Definition 2.2 A redex is defined below.
are redexes for false , which reduce
to e 1 and e 2 , respectively.
(lam x : :e)(v) is a redex, which reduces to e[x 7! v].
Let e be fun f [~a : ~
e is a redex, which
reduces to ~a : ~
:v[f 7! e].
:v)[~] is a redex, which reduces to v[~a 7!~].
open hi j vi as ha j xi in e is a redex, which reduces
to e[a 7! i][x 7! v].
We use r for a redex and write r ,! e if r reduces to e. If
e, we write e 1 ,! e 2 and say
reduces to e 2 in one step.
Let ,! be the reflexive and transitive closure of ,!. We say
reduces to e 2 (in many steps) if e 1 ,! e 2 . We omit the
standard proof for the following subject reduction theorem,
which uses Lemma 2.1.
Theorem 2.3 (Subject Reduction) Assume that a typing judgment for an expression e is
derivable. If e reduces to e', then the corresponding judgment for e' is also derivable.
2.4 Erasure
We can simply transform ML ;into a language ML 0
by erasing all syntax related to type index expressions in
. Then ML 0 basically extends simply typed -
calculus with recursion. Let jej be the erasure of expression
e. We have e 1 reducing to e 2 in ML ;
reducing
to je 2 j in ML 0 . Therefore, if e is terminating in ML ;then jej is terminating in ML 0 . This is a crucial point since
the evaluation of a program in ML ;
0 is (most likely) done
through the evaluation of its erasure in ML 0 . Please find
more details on this issue in [18, 14].
3 For instance, it is suggested that one present the dynamic semantics in
the style of natural semantics and then later form the notion of reducibility
for evaluation rules.
We combine metrics with the dependent types in ML ;,
forming a language ML ;
. We then prove that every well-typed
program in ML ;
is terminating, which is the main
technical contribution of the paper.
3.1
We use <= for the usual lexicographic ordering on tuples
of natural numbers and < for the strict part of <=. Given
two tuples of natural numbers <i1,...,in> and <j1,...,jn>,
<i1,...,in> < <j1,...,jn> holds if there is some 1 <= k <= n
such that il = jl for all l < k and ik < jk. Evidently,
< is well-founded. We stress that (in theory) there
is no difficulty supporting various other well-founded orderings
on natural numbers such as the usual multiset ordering.
We fix an ordering solely for easing the presentation.
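For reference, the strict lexicographic ordering used here can be written out as follows (a restatement in standard notation, not the paper's own formulation):

\[
\langle i_1,\ldots,i_n\rangle < \langle j_1,\ldots,j_n\rangle
\;\iff\;
\exists k.\; 1 \le k \le n \;\wedge\; (\forall l < k.\; i_l = j_l) \;\wedge\; i_k < j_k
\]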
Definition 3.1 (Metric) Let be a tuple of
index expressions and be an index variable context. We
say is a metric under if ' are derivable for
to mean is a metric
under .
A decorated type in ML ;
0; is of form ~a : ~
the following rule is for forming such types.
The syntax of ML ;
is the same as that of ML ;except
that a context in ML ;
maps every -variable f in its domain
to a decorated type and a recursive function in ML ;
is of form fun f [~a : ~
v. The process of
translating a source program into an expression in ML ;
is
what we call elaboration, which is thoroughly explained in
[18, 14]. Our approach to program termination verification
is to be applied to elaborated programs.
3.2 Dynamic and Static Semantics
The dynamic semantics of ML ;
is formed in precisely
the same manner as that of ML ;
0 and we thus omit all the
details.
The difference between ML ;
and ML ;lies in static
semantics. There are two kinds of typing judgments in
ML ;, which are of forms
0 . We call the latter a metric typing judgment, for which
we give some explanation. Suppose
and
roughly speaking, for each
free occurrence of f in e, f is followed by a sequence of
index expressions [~] such that [~a 7! ~], which we call
the label of this occurrence of f , is less than 0 under .
Now suppose we have a well-typed closed recursive function
0; and~ are of sorts ~
then f [~][f 7! holds; by the
rule (type-fun), we know that all labels of f in v are less than
[~a 7!~], which is the label of f in f [~]; since labels cannot
decrease forever, this yields some basic intuition on why all
recursive functions in ML ;
are terminating. However, this
intuitive argument is difficult to be formalized directly in the
presence of high-order functions.
The typing rules in ML ;
for a judgment of form ; '
are essentially the same as those in ML ;except the
following ones.
We present the rules for deriving metric typing judgments in
Figure 4. Given two metrics of the same length, the strict comparison judgment
under an index context means that for some k, the component equalities
are satisfied for all 1 <= j < k and the strict inequality at position k
is also satisfied.
Lemma 3.2 We have the following.
1. Assume ;
holds. Then we can derive ; '
2. Assume ; derivable and
dom(). Then
we can derive
Proof (1) and (2) are proven simultaneously by structural
induction on derivations of ;
Theorem 3.3 (Subject Reduction) Assume ;
derivable in ML ;
0; . If e ,! e 0 , then ; ' e
derivable in ML ;
Obviously, we have the following.
Proposition 3.4 Assume that D is a derivation ;
f 0 . Then then there is a derivation of ;
with the same height 4 as D.
3.3 Reducibility
We define the notion of reducibility for well-typed closed
expressions.
Definition 3.5 (Reducibility) Suppose that e is a closed expression
of type and e ,! v holds for some value v. The
reducibility of e is defined by induction on the complexity of
.
4 For a minor technicality reason, we count neither of the rules
(type-var) and (-var) when calculating the height of a derivation.
1. is a base type. Then e is reducible.
2. are reducible
for all reducible values v 1 of type .
3.
reducible if e[~] are reducible
4.
some i and v 1 such that v 1 is a reducible value of type
Note that reducibility is only defined for closed expressions
that reduce to values.
Proposition 3.6 Assume that e is a closed expression of type
and e ,! e 0 holds. Then e is reducible if and only if e 0 is
reducible.
Proof By induction on the complexity of .
The following is a key notion for handling recursion,
which, though natural, requires some technical insights.
Definition 3.7 (-Reducibility). Let e be a well-typed closed
recursive function fun f [~a : ~
be a
closed metric. e is 0 -reducible if e[~] are reducible for all
satisfying [~a 7!~] < 0 .
Definition 3.8 Let be a substitution that maps variables to
expressions; for every x 2 dom(), is x-reducible if (x)
is reducible; for every f 2 dom(), is (f; f )-reducible if
(f) is f -reducible.
In some sense, the following lemma verifies whether the
notion of reducibility is formed correctly, where the difficulty
probably lies in its formulation rather than in its proof.
Lemma 3.9 (Main Lemma) Assume that ; ' e : and
are derivable. Also assume that
is x-reducible for every x 2 dom() and for every f 2
derivable and
is (f; f )-reducible. Then e[ I ][] is reducible.
Proof Let D be a derivation of ; ' e : and we proceed
by induction on the height of D. We present the most
interesting case below. All other cases can be found in [16].
Assume that the following rule (type-fun) is last applied in
D,
where we have
and
Suppose that e I ][] is
not reducible. Then by definition there exist ~
1 such
that e [~ 0 ] is not reducible but e [~] are reducible for all
satisfying
~
In other words, e is f1 -
reducible for
that we can derive
Figure
4. Metric Typing Rules for ML ;
. By Proposition 3.4, there is a derivation D 1 of
such that the
height of D 1 is less than that of D. By induction hypothesis,
we have that v
Note that e [~ 0 ] ,! v and thus e [~ 0 ] is reducible, contradicting
the definition of~ 0 . Therefore, e is reducible.
The following is the main result of the paper.
Corollary 3.10 If ; ' e : is derivable in ML ;
in ML ;
is reducible and thus reduces to a value.
Proof The corollary follows from Lemma 3.9.
Extensions
We can extend ML ;
with some significant programming
features such as mutual recursion, datatypes and poly-
morphism, defining the notion of reducibility for each extension
and thus making it clear that Lemma 3.9 still holds
after the extension. We present in this section the treatment
of mutual recursion and currying, leaving the details in [16].
4.1 Mutual Recursion
The treatment of mutual recursion is slightly different
from the standard one. The syntax and typing rules for
handling mutual recursion are given in Figure 5. We use
the type of an expression representing n mutually
recursive functions of types respectively,
which should not be confused with the product of types
. Also, the n in e:n must be a positive (constant)
integer. Let v be the following expression.
funs
Then for every 1 k n, v:k is a redex, which reduces to
and we form a metric typing judgment ; ' e ~
f
0 for
verifying that all labels of f in e are less than 0 under
. The rules for deriving such a judgment are essentially
the same as those in Figure 4 except (-lab), which is given
below.
f in ~
f
The rule (-funs) for handling mutual recursion is straight-forward
and thus omitted.
Definition 4.1 (Reducibility) Let e be a closed expression of
reduces to v. e is reducible if e:k are
reducible for
4.2 Currying
A decorated type must so far be of form ~a : ~
and this restriction has a rather unpleasant consequence. For
types ::= j
expressions e ::= j e:n j funs f 1
values v ::= j funs f 1
~
f
f
(type-funs)
Figure
5. The Syntax and Typing Rules for Mutual Recursion
instance, we may want to assign the following type to the
implementation of Ackerman function in Figure 1:
fi:natg int(i) -> fj:natg int(j) -> int;
which is formally written as
If we decorate with a metric , then can only involve
the index variable a 1 , making it impossible to verify that the
implementation is terminating.
We generalize the form of decorated types to the following
so as to address the problem.
Also, we introduce the following form of expression e for
representing a recursive function.
We require that e 0 be a value if In the following, we
only deal with the case 1. For n > 1, the treatment is
similar. For
have e ,!
:e 0 and the following
typing rule
, and the following
metric typing rule
Definition 4.2 (-reducibility) Let e be a closed recursive
function
a closed metric. e is 0 -reducible if e[~ 1 ](v)[~] are reducible
for all reducible values
1 and
5 Practice
We have implemented a type-checker for ML ;
in a prototype
implementation of DML and experimented with various
examples, some of which are presented below. We also
address the practicality issue at the end of this section.
5.1 Examples
We demonstrate how various programming features are
handled in practice by our approach to program termination
verification.
Primitive Recursion The following is an implementation
of the primitive recursion operator R in Godel's T , which is
clearly typable in ML ;
. Note that Z and S are assigned
the types Nat(0) and {n:nat} Nat(n) -> Nat(n+1),
respectively.
datatype Nat with
Z(0) | {n:nat} S(n+1) of Nat(n)
fun R Z u v = u | R (S n) u v = v n (R n u v)
withtype {n:nat} <n> => Nat(n) -> 'a -> (Nat -> 'a -> 'a) -> 'a
(* Nat is for [n:nat] Nat(n) in a type *)
By Corollary 3.10, it is clear that every term in T is terminating
(or weakly normalizing). This is the only example in
this paper that can be proven terminating with a structural
ordering. The point we make is that though it seems
"evident" that the use of R cannot cause non-termination, it
is not trivial at all to prove every term in T is terminating.
Notice that such a proof cannot be obtained in Peano
arithmetic. The notion of reducibility is precisely invented
for overcoming the difficulty [12]. Actually, every term in
T is strongly normalizing, but this obviously is untrue in
0; .
Nested Recursive Function Call The program in Figure 6
involving a nested recursive function call implements Mc-
Carthy's ``91'' function. The withtype clause indicates
that for every integer x, f91(x) returns the integer 91 if x <= 100,
and x - 10 otherwise. We informally explain why the
metric in the type annotation suffices to establish the termination
of f91: for the inner call to f91, the required metric decrease
is immediate; for the outer call to f91, it follows from the bound
on the result of the inner call that is assumed in the annotated
result type. Clearly, this example can not be handled with a
structural ordering.
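Under the metric max(0, 101-x) and the annotated result type of f91, the two checks amount to the following constraints; this is our reconstruction of the obligations the type-checker discharges, not the paper's own display.

\[
x \le 100 \;\Rightarrow\; \max(0,\,101-(x+11)) < \max(0,\,101-x)
\]
\[
x \le 100 \;\wedge\; \big((x+11 \le 100 \wedge y = 91) \vee (x+11 > 100 \wedge y = x+1)\big)
\;\Rightarrow\; \max(0,\,101-y) < \max(0,\,101-x)
\]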
Mutual Recursion The program in Figure 7 implements
quicksort on a list, where the functions qs and par are defined
mutually recursively. We informally explain why this
program is typable in ML ;
0; and thus qs is a terminating
function by Corollary 3.10.
For the call to par in the body of qs, the label is <0+0+a, a+1>, where a is the
length of xs'. So we need to verify that <a, a+1> < <a+1, 0> is satisfied, which is
obvious.
For the two calls to qs in the body of par, we need to
verify that <p, 0> < <p+q+r, r+1> and <q, 0> < <p+q+r, r+1>, both
of which hold since p <= p+q+r, q <= p+q+r and 0 < r+1.
This also indicates why we need r+1 instead
of r in the metric for par.
For the two calls to par in the body of par, we need
to verify that <(p+1)+q+(r-1), (r-1)+1> < <p+q+r, r+1>
and <p+(q+1)+(r-1), (r-1)+1> < <p+q+r, r+1>, both of
which hold since the first components are equal and r < r+1.
This example, too, can not be handled with a structural ordering.
Higher-order Function The program in Figure 8 implements
a function accept that takes a pattern p and a string
s and checks whether s matches p, where the meaning of a
pattern is given in the comments.
The auxiliary function acc is implemented in continuation
passing style, which takes a pattern p, a list of characters
cs and a continuation k and matches a prefix of cs
against p and call k on the rest of characters. Note that k
is given a type that allows k to be applied only to a character
list not longer than cs. The metric used for proving
the termination of acc is hn; ii, where n is the size of p,
that is the number constructors in p (excluding Empty) and
i is the length of cs. Notice the call acc p cs 0 k in the
last pattern matching clause; the label attached to this call is
is the length of cs 0 ; we have i 0 i since the
continuation has the type a
where
must be false when this call hap-
pens; therefore we have It
is straightforward to see that the labels attached to other calls
to acc are less than hn; ii. By Corollary 3.10, acc is termi-
nating, which implies that accept is terminating (assuming
explode is terminating). In every aspect, this is a non-trivial
example even for interactive theorem proving systems.
Notice that the test length(cs 0 in the body
of acc can be time-consuming. This can be resolved by using
a continuation that accepts as its arguments both a character
list and its length. In [5], there is an elegant implementation
of accept that does some processing on the pattern to be
matched and then eliminates the test.
Run-time Check There are also realistic cases where termination
depends on a program invariant that cannot (or is difficult
to) be captured in the type system of DML. For instance,
the following example is adopted from an implementation of
bit reversing, which is a part of an implementation of fast
Fourier transform (FFT).
fun loop (j, k) =
  if (k<j) then loop (j-k, k/2) else j+k
withtype
{a:nat,b:nat} int(a) * int(b) -> int
Obviously, loop(1; 0) is not terminating. However, we may
know for some reason that the second argument of loop can
never be 0 during execution. This leads to the following im-
plementation, in which we need to check that k > 1 holds
before calling loop(j k; k=2) so as to guarantee that k=2 is
a positive integer.
fun loop (j, k) =
  if (k<j) then (if k>1 then loop (j-k, k/2) else raise Impossible)
  else j+k
withtype {a:nat,b:pos} <max(0, a-b)> =>
int(a) * int(b) -> int
It can now be readily verified that loop is a terminating func-
tion. This example indicates that we can insert run-time
checks to verify program termination, sometimes, approximating
a liveness property with a safety property.
5.2 Practicality
There are two separate issues concerning the practicality
of our approach to program termination verification, which
are (a) the practicality of the termination verification process
and (b) the applicability of the approach to realistic programs
5 Note that length(cs 0 ) and length(cs) have the types int(i 0 ) and
int(i), respectively, and thus length(cs has the type
depending on whether i 0 equals i.
Thus, can be inferred in the type system.
fun f91 x = if x <= 100 then f91 (f91 (x+11)) else x - 10
withtype {i:int} <max(0,101-i)> =>
         int(i) -> [j:int | (i <= 100 /\ j = 91) \/ (i > 100 /\ j = i-10)] int(j)
Figure 6. An implementation of McCarthy's ``91'' function
fun('a) qs cmp xs = case xs of [] => [] | x :: xs' => par cmp (x, [], [], xs')
withtype ('a * 'a -> bool) -> {n:nat} <n,0> => 'a list(n) -> 'a list(n)
and('a) par cmp (x, l, r, xs) =
case xs of
  [] => qs cmp l @ (x :: qs cmp r)
| x' :: xs' => if cmp(x', x) then par cmp (x, x' :: l, r, xs')
else par cmp (x, l, x' :: r, xs')
withtype ('a * 'a -> bool) -> {p:nat,q:nat,r:nat} <p+q+r,r+1> =>
'a * 'a list(p) * 'a list(q) * 'a list(r) -> 'a list(p+q+r+1)
Figure 7. An implementation of quicksort on a list
It is easy to observe that the complexity of type-checking
in ML ;
is basically the same as in ML ;since the only
added work is to verify that metrics (provided by the pro-
are decreasing, which requires solving some extra
constraints. The number of extra constraints generated from
type-checking a function is proportional to the number of recursive
calls in the body of the function and therefore is likely
small. Based on our experience with DML, we thus feel that
type-checking in ML ;
is suitable for practical use.
As for the applicability of our approach to realistic programs,
we use the type system of the programming language
C as an example to illustrate a design decision. Obviously,
the type system of C is unsound because of (unsafe) type
casts, which are often needed for typing C programs that
could not be typed otherwise. In spite of this, the
type system of C is still of great help for catching program
errors. Clearly, a similar design is to allow the programmer
to assert the termination of a function in DML when it cannot be
verified, which we may call a termination cast. Combining termination
verification, run-time checks and termination casts,
we feel that our approach holds promise for practical use.
6 Related Work
The amount of research work related to program termination
is simply vast. In this section, we mainly mention some
related work with which our work shares some similarity either
in design or in technique.
Most approaches to automated termination proofs for either
programs or term rewriting systems (TRSs) use various
heuristics to synthesize well-founded orderings. Such ap-
proaches, however, often have difficulty reporting comprehensible
information when a program cannot be proven ter-
minating. Following [13], there is also a large amount of
work on proving termination of logic programs. In [11], it is
reported that the Mercury compiler can perform automated
termination checking on realistic logic programs.
However, we address a different question here. We are
interested in checking whether a given metric suffices to establish
the termination of a program and not in synthesizing
such a metric. This design is essentially the same as the
one adopted in [10], which checks whether a given structural
ordering (possibly on higher-order terms) is decreasing in
an inductive proof or a logic program. Clearly, approaches
based on checking complement those based on synthesis.
Our approach also relates to the semantic labelling approach
[19] designed to prove termination for term rewriting
systems (TRSs). The essential idea is to differentiate
function calls with labels and show that labels are always
decreasing when a function call unfolds. The semantic labelling
approach requires constructing a model for a TRS to
verify whether labelling is done correctly while our approach
does this by type-checking.
The notion of sized types is introduced in [6] for proving
the correctness of reactive systems. There, the type system
is capable of guaranteeing the termination of well-typed
programs. The language presented in [6], which is designed
for embedded functional programming, contains a significant
restriction as it only supports (a minor variant of) primitive
recursion, which can cause inconvenience in programming.
For instance, it seems difficult to implement quicksort by using
only primitive recursion. From our experience, general
recursion is really a major programming feature that greatly
complicates program termination verification. Also, the notion
of existential dependent types, which we deem indispensable
in practical programming, does not exist in [6].
datatype pattern with nat =
    Empty(0)                  (* the empty string matches Empty *)
  | Char(1) of char           (* "c" matches Char (c) *)
  | {i:nat,j:nat} Plus(i+j+1) of pattern(i) * pattern(j)
      (* cs matches Plus(p1, p2) if cs matches either p1 or p2 *)
  | {i:nat,j:nat} Times(i+j+1) of pattern(i) * pattern(j)
      (* cs matches Times(p1, p2) if a prefix of cs matches p1 and
         the rest matches p2 *)
  | {i:nat} Star(i+1) of pattern(i)
      (* cs matches Star(p) if cs matches some, possibly 0, copies of p *)

(* 'length' computes the length of a list *)
fun('a) length (xs) = let
    fun len ([], n) = n
      | len (_ :: xs', n) = len (xs', n+1)
    withtype {i:nat,j:nat} <i> => 'a list(i) * int(j) -> int(i+j)
in
    len (xs, 0)
end
withtype {i:nat} <> => 'a list(i) -> int(i)
(* empty tuple <> is used since 'length' is not recursive *)

fun acc p cs k =
  case p of
    Empty => k (cs)
  | Char(c) =>
      (case cs of
         [] => false
       | c' :: cs' => if c = c' then k (cs') else false)
  | Plus(p1, p2) => (* in this case, k is used for backtracking *)
      if acc p1 cs k then true else acc p2 cs k
  | Times(p1, p2) => acc p1 cs (fn cs' => acc p2 cs' k)
  | Star(p0) =>
      if k (cs) then true
      else acc p0 cs (fn cs' =>
             if length(cs') = length(cs) then false else acc p cs' k)
withtype {n:nat} pattern(n) ->
         {i:nat} <n, i> => char list(i) ->
         ({i':nat | i' <= i} char list(i') -> bool) -> bool

(* 'explode' turns a string into a list of characters *)
fun accept p s = acc p (explode s) (fn cs' => case cs' of [] => true | _ :: _ => false)
withtype <> => pattern -> string -> bool

Figure 8. An implementation of pattern matching on strings

When compared to various (interactive) theorem proving systems
such as NuPrl [2], Coq [4], Isabelle [8] and PVS [9],
our approach to program termination is weaker (in the sense
that [many] fewer programs can be verified terminating) but
more automatic and less obtrusive to programming. We have
essentially designed a mechanism for program termination
verification with a language interface that is to be used during
the program development cycle. We consider this the main
contribution of the paper.
intends to facilitate program error detection, leading
to the construction of more robust programs.
7 Conclusion and Future Work
We have presented an approach based on dependent types
in DML that allows the programmer to supply metrics for
verifying program termination and proven its correctness.
We have also applied this approach to various examples that
involve significant programming features such as a general
form of recursion (including mutual recursion), higher-order
functions, algebraic datatypes and polymorphism, supporting
its usefulness in practice.
A program property is often classified as either a safety
property or a liveness property. That a program never performs
out-of-bounds array subscripting at run-time is a safety
property. It is demonstrated in [17] that dependent types in
DML can guarantee that every well-typed program in DML
possesses such a safety property, effectively facilitating run-time
array bound check elimination. It is, however, unclear
(a priori) whether dependent types in DML can also be used
for establishing liveness properties. In this paper, we have
formally addressed the question, demonstrating that dependent
types in DML can be combined with metrics to establish
program termination, one of the most significant liveness
properties.
Termination checking is also useful for compiler opti-
mization. For instance, if one decides to change the execution
order of two programs, it may be required to prove
that the first program always terminates. Also, it seems feasible
to use metrics for estimating the time complexity of
programs. In lazy functional programming, such information
may allow a compiler to decide whether a thunk should be
formed. In the future, we expect to explore these lines of
research.
Although we have presented many interesting examples
that cannot be proven terminating with structural orderings,
we emphasize that structural orderings are often effective in
practice for establishing program termination. Therefore, it
seems fruitful to study a combination of our approach with
structural orderings that handles simple cases with either automatically
synthesized or manually provided structural orderings
and verifies more difficult cases with metrics supplied
by the programmer.
--R
Termination of rewriting systems by polynomial interpretations and its implementation.
Implementing Mathematics with the NuPrl Proof Development System.
Orderings for term rewriting systems.
Proving the correctness of reactive systems using sized types.
The higher-order recursive path ordering
A Generic Theorem Prover.
PVS: Combining specification
Termination and Reduction Checking in the Logical Framework.
Termination Analysis for Mercury.
Intensional Interpretations of Functionals of Finite Type I.
Efficient tests for top-down termination of logic rules
Dependent Types in Practical Programming.
Dependently Typed Data Structures.
Dependent Types for Program Termination Verification.
Eliminating array bound checking through dependent types.
Dependent types in practical programming.
Termination of term rewriting by semantic labelling.
--TR
--CTR
Kevin Donnelly , Hongwei Xi, A Formalization of Strong Normalization for Simply-Typed Lambda-Calculus and System F, Electronic Notes in Theoretical Computer Science (ENTCS), v.174 n.5, p.109-125, June, 2007
Chiyan Chen , Hongwei Xi, Combining programming with theorem proving, ACM SIGPLAN Notices, v.40 n.9, September 2005
Kevin Donnelly , Hongwei Xi, Combining higher-order abstract syntax with first-order abstract syntax in ATS, Proceedings of the 3rd ACM SIGPLAN workshop on Mechanized reasoning about languages with variable binding, p.58-63, September 30-30, 2005, Tallinn, Estonia
Amir M. Ben-Amram , Chin Soon Lee, Program termination analysis in polynomial time, ACM Transactions on Programming Languages and Systems (TOPLAS), v.29 n.1, p.5-es, January 2007
Arne John Glenstrup , Neil D. Jones, Termination analysis and specialization-point insertion in offline partial evaluation, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.6, p.1147-1215, November 2005 | dependent types;termination |
609238 | Axioms for Recursion in Call-by-Value. | We propose an axiomatization of fixpoint operators in typed call-by-value programming languages, and give its justifications in two ways. First, it is shown to be sound and complete for the notion of uniform T-fixpoint operators of Simpson and Plotkin. Second, the axioms precisely account for Filinski's fixpoint operator derived from an iterator (infinite loop constructor) in the presence of first-class continuations, provided that we define the uniformity principle on such an iterator via a notion of effect-freeness (centrality). We then explain how these two results are related in terms of the underlying categorical structures. | Introduction
While the equational theories of fixpoint operators in call-by-name programming
languages and in domain theory have been extensively studied,
and there are now some canonical axiomatizations (including the
iteration theories [1] and Conway theories, equivalently traced cartesian
categories [12]; see [27] for the latest account), there seems to be no such
widely accepted result in the context of call-by-value (cbv) programming
languages, possibly with side effects. Although the implementation of
recursion in "impure" programming languages is well known, it
seems that the underlying semantic nature of recursive computation in
the presence of side effects has not been studied at a sufficiently general
level. Given the widespread use of call-by-value programming
languages and the importance of recursion in real-life programming, it
is desirable to have theoretically motivated and justified principles for
reasoning about recursive computation in a call-by-value setting.
In this paper we propose a candidate for such an axiomatization,
which consists of three simple axioms, including a uniformity principle
analogous to that in the call-by-name setting. Our axiomatization, of
stable uniform call-by-value fixpoint operators to be introduced below,
is justified by the following two main results:
† An extended abstract of this work appeared in Proc. Foundations of Software
Science and Computation Structures (FoSSaCS 2001), Springer LNCS Vol. 2030.
1. The λc-calculus (computational lambda calculus) [18] with a stable
uniform cbv fixpoint operator is sound and complete for the models
based on the notion of uniform T-fixpoint operators of Simpson and
Plotkin [27].
2. In the call-by-value λμ-calculus [25] (i.e., the λc-calculus plus first-class
continuations) there is a bijective correspondence between
stable uniform cbv fixpoint operators and uniform iterators, via
Filinski's construction of recursion from iteration [5].
The notion of uniform T-xpoint operators arose from the context of
Axiomatic Domain Theory [7, 26]. By letting T be a lifting monad on
a category of predomains, a uniform T-xpoint operator amounts to
a uniform xpoint operator on domains (the least xpoint operator in
the standard order-theoretic setting). In general, T can be any strong
monad on a category with nite products, thus a uniform T-xpoint
operator makes sense for any model of the computational lambda calculus
in terms of strong monads [18], and Simpson and Plotkin [27]
suggest the possibility of using uniform T-xpoint operators for modelling
call-by-value recursion. This line of considerations leads us to
our rst main result. In fact, we distill our axioms from the uniform
T-xpoint operators.
A surprise is the second one, in that the axioms precisely account for
Filinski's cbv xpoint operator derived from an iterator (innite loop
constructor) and rst-class continuations, provided that we rene Filin-
ski's notion of uniformity, for which the distinction between values and
eect-insensitive programs (characterised by the notion of centrality)
[22, 28, 10] is essential. Using our axioms, we establish the bijectivity
result between xpoint operators and iterators. Therefore here is an
interesting coincidence of a category-theoretic axiomatics (of Simpson
and Plotkin) with a program construction (of Filinski).
However, we also show that, after sorting out the underlying categorical
semantics, Filinski's construction combined with the Continuation-Passing
Style (CPS) transformation can be understood within the abstract
setting of Simpson and Plotkin. The story is summarised as
follows. As noted by Filinski, the CPS-transform of an iterator is a
usual (call-by-name) fixpoint operator on the types of the form R^A in
the target λ-calculus, where R is the answer type. If we let T be the
continuation monad R^{R^(-)}, then the uniform T-fixpoint operator precisely
amounts to the uniform fixpoint operator on the types R^A. Since
our first main result is that the stable uniform cbv fixpoint operator is
sound and complete for such uniform T-fixpoint operators, it turns out
that Filinski's construction combined with the CPS transformation can
be regarded as a consequence of the general categorical axiomatics; by
specialising it to the setting with a continuation monad, we obtain a
semantic version of the second main result.
Construction of this paper
In Section 2 we recall the c -calculus and the call-by-value -calculus,
which will be used as our working languages in this paper. In Section
3 we introduce our axioms for xpoint operators in these calculi (De-
nition and give basic syntactic results. Section 4 demonstrates how
our axioms are used for establishing Filinski's correspondence between
recursion and iteration (which in fact gives a syntactic proof of the
second main result). Up to this section, all results are presented in an
entirely syntactic manner. In Section 5 we start to look at the semantic
counterpart of our axiomatization, by recalling the categorical models
of the c -calculus and the call-by-value -calculus. We then recall the
notion of uniform T-xpoint operators on these models in Section 6,
and explain how our axioms are distilled from the uniform T-xpoint
operators (Theorem 2, the rst main result). In Section 7, we specialise
the result in the previous section to the models of the call-by-value -
calculus, and give a semantic proof of the second main result (Theorem
4). Section 8 gives some concluding remarks.
2. The Call-by-Value Calculi
The c -calculus (computational lambda calculus) [18], an improvement
of the call-by-value -calculus [21], is sound and complete for
1. categorical models based on strong monads (Moggi [18])
2. Continuation-Passing Style transformation into the -calculus
(Sabry and Felleisen [23])
and has proved useful for reasoning about call-by-value programs. In
particular, it can be seen as the theoretical backbone of (the typed
version of) the theory of A-normal forms [8], which enables us to
optimise call-by-value programs directly without performing the CPS
transformation.
For these reasons, we take the c -calculus as a basic calculus for
typed call-by-value programming languages. We also use an extension
of the c -calculus with rst-class continuations, called the call-by-
value -calculus, for which the soundness and completeness results
mentioned above have been extended by Selinger [25].
2.1. The λc-calculus
The syntax, typing rules and axioms on the well-typed terms of the
λc-calculus are summarised in Figure 1. The types, terms and typing
judgements are those of the standard simply typed lambda calculus
(including the unit ⊤ and binary products ×). 1 c^σ ranges over the
constants of type σ. As an abbreviation, we write let x be M in N for
(λx.N) M. FV(M) denotes the set of free variables in M. (As long as
there is no confusion, we may use italic small letters for both variables
and values. Capital letters usually range over terms, though we may
also use some capital letters like F, G, H for higher-order functional
values.) The crucial point is that we have the notion of values, and the
axioms are designed so that the above-mentioned completeness results
hold. Below we may call a term a value if it is provably equal to a value
defined by the grammar. We write g ∘ f for the composition λx.g (f x)
of values f and g, and id for the identity function λx.x.
In the sequel, we are concerned not just about the pure c -calculus
but also about its extensions with additional constructs and axioms.
A c -theory is a typed equational theory on the well-typed expressions
of the c -calculus (possibly with additional constructs) which is
a congruence on all term constructions and contains the axioms of the
c -calculus. A c -theory can be typically specied by the additional
axioms (as the congruence generated from them), or as the equational
theory induced by a model in the sense of Section 5, i.e. '
Centre and focus
In call-by-value languages, we often regard values as representing eect-
(nished or suspended) computations. While this intuition is valid,
the converse may not always be justied; in fact, the answer depends
on the computational eects under consideration.
DEFINITION 1 (centre, focus). In a λc-theory, we say that a term
M : σ is central if it commutes with any other computational effect,
that is,
   let x be M in let y be N in L = let y be N in let x be M in L : ρ
holds for any N : τ and L : ρ, where x and y are not free in M and
N. In addition, we say that M is focal if it is central and moreover
copyable and discardable, i.e., let x be M in ⟨x, x⟩ = ⟨M, M⟩
and let x be M in ⋆ = ⋆ hold.
1 We do not include the "computation types" Tσ and associated constructs, as
they can be defined by Tσ ≡ ⊤ → σ.
Types   σ ::= b | ⊤ | σ1 × σ2 | σ1 → σ2       (b ranges over base types)
Terms   M ::= x | c^σ | ⋆ | ⟨M, M⟩ | π1 M | π2 M | λx^σ.M | M M
Values  V ::= x | c^σ | ⋆ | ⟨V, V⟩ | π1 V | π2 V | λx^σ.M
Typing Rules: the standard rules of the simply typed lambda calculus
Axioms include:
   (λx.M) V = M[V/x]
   λx.V x = V
   let x be V in M = M[V/x]
   let x be M in x = M
   let y be (let x be L in M) in N = let x be L in let y be M in N
   M N = let f be M in let x be N in f x
   ⟨M, N⟩ = let x be M in let y be N in ⟨x, y⟩
together with the standard laws for the unit and products,
where let x be M in N stands for (λx.N) M

Figure 1. The λc-calculus
It is worth emphasising that a value is always focal, but the converse
is not true (see Section 7.3). A detailed analysis of these concepts in
several λc-theories is found in [10]; see also discussions in Section 5.
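To make the distinction concrete, here is a small SML illustration of ours (not from the paper): an effectful term such as a print expression is not central, because swapping it with another effect changes the observable output, whereas an effect-free computation commutes with anything.

val p1 = let val x = print "A" val y = print "B" in (x, y) end   (* prints AB *)
val p2 = let val y = print "B" val x = print "A" in (x, y) end   (* prints BA *)
(* by contrast, replacing print "A" with a pure term such as 1 + 2 makes the
   two orderings indistinguishable, so such a term is central (indeed focal) *)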
2.2. The call-by-value λμ-calculus
Our call-by-value λμ-calculus, summarised in Figure 2, is the version
due to Selinger [25]. We regard it as an extension of the λc-calculus
with first-class continuations and sum types (the empty type ⊥ and
binary sums +). We write ¬σ for the type σ → ⊥.
The typing judgements take the form Γ ⊢ M : σ | Δ, where Δ
is a sequence of names (ranged over by α, β, ...)
with their types. A judgement x1 : σ1, ..., xn : σn ⊢ M : σ | α1 : τ1, ..., αm : τm
represents a well-typed term M with at most n free variables x1, ..., xn and m free
names α1, ..., αm. We write FN(M) for
the set of free names in M. In this judgement, M can be thought of
as a proof of the sequent σ1, ..., σn ⊢ σ, τ1, ..., τm, or the proposition
σ1 ∧ ... ∧ σn ⊃ σ ∨ τ1 ∨ ... ∨ τm, in the classical propositional
logic. Among the additional axioms, the first one involves the mixed
substitution M[C(−)/[α](−)] for a term M, a context C(−) and a name
α, which is the result of recursively replacing any subterm of the form
[α]N by C(N) (and similarly for subterms of the form [α, β]N); see [25]
for further details on these syntactic conventions.

Figure 2. The call-by-value λμ-calculus: the additional types (⊥ and σ1 + σ2) and terms (μα.M, [α]M), with their typing rules and axioms, following Selinger [25]
Remark 1. We have chosen the cbv -calculus as our working language
rstly because we intend the results in this paper to be compatible
with the duality result of the second author [15] (see Section 7)
which is based on Selinger's work on the -calculus [25], and secondly
because it has a well-established categorical semantics, again thanks to
Selinger. However our results are not specic to the -calculus; they
apply also to any other language with similar semantics { for example,
we could have used Hofmann's axiomatization of control operators [13].
Also, strictly speaking, the inclusion of sum types (coproducts) is not
necessary in the main development of this paper, though they enable
us to describe iterators more naturally (as general feedback operators,
see Remark 3 in Section 4), are used in some principles on
iterators like the diagonal property (see Section 8), and are crucially needed
for the duality results in [25, 15].
Example 1. As an example, we can define terms C : ¬¬σ → σ for the
"double-negation elimination" and A : ⊥ → σ for the "initial map".
One can check that these combinators satisfy Hofmann's axioms in [13].
Also, we can use C and A directly in programming (as will be done
in the programming example in SML/NJ in Section 4).
Centre and focus
In the presence of rst-class continuations, central and focal terms
coincide [28, 25], and enjoy a simple characterisation (thunkability [28]).
LEMMA 1. In a cbv -theory, the following conditions on a term
are equivalent.
1. M is central.
2. M is focal.
3. (thunkability) let x be M in u >
We also note that central terms and values agree at function types [25].
LEMMA 2. In a cbv -theory, a term M : ! is central if and
only if it is a value, i.e., (with x not free in M) holds.
3. Axioms for Recursion
Throughout this section, we work in a λc-theory.
3.1. Rigid functionals
The key in our axiomatization of call-by-value fixpoint operators is
the notion of uniformity. In the call-by-name setting, we define the
uniformity for fixpoint operators with respect to the strict maps, i.e.,
those that preserve the bottom element (divergence). In a call-by-value
setting, however, we cannot define uniformity via this particular notion
of strict maps, simply because everything is strict: if an input does not
terminate, the whole program cannot terminate. Instead we propose to
define the uniformity principle with respect to a class of functionals
that use their argument functions in a constrained way.
DEFINITION 2 (rigid functional). A value H : (σ → τ) → (σ' → τ') is
called rigid if H (λx.M x) = λy.H M y holds for any M : σ → τ, where
x and y are not free in H and M.
The word "rigid" was coined by Filinski in [5] (see discussions in Section
7.3). Intuitively, a rigid functional uses its argument exactly once, and
it does not matter whether the argument is evaluated beforehand or
evaluated at its actual use.
LEMMA 3. If H : (σ → τ) → (σ' → τ') and H' : (σ' → τ') → (σ'' → τ'')
are rigid, so is H' ∘ H : (σ → τ) → (σ'' → τ'').
LEMMA 4. If H : (σ → τ) → (σ' → τ') is rigid, then H f = λy.H f y
holds (where f and y are not free in H).
Example 2. The reader may want to see rigid functionals in more
concrete ways. In the case of settings with first-class continuations, we
have such a characterisation of rigid functionals, see Section 7.3. In
general cases, a rigid functional typically takes the following form:
   H ≡ λf.λy.let x be f (h y) in N
where h : σ' → σ satisfies the following property: h v is central for any
value v : σ' (later, such an h will be called "total", Definition 4). N can
be any term, possibly with side effects. It is easily seen that such an H
satisfies H (λx.M x) = λy.H M y in any λc-theory. On the other hand,
in the presence of side effects, many purely functional terms fail to be
rigid: for example, constant functionals, as well as functionals like λf.f ∘ f.
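A concrete SML counterexample of ours (not from the paper): the constant functional below discards its argument, so whether the argument's effects happen early or at its (non-existent) use is observable, and H (λx.M x) = λy.H M y fails.

(* constH discards its argument entirely *)
fun constH (f : unit -> unit) = fn (y : unit) => ()
(* take M to be the effectful term (print "!"; fn x => x) *)
val lhs = constH (fn x => (print "!"; fn x' => x') x)   (* H (fn x => M x): never prints *)
val rhs = fn y => constH (print "!"; fn x' => x') y     (* fn y => H M y: prints "!" each time it is applied *)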
3.2. Axioms for recursion
Now we are ready to state the main definition of this paper: our axiomatization
of the call-by-value fixpoint operators.
DEFINITION 3 (stable uniform call-by-value fixpoint operator). A type-indexed
family of closed values fix^v_{σ,τ} : ((σ → τ) → (σ → τ)) → (σ → τ)
is called a stable uniform call-by-value fixpoint operator if the following
conditions are satisfied:
1. (cbv fixpoint) For any value F : (σ → τ) → (σ → τ),
   fix^v F = λx.F (fix^v F) x   (where x is not free in F)
2. (stability) For any value F : (σ → τ) → (σ → τ),
   fix^v (λf.λx.F f x) = fix^v F   (where f, x are not free in F)
3. (uniformity) For values F : (σ → τ) → (σ → τ), G : (σ' → τ') → (σ' → τ')
   and a rigid H : (σ → τ) → (σ' → τ'), if H ∘ F = G ∘ H
   holds, then H (fix^v F) = fix^v G.
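To see concretely why the right-hand side of the cbv fixpoint axiom is eta-expanded, here is a small SML sketch of ours (not from the paper): a naive fixpoint combinator diverges under call-by-value, while the eta-expanded one returns a value.

fun badfix f = f (badfix f)                (* badfix F loops before F is ever applied *)
fun cbvfix f = fn x => f (cbvfix f) x      (* cbvfix F is a value: cbvfix F = fn x => F (cbvfix F) x *)
val fact = cbvfix (fn f => fn n => if n <= 0 then 1 else n * f (n - 1))
(* fact 5 evaluates to 120, whereas badfix applied to the same functional would not terminate *)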
The first axiom is known as the call-by-value fixpoint equation; the
eta-expansion in the right-hand side means that fix^v F is equal to
a value. The second axiom says that, though the functionals F and
λf.λx.F f x may behave differently, their fixpoints, when applied to
values, satisfy the same fixpoint equation and cannot be distinguished.
The last axiom is a call-by-value variant of Plotkin's uniformity principle;
here the rigid functionals play the role of strict functions in the
uniformity principle for the call-by-name fixpoint operators. Our uniformity
axiom can be justified by the fact that H (fix^v F) satisfies the
same fixpoint equation as fix^v G whenever H ∘ F = G ∘ H holds:
   H (fix^v F) = H (λx.F (fix^v F) x)    (cbv fixpoint)
               = λy.H (F (fix^v F)) y    (H is rigid)
               = λy.G (H (fix^v F)) y    (H ∘ F = G ∘ H)
which is the fixpoint equation for fix^v G.
be any value so that holds. Take any term M of type
Hasegawa and Kakutani
by be M in y:H f y (with f , g not free
in M ). Then we have H H, and the uniformity would ask
G to be hold; it is easily seen that x v
and x v y, hence H must be rigid.
Remark 2. As easily seen, the uniformity implies that any rigid functional
preserves the xpoints of the identity maps: H
. It is tempting to
dene a notion of \call-by-value strictness" as this preservation of the
xpoints of identities. In the pure functional settings (like the call-by-
value PCF) where the divergence is the only eect, this call-by-value
strictness actually coincides with the rigidness (this can be veried by
inspecting the standard domain-theoretic model; see also Section 6).
However, in the presence of other eects (in particular the case with
rst-class continuations which we will study in the Section 4), rigidness
is a much stronger requirement than this call-by-value strictness; for
instance, the constant functional f:x v
id as well as the \twice" functional
are call-by-value strict (hence rigid in a pure functional
setting), but they are not rigid in many c -theories and cannot be used
in the uniformity principle.
3.3. On the axiomatizations of uniformity
There are some alternative ways of presenting the axioms of stable
uniform cbv xpoint operators. In particular, in [5] Filinski proposed a
single uniformity axiom which amounts to our stability and uniformity
axioms.
LEMMA 5. For values F
and g, y are not free in G).
F
G
Proof.
PROPOSITION 1. The stability axiom and uniformity axiom are equivalent
to the following Filinski's uniformity axiom [5]:
For
(with x, f not free in F ), then H G.
G
Proof. Stability and uniformity imply Filinski's uniformity, because,
stability
Conversely, Filinski's uniformity implies stability and uniformity. First,
for a value F
is rigid, by Filinski's uniformity
we have the stability x v
For uniformity, suppose that we have values F
holds. Then, by Lemma 5, we have H (f:x:F f
(g:y:G g y) H. By applying Filinski's uniformity axiom, we obtain
Since we have already seen that stability follows from Filinski's unifor-
mity, it follows that x v
we have H
4. Recursion from Iteration
For grasping the r^ole of our axioms, it is best to look at the actual
construction in the second main result: the correspondence of recursors
and iterators in the presence of rst-class continuations due to Filinski
[5]. So we shall describe this syntactic development before going into
the semantic investigation which is the main issue of this paper. In this
section we work in a call-by-value -theory, unless otherwise stated.
4.1. Axioms for iteration
As in the case of recursion, we introduce a class of functions for determining
the uniformity principle for an iterator.
DEFINITION 4 (total value). In a λc-theory, a value h : σ → τ is
called total if h v is central for any value v : σ.
The word "total" is due to Filinski [5], though in his original definition
h v is asked to be a value rather than a central term. 3
DEFINITION 5 (uniform iterator). A type-indexed family of closed values
loop_σ : (σ → σ) → σ → ⊥ is called a uniform iterator if the following
conditions are satisfied:
1. (iteration) For any value f : σ → σ,
   loop f = λx.loop f (f x)   (i.e. (loop f) x = loop f (f x) for any value x : σ)
2. (uniformity) For values f : σ → σ, g : τ → τ and h : σ → τ, if h
   is total and h ∘ f = g ∘ h, we have (loop g) ∘ h = loop f.
So it is natural to expect that (loop g) ∘ h behaves in the same way as
loop f for "well-behaved" h. The uniformity axiom claims that this is
the case when h is total.
It seems that this totality assumption is necessary. For example, let
f and g be identity functions and h an always-jumping function (such
as λx.μβ.[α]x, which is not total); then (loop g) ∘ h performs the jump to the label
α, while loop f just diverges.
3 We shall warn that there is yet another use of the word \total" by Filinski [4]
where a term is called total when it is discardable (in the sense of Denition 1); see
[29] for a detailed analysis on this concept. Another possible source of confusion is
that our notion of totality does not correspond to the standard notions of \total
relations" or \total maps (in domain theory)". However, in this paper we put our
priority on the compatibility with Filinski's development in [5].
Remark 3. Despite its very limited form, the expressive power of
an iterator is not so weak, as we can derive a general feedback operator
from an iterator using sums and first-class continuations; writing it as
feedback, it satisfies (with syntax sugar for sums)
   feedback f a = case f a of (in1 x => x | in2 y => feedback f y)
for values f : σ → τ + σ and a : σ.
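As a small illustration of this expressive power (our own SML/NJ sketch, using the bot, A and loop encoding of Figure 3 in Section 4.2): although loop itself never returns, a program can escape from it by throwing to a previously captured continuation, so ordinary terminating loops are expressible.

open SMLofNJ.Cont
fun sumTo n =
  callcc (fn k =>
    A (loop (fn (i, acc) => if i > n then throw k acc else (i + 1, acc + i))
            (1, 0)))
(* sumTo 10 evaluates to 55 *)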
4.2. The construction
Surprisingly, in the presence of first-class continuations, there is a bijective
correspondence between the stable uniform cbv fixpoint operators
and the uniform iterators. We recall the construction, which is
essentially the same as that in [5].
The construction is divided into two parts. For the first part, we
introduce a pair of contravariant constructions:
   step_{σ,τ} ≡ λH.λx.μα.H (λy.[α]y) x : (¬σ → ¬τ) → (τ → σ)
   pets_{σ,τ} ≡ λf.λk.λx.k (f x)        : (τ → σ) → (¬σ → ¬τ)
Note that here we need first-class continuations to implement step_{σ,τ}
(it has a "classical" type). One can easily verify that
LEMMA 6. step_{σ,τ} (pets_{σ,τ} f) = λx.f x holds.
pets_{σ,τ} (step_{σ,τ} H) = λk.λx.H k x holds.
LEMMA 7.
For values
is rigid or F is total.
pets
The following observation implies that the two notions of uniformity
for recursors and iterators are intimately related by this contravariant
correspondence.
LEMMA 8. step_{σ,τ} and pets_{σ,τ} give a bijective correspondence
between rigid functionals of type ¬σ → ¬τ and total functions of type τ → σ.
Proof. The only non-trivial part is that step ; ( ) sends a rigid
functional to a total function (the other direction and the bijectivity
follow immediately from Lemma 4 and Lemma 6). Suppose that H :
rigid. We show that step
central. This can be veried as follows:
let u be :H (y:[]y) x in let v be M in N
be M in N))x
be M in []N) x
let v be M in let u be :H (y:[]y) x in N
let v be M in :H (u:[]N) x
:(let v be M in H (u:[]N) x)
be M in u:[]N) x
Since H is rigid, it follows that
H (λu.let v be M in [α]N) x = let v be M in H (λu.[α]N) x.
We are then able to see that, if loop is a uniform iterator, the
composition
   fix^v_σ ≡ loop_σ ∘ step_{σ,σ} : (¬σ → ¬σ) → ¬σ
yields a stable uniform fixpoint operator restricted on the negative
types ¬σ. The cbv fixpoint axiom is verified by noting the equation
loop (step F) = λx.loop (step F) (step F x) together with Lemma 6.
The stability axiom holds as step (λf.λx.F f x) = step F. The uniformity
axiom follows from Lemma 7 and Lemma 8. If F : ¬σ → ¬σ,
G : ¬τ → ¬τ and H : ¬σ → ¬τ is rigid (hence total by Lemma 4) with
H ∘ F = G ∘ H, the first half of Lemma 7 implies (step H) ∘ (step G) = (step F) ∘ (step H).
Since step_{σ,τ} H is total by Lemma 8, by the uniformity of loop we have
(loop (step F)) ∘ (step H) = loop (step G), that is, H (fix^v F) = fix^v G.
Conversely, if fix^v is a stable uniform fixpoint operator,
   loop_σ ≡ fix^v_{σ,⊥} ∘ pets_{σ,σ} : (σ → σ) → ¬σ
gives a uniform iterator: the iteration axiom follows from the cbv fixpoint axiom, since
loop f = fix^v (pets f) = λx.(pets f) (fix^v (pets f)) x = λx.loop f (f x).
Again, the uniformity is a consequence of Lemma 7 and Lemma 8. One
direction of the bijectivity of these constructions is guaranteed by the
stability axiom (while the other direction follows from step_{σ,σ} (pets_{σ,σ} f) = λx.f x).
So we have established
PROPOSITION 2. There is a bijective correspondence between uniform
iterators and stable uniform cbv fixpoint operators restricted on
negative types.
The second part is to reduce fixpoints on an arrow type σ → τ to
those on a negative type ¬(σ × ¬τ). This is possible because we can
implement a pair of isomorphisms between these types (again using
first-class continuations):
   switch_{σ,τ}    ≡ λl.λ⟨x, k⟩.k (l x)       : (σ → τ) → ¬(σ × ¬τ)
   switch⁻¹_{σ,τ}  ≡ λh.λx.μα.h ⟨x, λy.[α]y⟩  : ¬(σ × ¬τ) → (σ → τ)
It is routinely seen that both switch⁻¹ (switch l) = λx.l x and
switch (switch⁻¹ h) = λp.h p hold. It is also easy to verify (by direct
calculation or by applying Proposition 8 in Section 7) that
LEMMA 9. switch_{σ,τ} and switch⁻¹_{σ,τ} are rigid.
By applying the uniformity axiom to the trivial equation
(switch ∘ F ∘ switch⁻¹) ∘ switch = switch ∘ F, we can define fix^v on σ → τ
from fix^v on ¬(σ × ¬τ) by
   fix^v_{σ,τ} F ≡ switch⁻¹ (fix^v (switch ∘ F ∘ switch⁻¹)).
PROPOSITION 3. There is a bijective correspondence between stable
uniform cbv fixpoint operators restricted on negative types and those on
general function types.
Proof. From a stable uniform cbv xpoint operator restricted on
negative types, one can dene that on general function types by taking
the equation above as denition; because of the uniformity, this in fact
is the unique possibility of extending the operator to that on all function
types. The only nontrivial point is that the uniformity axiom on this
dened xpoint operator on general function typed can be derived from
the uniformity axiom on the xpoint operator on negative types, which
we shall spell out below. Suppose that we have values F
such that H holds. Since rigid functionals are closed under
composition (Lemma 3) and switch and switch 1 are rigid (Lemma 9),
switch 1 H switch is also rigid. By applying the uniformity axiom
(on negative types) to the equation
(switch 1
(switch 1
we obtain
switch 1
which implies (by applying switch to both sides of the equation)
In summary, we conclude that, in the presence of first-class continuations,
stable uniform cbv fixpoint operators are precisely those derived
from uniform iterators, and vice versa:
THEOREM 1. There is a bijective correspondence between uniform
iterators and stable uniform cbv fixpoint operators.
The correspondence is given by composing the switch, step and pets
constructions with loop and fix^v as described above; code written in
SML/NJ [17, 11] is found in Figure 3.
(* an empty type "bot" with an initial map A : bot -> 'a *)
datatype bot = VOID of bot
fun A (VOID v) = A v
(* the C operator, C : (('a -> bot) -> bot) -> 'a *)
open SMLofNJ.Cont
fun C f = callcc (fn k => A (f (fn x => throw k x)))
(* basic combinators *)
fun step F x = C (fn k => F (fn y => k y) x)
fun switch l (x, k) = k (l x)
fun switch' h x = C (fn k => h (x, k))    (* the inverse of switch *)
(* an iterator, loop : ('a -> 'a) -> 'a -> bot *)
fun loop f x : bot = loop f (f x)
(* recursion from iteration *)
fun fix F = switch' (loop (step (switch o F o switch')))
Figure 3. Coding in SML/NJ (versions based on SML '97 [17])
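As a quick usage sketch (ours, not from the paper): with the definitions above, the derived fix satisfies the call-by-value fixpoint equation and can be run directly in SML/NJ, e.g.

val fact = fix (fn f => fn n => if n <= 0 then 1 else n * f (n - 1))
val r = fact 5   (* evaluates to 120 *)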
5. Categorical Semantics
The rest of this paper is devoted to investigating the semantic counterpart
of our stable uniform cbv xpoint operators and for giving
our two main results in a coherent way. In this section we recall some
preliminaries on the underlying categorical structures which will be
used in our semantic development.
5.1. Models of the c -calculus
Let C be a category with nite products and a strong monad
and are the unit and multiplication of the monad
T , and is the tensorial strength with respect to the nite products of C
(see e.g. [18, 19] for these category-theoretic concepts). We write C T for
the Kleisli category of T , and for the associated left adjoint
explicitly, J is the identity on objects and sends f 2 C (X; Y )
to Y f 2 C T (X; Y We assume that C has Kleisli
exponentials, i.e., for every X in C the functor J((
has a right adjoint X This gives the structure for
modelling computational lambda calculus [18]. Specically, we x an
object for each base type b and dene the interpretation of types
as well-typed
interpreted inductively as a morphism of
once we x the interpretations of constants; see
Appendix
A for a summary. Following Moggi, we call such a structure
a computational model.
PROPOSITION 4. [18] The computational models provide a sound
and complete class of models of the computational lambda calculus.
In fact, we can use the c -calculus as an internal language of a computational
model { up to the choice of the base category C (which
may correspond to either syntactically dened values, or more semantic
values like thunkable terms, or even something between them; see [10]
for a detailed consideration on this issue) { in a similar sense that
the simply typed lambda calculus is used as an internal language of a
cartesian closed category [16].
5.2. Models of the call-by-value -calculus
Let C be a distributive category, i.e., a category with finite products and
coproducts so that A × (−) preserves finite coproducts for each
A. We call an object R a response object if there exists an exponential
R^A for each A, i.e., C(X × A, R) ≅ C(X, R^A) holds. Given such a
structure, we can model the cbv λμ-calculus in the Kleisli category C_T
of the strong monad T = R^{R^(−)} [25]. A term Γ ⊢ M : σ | Δ is interpreted
as a morphism of C_T([[Γ]], [[σ]] + [[τ1]] + ... + [[τm]])
for Δ = α1 : τ1, ..., αm : τm. The interpretation is in fact a typed
version of the call-by-value CPS transformation [21, 25], as sketched in
Appendix
B. Following Selinger, we call C a response category and the
Kleisli category C T a category of continuations and write R C for C T
(though in [25] a category of continuations means the opposite of R C ).
PROPOSITION 5. [25] The categories of continuations provide a sound
and complete class of models of the cbv -calculus.
As the case of the c -calculus, we can use the cbv -calculus as an
internal language of a category of continuations [25].
5.3. Centre and focus
We have already seen the notion of centre and focus in the c -calculus
and the cbv -calculus in a syntactic form (Denition 1). However,
these concepts originally arose from the analysis on the category-theoretic
models given as above. Following the discovery of the premonoidal
structure on the Kleisli category part C T (R C ) of these models [22],
Thielecke [28] proposed a direct axiomatization of R C not depending
on the base category C (which may be seen as a chosen category of
\values") but on the subcategory of \eect-free" morphisms of R C ,
which is the focus (equivalently centre) of R C . Fuhrmann [10] carries
out further study on models of the c -calculus along this line.
DEFINITION 6 (centre, semantic denition). Given a computational
model with the base category C and the strong monad T , an arrow
called central if, for any g :
compositions (Y g) (f
Note that the products are not necessarily bifunctorial on C T ; they
form premonoidal products in the sense of [22] (the reader familiar
with this notion might prefer to
use
instead of for indicating that
they are not cartesian products). This notion of centrality amounts to
the semantic version of centrality in Denition 1.
In this paper we do not go into the further details of these semantic
analyses. However, we will soon see that these concepts naturally arise
in our analysis of the uniformity principles for recursors and iterators.
In particular, a total value (equivalently the term x : '
precisely corresponds to the central morphisms in the semantic
models. In the case of the models of the cbv -calculus, the centre
can be characterised in terms of the category of algebras, for which our
uniformity principles are dened; that is, we have
PROPOSITION 6. f central if and only
if its counterpart in C is an algebra morphism from the algebra (R B
to (R A ; R A ).
We discuss more about this in Section 7; there this observation turns
out to be essential in relating the uniformity principles for recursion
and iteration in the cbv -theories. We note that this result has been
observed in various forms in [28, 25, 10].
4 In terms of C , f 2 C T (X; Y
holds for any g 2 C T (X are the
left-rst and right-rst pairings (Appendix A).
6. Uniform T -Fixpoint Operators
In this section we shall consider a computational model with the base
category C and a strong monad T .
6.1. Uniform T-Fixpoint Operators
We rst recall the notion of uniform T -xpoint operator of Simpson
and Plotkin [27], which arose from considerations on xpoint operators
in Axiomatic Domain Theory (ADT) [7, 26]. In ADT, we typically
start with a category C of predomains, for example the category of
!-complete partial orders (possibly without bottom) and continuous
functions. Then we consider the lifting monad T on C , which adds a
bottom element to !-cpo's. Then objects of the form TX are pointed
cpo's (!-cpo's with bottom), on which we have the least xpoint op-
erator. It is also easily checked that such a pointed cpo has a unique
T -algebra structure (in fact any T -algebra arises in this way in this
setting, though we will soon see that this is not the case if we take a
continuation monad as T ), and an algebra morphism is precisely the
bottom-preserving maps, i.e., the strict ones. As is well known, the least
xpoint operator enjoys the uniformity principle with respect to such
strict maps. By abstracting this situation we have:
DEFINITION 7 (uniform T-fixpoint operator [27]). A T-fixpoint operator
on C is a family of functions
   (−)^* : C(TX, TX) → C(1, TX)
such that, for any f : TX → TX, f ∘ f^* = f^* holds. It is called
uniform if, for any f : TX → TX, g : TY → TY and h : TX → TY,
h ∘ μ_X = μ_Y ∘ Th and h ∘ f = g ∘ h imply h ∘ f^* = g^*.
Thus a T-fixpoint operator is given as a fixpoint operator restricted
on the objects of the form TX. One may easily check that, in the
domain-theoretic example sketched as above, the condition h ∘ μ_X =
μ_Y ∘ Th says that h is a strict map.
This limited form of xpoint operators, however, turns out to be
sucient to model a call-by-value xpoint operator. To see this, suppose
that we are given an object A with a T -algebra structure :
(that is, we ask
we have ( f
Therefore we can extend a T-xpoint operator ( ) to be a xpoint
objects with T -algebra structure by dening f
Moreover, given a uniform T-xpoint operator ( ) , it is easy to see
that this extended xpoint operator ( ) on T -algebras is uniform in
the following sense: for T -algebras
h is a T -algebra morphism) and g
f
Furthermore, such a uniform extension is unique: given a uniform
xpoint operator ( ) on objects with T -alpgebra structure, by applying
this uniformity to ( f
completely
determined by its restriction on free algebras (TX;X ), i.e., a uniform
T-xpoint operator.
In particular, Kleisli exponentials X ) Y t in this scheme, where
the T -algebra structure given as the
adjoint mate (currying) of
(see
Appendix
A for notations). Since we interpret a function type as
a Kleisli exponential, this fact enables us to use a uniform T-xpoint
operator for dealing with a xpoint operator on function types.
We note that corresponds to
an eta-expansion in the c -calculus. That is, if a term '
represents an arrow f
LEMMA 10. For any ' M : ! ,
holds.
Proof.
This observation is frequently used in distilling the axioms of the stable
uniform cbv xpoint operators below.
6.2. Axiomatization in the c -calculus
Using the c -calculus as an internal language of C T , the equation f
f f on X ) Y can be represented as
The side condition means that F corresponds
to an arrow in C (X
the operator ( can be
equivalently axiomatized by a slightly dierent operator
subject to f z = X;Y f f z , with an additional condition f
X;Y f) z . In fact, we can dene such a ( ) z as ( X;Y ( )) and
conversely and it is easy to see that these are in
bijective correspondence. The condition f equivalently
z , is axiomatized in the c -calculus as (by recalling
that X;Y ( ) gives an eta-expansion)
which is precisely the cbv xpoint axiom. The additional condition
f) z is axiomatized as
F is a value
This is no other than the stability axiom. We thus obtain the rst two
axioms of our stable uniform cbv xpoint operators, which are precisely
modelled by T-xpoint operators.
6.3. Uniformity axiom
Next, we shall see how the uniformity condition on T-xpoint operators
can be represented in the c -calculus. Following the previous discus-
sions, we consider H it is an algebra
morphism from (X ) Y; X;Y ) to (X 0 Spelling out this
condition, we ask H to satisfy H equivalently
In terms of the c -calculus,
this means that an eta-expansion commutes with the application of H;
therefore, in the c -calculus, we to be a
value such that
holds for any M : We have called such an H rigid, and dened
the uniformity condition with respect to such rigid functionals.
Remark 4. Actually the uniformity condition obtained by the argument
above is as follows, which is slightly weaker than stated in
Denition 3:
For
G.
However, thanks to Lemma 5, we can justify the uniformity axiom in
Denition 3.
6.4. Soundness and completeness
Now we give one of the main results of this paper.
THEOREM 2. The computational models with a uniform T-fixpoint
operator provide a sound and complete class of models of the computational
lambda calculus with a stable uniform call-by-value fixpoint
operator.
5 A characterisation of rigid functions (on computation types) in the same spirit
is given in Filinski's thesis [6] (Section 2.2.2) though unrelated to the uniformity of
This extends Proposition 4 with the stable uniform call-by-value
xpoint operator and uniform T-xpoint operators. Most part of soundness
follows from a routine calculation. However, the interpretation of
the stable uniform call-by-value xpoint operator and the verication
of the axioms do require some care: we need to consider a parameterized
xpoint operator (with parameterized uniformity) for interpreting
the free variables. Thus we have to parameterize the considerations in
Section 6.1. This can be done along the line of Simpson's work [26].
Below we outline the constructions and results needed for our purpose.
PROPOSITION 7. A uniform T -xpoint operator uniquely extends to
a family of functions
where X ranges over objects of C and
such that
1. (parameterized xpoint) For
f
holds
2. (parameterized uniformity) For
Th X;A and h h X;A imply g
ThX;A
A
Here we only give the construction of f
A and omit the proof (which largely consists of lengthy
diagram chasings and we shall leave it for interested readers { see also
[26]). Let
(recall that is the T -algebra structure on
Using , we dene dfe
Finally we have f
A. By trivialising
the parameterization and by considering just the free algebras, one can
recover the original uniform T-xpoint operator. The uniqueness of the
extension follows from the uniformity (essentially in the same way as
described in Section 6.1).
Using this parametrically uniform parameterized xpoint operator,
now it is not hard to interpret a stable uniform call-by-value xpoint
operator in a computational model with a uniform T-xpoint operator
and check that all axioms are validated.
Completeness is shown by constructing a term model, for which
there is no diculty. Since the uniform T-xpoint operator on this
term model is directly dened by the stable uniform call-by-value x-
point operator on the types also because we have
already observed that rigid functionals are characterized as the algebra
morphisms in this model, this part is truly routine.
7. Recursion from Iteration Revisited
7.1. Iteration in the category of continuations
Let C be a response category with a response object R. An iterator
on the category of continuations R C is a family of functions (
R C
Spelling out this denition in C , to give an iterator on R C is to give a
family of functions ( (R A ; R A
holds for f 2 C (R A ; R A ). Thus an iterator on R C (hence in the cbv -
calculus) is no other than a xpoint operator on C (hence the target
call-by-name calculus) restricted on objects of the form R A (\negative
objects").
Example 3. We give a simple-minded model of the cbv -calculus
with an iterator. Let C be the category of !-cpo's (possibly without
bottom) and continuous maps, and let R be an !-cpo with bottom.
Since C is a cartesian closed category with nite coproducts, it serves
as a response category with the response object R. Moreover there is
a least xpoint operator on the negative objects R A because R A has
a bottom element, thus we have an iterator on R C (which in fact is a
unique uniform iterator in the sense below).
Remark 5. A careful reader may notice that we actually need a parameterized
version of the iterator for interpreting free variables as
well as free names: should be dened as a function from R C (X
A+Y ) to R C (X A; Y ). However, this parameterization, including
that on uniformity discussed below, can be done in the same way as in
the previous section (and is much easier); a uniform iterator uniquely
extends to a parametrically uniform parameterized iterator { we leave
the detail to the interested reader.
7.2. Relation to uniform T-fixpoint operators
For any object A, the negative object R A canonically has a T -algebra
structure
:x A :m (f R A
for the monad
. Thus the consideration on the uniform T -
xpoint operators applies to this setting: if this computational model
has a uniform T-xpoint operator, then we have a xpoint operator on
negative objects, hence we can model an iterator of the cbv -calculus
in the category of continuations.
Conversely, if we have an iterator on R C , then it corresponds to a
xpoint operator on negative objects in C , which of course include
objects of the form
. Therefore we obtain a T-xpoint
operator. It is then natural to expect that (along the consideration
in Section 6.1), if the iterator satises a suitable uniformity condition,
then it bijectively corresponds to a uniform T-xpoint operator. This
uniformity condition on an iterator must be determined again with respect
to algebra morphisms. So we regard h 2 R C
as \strict" when its counterpart in C (R B ; R A ) is an algebra morphism
from (R to (R A ; A ), i.e., h
holds in C . We
say that an iterator ( ) on R C is uniform if f holds for
THEOREM 3. Given a response category C with a response object R,
to give a uniform R^{R^(−)}-fixpoint operator on C is to give a uniform
iterator on R^C.
Proof. Immediate, since a uniform R^{R^(−)}-fixpoint operator uniquely
extends to a uniform fixpoint operator on negative objects (hence a
uniform iterator); the uniqueness of the extension follows from the
uniformity (by the same argument as given in Section 6.1). □
Fortunately, the condition to be an algebra morphism is naturally
represented in a cbv -theory. A value h : A!B represents an algebra
morphism if and only if
holds { in fact, the CPS transformation (see Appendix B) of this equation
is no other than the equation h
. By Lemma 1, in
a cbv -theory, this requirement is equivalent to saying that hx is a
central term for each value x (this also implies Proposition 6 in Section
5), hence h is total. Therefore we obtain the uniformity condition for
an iterator in Section 4. This is remarkable, as it says that the idea of
dening the uniformity principle of xpoint operators with respect to
algebra morphisms (from ADT) and the idea of dening the uniformity
principle of iterators with respect to eect-free morphisms (from Filin-
ski's work) coincide in the presence of rst-class continuations, despite
their very dierent origins; technically, this is the substance of the left-
to-right implication of Proposition 6. In summary, we have semantically
shown Theorem 1:
THEOREM 4 (Theorem 1 restated). In a cbv λμ-theory, there is a bijective
correspondence between the stable uniform cbv fixpoint operators
and the uniform iterators.
In a sense, the syntactic proof in Section 4 gives an example of direct
style reasoning, whereas this semantic proof provides a continuation-passing
style reasoning on the same result. We can choose either stable
uniform xpoint operators (in syntactic, direct style) or uniform T -
xpoint operators (in semantic, monadic or continuation-passing style)
as the tool for reasoning about recursion in call-by-value setting; they
are as good as the other (thanks to Theorem 2).
7.3. On Filinski's uniformity
In [5] Filinski introduced uniformity principles for both cbv xpoint
operators and iterators, for establishing a bijective correspondence between
them. While his denitions turn out to be sucient for his
purpose, in retrospect they seem to be somewhat ad hoc and are strictly
weaker than our uniformity principles. Here we give a brief comparison.
First, Filinski calls a value is a value for
each value v : . However, while a value is always central, the converse
is not true. Note that, while the notion of centre is uniquely determined
for each cbv -theory (and category of continuations), the notion
of value is not canonically determined (a category of continuations
can arise from dierent response categories [25]). Since the uniformity
principle is determined not in terms of the base category C but in terms
of the category of algebras, it seems natural that it corresponds to the
notion of centre which is determined not by C but by C T .
Second, Filinski calls a value H
there are total such that
holds (cf. Example 2). It is easily checked that if H is rigid in the sense
of Filinski, it is also rigid in our sense { but the converse does not
hold, even if we change the notion of total values to ours (for instance,
switch ; in Section 4 is not rigid in the sense of Filinski). By closely
inspecting the correspondence of rigid functionals and total functions
via the step/pets and switch constructions, we can strengthen Filinski's
formulation to match ours:
PROPOSITION 8. In a cbv
if and only if there are total
such that
holds.
Proof. By pre- and post-composing switch and switch 1 , rigid functionals
of are in bijective correspondence with
those of :( are, by Lemma 8, in bijective
correspondence with the total functions of
the step/pets construction. A total function of (
is equal to hy; ki:hh 2 hy; ki; h 1
total functions We note
that total functions of are in bijective correspondence
with those of
we take hy; ki:x:k (g y
In summary, for any rigid functional H
have total such that
holds. By simplifying the right hand side of this equation, we obtain
the result. 2
This subsumes Filinski's rigid functionals as special cases where h 2 does
not use the second argument.
8. Conclusion and Further Work
We have proposed an axiomatization of xpoint operators in typed
call-by-value programming languages, and have shown that it can be
justied in two dierent ways: as a sound and complete axiomatization
for uniform T-xpoint operators of Simpson and Plotkin [27], and also
by Filinski's bijective correspondence between recursion and iteration
in the presence of rst-class continuations [5]. We also have shown that
these results are closely related, by inspecting the semantic structure
behind Filinski's construction, which turns out to be a special case of
the uniform T-xpoint operators.
We think that our axioms are reasonably simple, and we expect they
can be a practical tool for direct-style reasoning about call-by-value
programs involving recursion, just in the same way as the equational
theory of the computational lambda calculus is the theoretical basis of
the theory of A-normal forms [23, 8].
8.1. Further principles for call-by-value recursion
It is an interesting challenge to strengthen the axioms in some systematic
ways. Below we give some results and perspectives.
Dinaturality, diagonal property, and Iteration Theories
By adding other natural axioms on an iterator in the presence of rst-
class continuations, one may derive the corresponding axioms on the
cbv xpoint operator. In particular, we note that the dinaturality
loop (g
on an iterator loop precisely amounts to the axiom
on the corresponding cbv xpoint operator x v (note that this axiom
implies both the cbv xpoint axiom and the stability axiom). Similarly,
the diagonal property on the iterator
loop (x::[; ](f
corresponds to that on the xpoint operator
These can be seen axiomatizing the call-by-value counterpart of Conway
theories [1, 12]. In [27], Simpson and Plotkin have shown that the
equational theory induced by a uniform Conway operator (provided it
is consistent) is the smallest iteration theory of Bloom and
Esik [1],
which enjoys very general completeness theorem. Regarding this fact,
we conjecture that our axioms for stable uniform cbv xpoint operators
together with the dinaturality and diagonal property capture all the
valid identities on the cbv xpoint operators, at least in the presence
of rst-class continuations.
Mutual recursion and extensions to product types
One may further consider the call-by-value version of the Bekic property
(another equivalent axiomatization of dinatural and diagonal properties
[12]) along this line, which could be used for reasoning about mutual
recursion. For this purpose it is natural to extend the denition of
xpoint operators on product types of function types, and also extend
the notion of rigid functionals to those with multiple parameters. These
extensions are syntactically straightforward and semantically natural
(as the category of algebras is closed under nite products). Spelling
this out, for , we can (uniquely) extend
the xpoint operator on 0 by (using the idea of Section
property is
stated as, for
. For example, from
Bekic property and uniformity, we can show equations like x v
Fixpoint objects
Another promising direction is the approach based on fixpoint objects
[2], as a uniform T-fixpoint operator is canonically derived from a fixpoint
object whose universal property implies strong proof principles.
For instance, in Example 3, a uniform iterator is unique because the
monad R^{R^{(-)}} has a fixpoint object. For the setting with first-class
continuations, it might be fruitful to study the implications of the existence
of a fixpoint object of continuation monads.
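For convenience, recall the standard definition of the continuation monad with answer object R (a textbook fact, not specific to this paper's development):

\[ T\,A \;=\; R^{R^{A}}, \qquad \eta_{A}(a) \;=\; \lambda k.\, k\,a. \]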
Graphical axioms
Jeffrey [14] argues for the possibility of partial traces as a foundation of
graphical reasoning about recursion in call-by-value languages. Schweimeier
and Jeffrey [24] demonstrate that such graphical axioms can be used to
verify the closure conversion phase of a compiler. A similar consideration
is found in Führmann's thesis [10]. It follows that most of the equalities
proposed in these approaches can be derived (up to some syntactic
differences) from the axioms for stable uniform call-by-value fixpoint
operators with the dinatural and diagonal (or Bekić) properties; detailed
comparisons, however, are left as future work.
In a related but different direction, Erkök and Launchbury [3] propose
graphical axioms for reasoning about recursion with monadic
effects in lazy functional programming languages. Friedman and Sabry
[9] also discuss recursion in such settings ("unfolding recursion"
versus "updating recursion") and propose an implementation of "updating
recursion" via a monadic effect. Although these approaches share
the same underlying semantic structure as the present work, the
problems considered are of a rather different nature and it is not clear
how they can be compared with our work.
8.2. Relating recursion in call-by-name and call-by-value
The results reported here can be nicely combined with Filinski's duality
[4] between call-by-value and call-by-name languages with first-class
control primitives. In his MSc thesis [15], the second author demonstrates
that recursion in the call-by-name λμ-calculus [20] exactly corresponds
to iteration in the call-by-value λμ-calculus via this duality,
by extending Selinger's work [25]. Together with the results in this
paper, we obtain a bijective correspondence between call-by-name recursion
and call-by-value recursion (both subject to suitable uniformity
principles):

Recursion in cbn λμ-calculus ≅ Iteration in cbv λμ-calculus ≅ Recursion in cbv λμ-calculus

which seems to open a way to relate the reasoning principles for recursive
computations under these two calling strategies.
Acknowledgements
We thank Shin-ya Katsumata and Carsten Führmann for helpful discussions
and their interest in this work, and the anonymous reviewers
of FoSSaCS 2001 and this submission for numerous insightful suggestions.
Part of this work was done while the first author was visiting
the Laboratory for Foundations of Computer Science, University of
Edinburgh.
--R
Iteration Theories.
New foundations for
Declarative continuations: an investigation of duality in programming language semantics.
Recursion from iteration.
Controlling
Axiomatic Domain Theory in Categories of Partial Maps.
The essence of compiling with continuations.
Recursion is a computational effect.
Models of Sharing Graphs: A Categorical Semantics of let and letrec.
Sound and complete axiomatisations of call-by-value control operators
Duality between Call-by-Name Recursion and Call-by-Value Iteration
Introduction to Higher Order Categorical Logic.
Computational lambda-calculus and monads
Notions of computation and monads.
Premonoidal categories and notions of computation.
Reasoning about programs in continuation-passing style
Control categories and duality: on the categorical semantics of the lambda-mu calculus
Recursive types in Kleisli categories.
Complete axioms for categorical
Categorical Structure of Continuation Passing Style.
Using a continuation twice and its implications for the expressive power of call/cc.
--TR
--CTR
Yoshihiko Kakutani , Masahito Hasegawa, Parameterizations and Fixed-Point Operators on Control Categories, Fundamenta Informaticae, v.65 n.1-2, p.153-172, January 2005
Atsushi Ohori , Isao Sasano, Lightweight fusion by fixed point promotion, ACM SIGPLAN Notices, v.42 n.1, January 2007
Carsten Führmann , Hayo Thielecke, On the call-by-value CPS transform and its semantics, Information and Computation, v.188 n.2, p.241-283, 29 January 2004
Martin Hyland , Paul Blain Levy , Gordon Plotkin , John Power, Combining algebraic effects with continuations, Theoretical Computer Science, v.375 n.1-3, p.20-40, May, 2007 | continuations;iteration;categorical semantics;recursion;call-by-value |
609316 | Challenges and Solutions to Adaptive Computing and Seamless Mobility over Heterogeneous Wireless Networks. | Recent years have witnessed the rapid evolution of commercially available mobile computing environments. This has given rise to the presence of several viable, but non-interoperable wireless networking technologies each targeting a niche mobility environment and providing a distinct quality of service. The lack of a uniform set of standards, the heterogeneity in the quality of service, and the diversity in the networking approaches makes it difficult for a mobile computing environment to provide seamless mobility across different wireless networks. Besides, inter-network mobility will typically be accompanied by a change in the quality of service. The application and the environment need to collaboratively adapt their communication and data management strategies in order to gracefully react to the dynamic operating conditions.This paper presents the important challenges in building a mobile computing environment which provides seamless mobility and adaptive computing over commercially available wireless networks. It suggests possible solutions to the challenges, and describes an ongoing research effort to build such a mobile computing environment. | Introduction
Recent years have witnessed explosive growth in the field of mobile computing, resulting in the
development of mobile computing environments which can provide a very respectable level of
computing and communications capabilities to a mobile user. Unfortunately, one consequence
of this rapid evolution has been the emergence of several viable, but non-interoperable wireless
networking technologies, each targeting a specific operation environment and providing
a distinct quality of service (QoS). The lack of a single set of standards, the heterogeneity
in QoS, and the diversity in networking approaches makes it difficult for a mobile computing
environment to provide seamless mobility across different wireless networks.
Since inter-network migration may result in significant changes in bandwidth or other QoS
parameters, adaptive computing is necessary in a mobile computing environment which provides
seamless mobility over multiple wireless networks. A graceful reaction to sudden QoS changes
can only happen when both the environment and the applications collaboratively adapt to the
dynamic operating conditions. Seamless mobility and adaptive computing are thus related and
complementary goals.
The ideal scenario in mobile computing is the following: a mobile user carries a portable
computer with one or more built-in wireless network interfaces (ranging from O(Kbps) CDPD
WMAN to O(Mbps) 2.4 GHz WLAN), and moves around both indoors and outdoors while
executing applications in a uniform working environment (oblivious of the dynamics of the
underlying inter-network migration). In order to achieve this scenario, the mobile computing
environment needs to provide at least the following functionality:
- seamless mobility across different wireless networks.
- graceful adaptation to dynamic QoS variations.
- a uniform framework for executing applications across diverse underlying environments.
- backbone support for mobile applications.
- seamless recovery in the presence of failures.
In summary, the goal of a mobile computing environment is to provide the illusion of a uniform
working environment (with possibly dynamically varying QoS) on top of underlying heterogeneous
technologies.
Inherent limitations of wireless and imposed limitations of the state-of-the-art technology
currently restrict the realizability of the provisions above (example of the former: the bandwidth
of a wireless MAN is one to two orders of magnitude lower than a wireless LAN; example of the
latter: switching between RAM and CDPD may require the portable computer to reboot). The
focus of this paper is to distinguish the inherent scientific limitations from the state-of-the-art
limitations, and identify the challenges which need to be solved in order to approximate the
ideal scenario.
In contemporary research, there is no working implementation of a mobile computing environment
which provides seamless mobility on top of heterogeneous commercial networks.
Several critical research issues in both seamless mobility and adaptive computing have not
been addressed, or even identified. In order to understand the challenges in this field, and
also experiment with some preliminary solutions, we started building the PRAYER 1 mobile
computing environment, which provides a platform for seamless mobility and adaptive computing
on top of diverse commercially available wireless networks. We achieve seamless mobility
by providing the 'software glue' to hold together diverse commercial networks, and build an
adaptive computing framework on top of the seamless network. Three important components
of PRAYER relevant to this paper are connection management, adaptive file system support,
and OS/language support for application-level adaptation. The connection manager enables
seamless mobility over diverse networks and provides coarse estimates of available network
QoS. The file system supports application-directed adaptation for partially connected opera-
tion. PRAYER provides simple OS/language support to structure applications in an adaptive
environment. Preliminary results indicate that PRAYER offers an effective environment for
adaptive computing and seamless mobility over commercial networks.
Our system is named PRAYER after the most popular type of wireless uplink transmissions to an infinite
server.
Figure 1: Mobile Computing Model (a portable computer with interfaces to wireless network 1 and wireless network 2, a virtual point-to-point link to the home computer on the wired backbone, and application servers).
The rest of the paper is organized as follows. Section 2 describes the target mobile computing
model. Section 3 identifies the research goals. Section 4 explores the research challenges
and evaluates different approaches. Section 5 describes the PRAYER mobile computing envi-
ronment. Section 6 summarizes related work, and Section 7 concludes the paper.
2 Mobile Computing Model
Our mobile computing model is fairly standard, as shown in Figure 1. Five components are of
relevance: (a) the portable computer, equipped with multiple wireless network interfaces; (b) the home
computer, the repository of data and computing resources for a mobile user on the backbone
network; (c) application servers, e.g. database servers, file servers, etc.; (d) wireless networks,
typically autonomously owned and managed; and (e) the backbone network.
In our environment, the following points are distinctive.
1. A wireless network may be autonomously owned and operated. It may use proprietary
network protocols. Its base stations will not be accessible for software changes to support
seamless mobility. Therefore, it is important to operate on top of off-the-shelf commercial
networks.
2. The wireless network may vary in its offered quality of service (QoS) (Figure 2) and can be
parametrized by the following: bandwidth, latency, channel error, range, access protocol,
connection failure probability, connection cost, pricing structure and security.
3. The portable computer may vary in sophistication from a PDA to a high-end notebook,
and can be parametrized by the following: compute power, memory, available network
interfaces, disk space, available compression techniques, battery power and native OS.
4. The dynamics of the underlying environment are primarily influenced by user mobility.
A portable computer will see a variable QoS networking environment depending on its
current connections (disconnection is the extreme case).
In the PRAYER model, the home computer plays an important role. It runs the application
stubs and file system clients on behalf of the portable. Essentially, it performs the role of a
client with respect to the application servers, and the server with respect to the portable.
Figure 2: Comparison Chart of Wired/Wireless Networking Technologies (range, bandwidth, access latency, medium access protocol, and cost for Ethernet, 2.4 GHz radio WLAN, infrared, point-to-point links, CDPD, cellular modem, RAM radio modem, and satellite).
In this model, the goal of seamless mobility is to establish a 'virtual point-to-point' connection between
the portable computer and the home computer over multiple wireless networks. Likewise, the
adaptive computing solutions focus on the consistency management issues between the home
and the portable. Though restrictive, this model simplifies the systems architecture while still
allowing us to study the issues in seamless mobility and adaptive computing.
3 Research Goals
The fundamental goal is to provide a mobile computing environment which enables a portable
computer with multiple wireless networking interfaces to seamlessly move between the different
proprietary networks without disturbing the application, and to provide systems support for
applications to adapt to dynamic QoS variations gracefully. The technical goals fall into two
classes: service goals - what services need to be provided to the applications, and system goals
- what systems issues need to be solved to provide the services.
3.1 Service Goals
Three major types of activity from the portable computer include computation, communication,
and information access. The mobile computing environment should provide services for each of
the above within a uniform framework independent of the dynamics of the underlying system.
Computation: Delegating compute intensive and non-interactive communication-intensive
tasks to a backbone computer saves critical wireless bandwidth, and also portable compute
cycles. Remote agent invocation at the backbone is a general and flexible mechanism to provide
backbone computation services [19].
Communication: Efficient, medium-transparent, secure communication is a primary service
requirement. Since a portable computer may have multiple wireless connections, the efficient
choice for communication depends on the application and network characteristics (e.g. CDPD
for short bursts of data, cellular modem for periodic data traffic). Medium-transparent communication
requires seamless migration [4] (with possible notification of QoS change to appli-
cations) between autonomously managed and potentially non-interoperable networks.
Information access: Filtering information destined for the portable at the backbone network
saves wireless bandwidth. This is traditionally achieved by application-level filtering [40]. An
alternative approach, which we pursue, is to have applications use the file system as a filtering
mechanism by imposing different semantics-driven consistency policies.
Uniform framework: User mobility, migration between wireless networks, and network fail-
ures/partitions cause the underlying environment to change dynamically. The mobile computing
environment should provide a uniform framework for the abstraction of the current
capabilities of the environment, and a simple mechanism for applications to be notified upon
change in the environment.
3.2 System Goals
This section identifies the systems goals needed to support the services listed in Section 3.1
within the constraints imposed by the mobile computing environment in Section 2.
Seamless mobility: Medium-transparent data transport in the presence of network connec-
tions/disconnections requires seamless migration between the cells of a wireless network (com-
mon case), and between autonomously managed wireless networks with possibly very different
QoS (general case).
Multiple connections: A portable computer may have simultaneous access to multiple wireless
networks. Depending on the type of application data traffic, required QoS, and priority of
transmission, the mobile computing environment should be able to make an appropriate choice
of wireless network for data transmission. The portable should thus be provided with the ability
to use different wireless networks for different types of application traffic.
Caching and consistency mechanism: Since the wireless medium is a scarce resource,
caching data at the portable may significantly reduce access time and communication traffic,
thereby improving performance and reducing cost. However, caching also introduces data
consistency issues. While contemporary research has typically concentrated on caching and
consistency issues for disconnected operation, partial connection (with varying degrees of QoS)
will soon become a common mode of operation. Caching and consistency policies should adapt
with the network QoS. We argue for application-directed consistency policies and simple APIs
for applications to interact with the mobile computing environment in order to enforce these
policies.
Seamless recovery: When network connections fail, it may be possible to use alternative
redundant network connections in order to recover from primary link failure. Such recovery
should be seamless, but may still change the caching/consistency policies due to a change in
QoS. Previously cached data needs to be reintegrated according to the revised consistency poli-
cies. Recovery of essential state and the mechanism for migration between different consistency
policies should be transparent to the application.
Backbone support: Providing agent invocation and application-directed caching/consistency
policies requires backbone support. We suggest the use of a dedicated home computer in order
to provide the backbone support for executing agents, application stubs, and file system clients.
Although this approach may induce greater overhead, it reduces the consistency management
problem to just keeping the portable consistent with its home, and permits supporting different
consistency policies on the backbone.
Application support: In order to effectively adapt to QoS changes in the network, applications
need to make 'aware' decisions [31, 39]. The trade-off between providing applications
the flexibility to adapt based on context-awareness, and introducing complexity in the application
logic in order to handle such adaptation, is a delicate one. Ideally, the applications will
control the policies for consistency management of their data, while the system will provide
the mechanisms to realize these policies. The split between policies and mechanisms will enable
the applications to adapt to the dynamic QoS while not being concerned with the actual
mechanisms of imposing consistency.
4 Research Challenges and Solutions
The mobile computing environment consists of five entities: portable computers, home com-
puters, application servers, wireless networks, and the backbone wired network. The wireless
network is typically proprietary, and the base station is thus inaccessible for customization to
support seamless mobility. The service goals in Section 3.1 all require some form of backbone
support. The home computer is the natural entity to provide this support, since we cannot
provide systems software support on the base stations or mobile service stations 2 . The system
architecture at a very high level is thus fairly obvious: the computing entities are the portable
computer (one per user), home (one per user) and application servers (one per application) 3 .
The home and the portable computer are connected by a dynamically changing set of network
connections with diverse QoS - each with a wireless and wired component. The home and application
servers are connected via the backbone network, and application servers are oblivious
of user mobility. The home interacts with the application server on behalf of the portable. The
home and the portable together provide the functionality required by the goals in Section 3.
Within the above framework, a number of research challenges need to be addressed. We
classify them into three broad areas:
- application structure
- seamless mobility and management of multiple connections
2 Henceforth, the term base station generically refers to the backbone agent which supports mobility in a
network.
3 In practice, an application server may serve several applications, and a home computer may be a dedicated
computer from which mobile users can lease home service.
- adaptive computing and consistency management
4.1 Application Structure
Applications need to adapt to the dynamic network conditions because the mobile computing
environment has no knowledge of the application semantics. While the environment can
perform some application-independent adaptation (such as compression, batching writes, etc.),
semantics-dependent adaptation - such as deciding which part of the data is critical and needs
to be kept consistent over a low QoS network - needs to be performed by the application. The
application structure thus depends on the type of adaptation support provided by the environment
to applications. In particular, two issues are of interest: (a) whether the environment
notifies the applications of QoS changes, and (b) whether the environment provides support for
notification handling and dynamic QoS negotiation.
4.1.1 Transparency versus Notification
The major source of problems in our mobile computing environment is the dynamics of the
network connections. Due to user mobility, connections may be set-up/torn-down, typically
accompanied with QoS changes. Even within the same network, mobility may cause a user
to move from uncongested cell to a congested cell. The issue is whether, and how, to provide
a framework for applications to react to the dynamic network QoS. There are three broad
approaches to this problem:
1. Applications are provided with a seamless mobile computing environment without notifications
of QoS change. This is consistent with a purely networking solution to seamless
cross-network migration, and does not work well when QoS changes by orders of magnitude
(e.g. indoor to outdoor mobility).
2. Applications are allowed to dynamically (re)negotiate QoS with the network. If the pre-negotiated
QoS is violated by the network, the application is notified. The onus of QoS
negotiation and reaction to notifications is on the applications. Several emerging adaptive
computing solutions fit into this approach[31, 39].
3. Applications specify a sequence of acceptable QoS classes, the procedure to execute if
a QoS class is granted, and the exception handling policy to execute if the network
violates its QoS contract. The mobile computing environment handles the dynamic QoS
(re)negotiation and reaction to notification. If the network notifies the application of a
failure to deliver a pre-negotiated QoS class, the system executes the exception procedure
and then renegotiates a lower acceptable QoS class from the list. PRAYER follows this
approach, as described in Section 5.
4.1.2 Adaptation Support
The application structure will depend on the level of adaptation support provided by the mobile
computing environment. In case the QoS changes are transparent to the application, no special
support needs to be provided for application-level adaptation. However, 'aware' applications
will adapt better to the dynamics of the environment [39]. Support for QoS-awareness in
mobile computing environments is a very complex task. While there are systems which provide
application-level reaction to measured coarse-grain QoS, we are not aware of a working system
which supports dynamic QoS (re)negotiation by the application. Even application-level support
for reaction to QoS-notification (upon migration of networks, for example) is a challenging task.
There are three broad approaches:
1. The mobile computing environment provides the measured QoS in a structure which can
be retrieved by applications. Applications essentially poll the environment periodically
in order to retrieve current QoS value and then adapt accordingly.
2. The mobile computing environment provides the measured QoS in a structure which can
be retrieved by applications. In addition, an application registers with the environment
and specifies acceptable QoS bounds. If the QoS bounds are violated, the application is
notified, and may then adapt accordingly [31].
3. The application program is split into ranges separated by QoS system calls. In each call,
the application specifies a sequence of acceptable QoS classes, the procedure to execute if
each class is satisfied, and the exception handling if the measured QoS class changes. In
this case, notification is handled by the exception handling procedure, which may pursue
one of four actions: block, abort, rollback, or continue. Section 5 describes this approach
further.
4.2 Seamless Mobility and Management of Multiple Connections
At any time, a portable computer may have access to multiple wireless networks. The appropriate
choice for the wireless network for data transport depends on the application traffic
characteristics, wireless network characteristics, priority of transmission, and cost (making this
trade-off is in itself a challenging task). Migration between networks may be induced by a
variety of reasons, such as user mobility, network failure/partition, or application-dependent
trade-offs. Seamless mobility requires that when connections are broken or established, the
process of switching between networks should be done transparently (with possible QoS notifi-
cations).
An important distinction between the considerations of seamless mobility in this paper
and related work [4, 26] is that we address the issue of providing seamless mobility over autonomously
owned and operated, possibly non-interoperable wireless networks. This imposes
the constraint of not being able to access or modify base station software.
There are four levels of mobility, as described below.
1. Handoff within the organization and the network: a mobile user moves between two cells
of the same service-provider within a network (e.g. handoff between two RAM cells).
2. Handoff between organizations but within the same network: a mobile user switches
between service-providers on the same network (e.g. switches carriers while using the
same cellular phone line).
Figure 3: Network Support for Seamless Mobility — (a) standard Mobile-IP handoff with backbone agents; (b) network-layer support for inter-network mobility with per-network drivers; (c) inter-network mobility handled by a connection manager above the transport layer between the mobile host (MH) and the home.
3. Migration within the organization but between networks: a mobile user switches between
networks of the same service-provider (e.g. switches between cellular phone and CDPD
of same carrier).
4. Migration between organizations and networks: a mobile user switches both networks and
service-provider (e.g. switches between locally owned WLAN to RAM).
Handoff between the cells of the same network and organization is the common case, and has
been solved at the network layer [25]. Handoff between cells of different organizations introduces
the problem of authentication and accounting [9]. Handoff across the cells of different networks
and organizations is the general case, and is addressed in this paper. In the general case, the
following constraints apply: (a) a mobile user may migrate between different networks which
are owned and operated autonomously, (b) the networks may use different protocol stacks
(typically the lower 3 layers of the stack), (c) mobile service stations for the different networks
will not communicate with each other directly for mobility support, and (d) each network will
require a unique address for the computer.
Several important issues arise in this case: (a) how to structure the network, (b) when to
migrate between networks and (c) how to make application-related trade-offs. Authentication,
accounting and security in inter-network mobility are other major issues, but are not discussed
in this paper. A discussion of these issues, and a preliminary solution are proposed in [9].
4.2.1 Network Structure
Figure 3 shows alternative network structures for handling mobility.
Figure 3.a shows the standard Mobile-IP structure which supports handoffs between two
cells of the same network. Agents provide backbone support for mobility. Performance enhancements
may include multicasting between the old and new base station in order to reduce
handoff latency [10], and snoop caches in base stations in order to eliminate transport layer
retransmissions [8]. This network structure uses a single network address and only considers
handoff between cells of the same network.
Figure 3.b shows a network structure which supports inter-network mobility at the network
layer by enhancements through the Mobile-IP protocol. It uses different drivers for the different
networks, and provides a network level solution to the mobility problem [5]. It is still possible
to provide some notion of QoS, by having the network layer measure the QoS over each net-work
and then propagate it to higher layers. Likewise, application-level trade-offs and choice
of the wireless network can be achieved by informing the higher layers of the wireless networks
currently accessible to the portable computer. We are not, however, aware of any implementation
which provides adaptation support over a network level solution for seamless mobility
across different wireless networks. While network level solutions may be the eventual goal for
seamless mobility, they do involve making changes at the base station and cooperation between
the different networks. In the current scenario, providing seamless mobility over autonomously
owned and operated wireless networks is not possible using this approach.
Figure 3.c shows a network structure which handles inter-network mobility at the transport
layer and connection manager in the context of the PRAYER model. The connection manager
maintains individual connections for each network, and can make the choice of network for
data transport. The connection manager can also measure the end-to-end QoS parameters
for each network between the portable and the home. The advantage of this architecture is
that the choice of the wireless network for data transport is made outside the network. Thus
seamless mobility is provided without the necessity of software support at the base stations.
There are two advantages in not requiring the base stations of different networks to co-operate
in order to provide seamless mobility across the networks: (a) many autonomous networks follow
their own protocol standards, and (b) the need for authentication and trust between the base stations of
different organizations is avoided. The disadvantage is performance degradation. PRAYER
uses a similar solution, but it merges the transport layer and the connection manager.
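As a purely illustrative sketch (the type and field names are assumptions, not PRAYER's actual data structures), the per-network state such a connection manager might maintain could look like this in C:

#include <stdint.h>

/* Hypothetical per-network state kept by a connection manager sitting above
 * the transport layer at the portable and at the home.  One entry exists per
 * wireless interface; the manager decides which entry carries each piece of
 * application traffic and keeps coarse end-to-end QoS estimates per network. */
struct net_conn {
    const char *name;        /* e.g. "wlan0" (2.4 GHz WLAN), "cdpd0", "cellmdm0" */
    int         sock_fd;     /* transport connection to the home over this network */
    int         up;          /* nonzero if the wireless link is currently usable */
    uint32_t    est_bw_bps;  /* measured end-to-end bandwidth estimate to the home */
    uint32_t    est_rtt_ms;  /* measured end-to-end latency estimate to the home */
    int         priority;    /* application-supplied preference (0 = most preferred) */
};

struct conn_manager {
    struct net_conn nets[4]; /* one slot per available wireless interface */
    int             num_nets;
};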
4.2.2 Choice of Wireless Network for Data Transport
Wireless networking resources being scarce and expensive, the choice of wireless network for
data transport can significantly affect the performance and cost of an application. Typical
network related trade-offs include bandwidth, delay, medium access patterns, security, and
pricing structure. Primarily due to bandwidth considerations, the choice of networks is typically
wire, indoor wireless and outdoor wireless in descending order of preference. The interesting
trade-offs happen in the outdoor wireless networks, where access patterns and pricing structure
play important roles. For example, CDPD and RAM support packetized data transport and are
susceptible to much larger bursts of throughput, delay and jitter than cellular modems, which
have periodic medium access patterns. If charged by data size, short packet bursts can cost
up to an order of magnitude more in cellular modems as compared to RAM, while large data
transmissions can cost an order of magnitude more in RAM than cellular modems. However,
several packet data wireless providers also support monthly rates, in which case the pricing
trade-off is irrelevant. Security is another important issue. For example, cellular modems are
insecure while CDPD offers 6 levels of security.
We are not aware of any good solution to the problem of selecting a medium for transport
based on an application-directed tradeoff. In PRAYER, the current approach is to pre-specify a
descending order of network choices based on raw data rates, and try to satisfy the application
connection request through the 'best' available network. Clearly, this is inadequate, because
the dynamically available data rate may be significantly lower than the raw data rate. Besides,
bandwidth is an important, but by no means only criterion for network selection.
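For illustration only, the simplistic policy described above (a pre-specified descending preference order, first available network wins) could be sketched as follows; a fuller policy would also weigh access pattern, pricing, and security. The function name and calling convention are hypothetical, not PRAYER's actual interface:

#include <stddef.h>

/* Pick the first currently usable network from a caller-supplied preference
 * list.  'preferred' holds interface indices in descending order of
 * preference; 'usable[i]' is nonzero if interface i currently has
 * connectivity.  Returns the chosen interface index, or -1 if disconnected. */
int choose_network(const int *preferred, size_t n, const int *usable)
{
    for (size_t i = 0; i < n; i++) {
        int idx = preferred[i];
        if (usable[idx])
            return idx;
    }
    return -1;
}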
4.3 Adaptive Computing and Consistency Management
Seamless mobility across different wireless networks is typically accompanied by a change in
QoS. For example, mobility from indoor to outdoor wireless networks may result in a bandwidth
decrease by two orders of magnitude. In order to provide a graceful degradation of the operating
environment, mechanisms for system level and application level adaptation are necessary.
This section explores the issues in caching and consistency management for adaptive mobile
computing environments.
Caching data and meta-data at the portable computer reduces access time and offered wireless
traffic, but introduces a related problem of consistency when multiple copies of shared data
are maintained. In mobile computing environments, disconnection is always a possibility. Thus
most approaches to caching 'hoard' data aggressively, and allow the mobile user to manipulate
the cached copy at the portable when disconnected. The modified data is reintegrated
with the server copy upon reconnection, and update conflicts are typically reconciled by human
intervention in the worst case [27]. This approach is suitable for disconnected operation on
mostly private data. Given the increasing availability of wide area wireless connectivity and
also the potential for collaborative applications in mobile computing, there arise four scenarios
for caching and consistency management of data.
1. Disconnected operation on private data
2. Disconnected operation on shared data
3. Partially connected operation on private data
4. Partially connected operation on shared data
In the above classification, indoor and outdoor wireless network connectivity are termed as
'partially connected' because (a) disconnections are possible at any time, (b) network errors
are orders of magnitude higher than on wire, and (c) the significant cost associated with data
transmission over the scarce resource may induce voluntary intermittent connectivity on the
part of the mobile user.
Distributed file systems which support disconnected operation typically assume that most
of the data is private and unshared. The general approach in this case is to hoard data while
in connected mode [27, 42]. Just prior to voluntary disconnection by the user, explicit user-directed
hoarding is allowed. Once disconnected, the user is allowed to access and update the
hoarded local copy, and all update operations are logged. Upon reconnection, the hoarded files
are checked into the server. Update conflicts are resolved by log replay and in the worst case,
user intervention [27, 34].
While file systems assume that most of the files are private (user data) or shared read-only
(program binaries), this is not true of other data repositories such as databases. In such
cases, update conflicts upon reconnection cannot be assumed to happen rarely, and automatic
mechanisms for update conflict resolution need to be provided [15]. The possibility of conflict
resolution voiding a previously concluded transaction also gives rise to the notion of provisional
and committed transactions. Thus distributed databases which support disconnected operation
must also support multiple levels of reads and writes.
Related work has often assumed the two extreme modes of operation - connection or dis-
connection. The emergence of wide area wireless networks provides intermediate modes of
connection - where communication is expensive, but possible. Partial connection is particularly
useful in two situations in the context of data management: (a) when files which were not
hoarded in the connected mode are required, and (b) when certain parts of the application data
need to be kept consistent with the data on the backbone server. In an environment which
supports seamless mobility over heterogeneous wireless networks, the QoS may change dynam-
ically. Thus the caching and consistency policies need to adapt to change in the network QoS
in order to efficiently exploit the benefits of partial connectivity without incurring a significant
communication cost.
In our model, the home retains the 'true' copy of the cached data at any time. Since the home
is in a connected mode on the backbone, it can execute any application dependent consistency
policy on the backbone. Essentially, the onus of keeping its cached data consistent with the
home is on the portable computer. This approach is at variance with contemporary schemes
such as Coda [27] or Bayou [15], which do not have an intermediate home computer. The
advantage of having a home that is known to maintain the true version of the data is twofold:
(a) the consistency management in the mobile computing environment is now restricted to
two known endpoints connected by a variable QoS network, and (b) distributed applications
which support different types of consistency policies on the backbone can be supported, since
the applications on the backbone need only care about keeping their data consistent with the
home computer. The disadvantage of this model is poor availability - as far as the portable is
concerned, the home is the only server for its cached data.
Within this framework, a number of caching and consistency issues arise: (a) what data
should be hoarded, (b) how consistency will be maintained, and (c) how applications will
interact with the mobile computing environment in order to adapt the consistency management
policies upon dynamic QoS changes. These issues are discussed below in the context of file
system support for partially connected operation.
4.3.1 Hoarding Policy
What to hoard is a non-trivial question in mobile computing. The factors involved are: (a)
the nature of the data - ownership, mutability, and level of consistency, (b) currently available
network QoS, (c) portable computer characteristics - available disk and battery power, and (d)
predicted future connection or disconnection.
Privately owned data or read-only data can be cached without involving any communication
overhead for consistency management. Cached shared read/write data may involve high
communication overhead for consistency management, depending on the type of consistency
guarantees provided on shared data. Data which is loosely consistent or data which is not
modified often can be cached with low overhead. Based on the these observations, a user who
voluntarily plans on initiating a disconnection or migration to a low QoS network (e.g. indoor
to outdoor) may choose to hoard private and read-only files, and flush the dirty cached data for
shared read/write files. Most of the caching decisions mentioned above are highly application
dependent. Ideally, the mobile computing environment will be smart enough to provide some
assistance in predictive caching [28, 42]. For example, a request for caching an application
binary will also cache the files it has frequently accessed during its previous executions (e.g.
resource files) [42].
The current approach in PRAYER is simplistic - caching files which have been accessed in
the recent past, and allowing the user to explicitly select files for caching. The fact that partial
connectedness (as opposed to disconnectedness) is the common mode of operation reduces the
negative impact of such a simple predictive caching approach.
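The qualitative pre-migration policy sketched above (hoard private and read-only data, flush dirty shared read/write data before moving to a low-QoS network) could be written down as follows; the enumerations and the function are hypothetical illustrations, not part of PRAYER:

/* Illustrative policy: before a planned migration to a low-QoS network or a
 * disconnection, hoard data that is cheap to keep (private or read-only) and
 * flush dirty shared read/write data so that consistency traffic is not paid
 * for over the expensive link. */
enum file_kind    { FILE_PRIVATE, FILE_READ_ONLY, FILE_SHARED_RW };
enum hoard_action { HOARD, FLUSH_THEN_DROP, SKIP };

enum hoard_action pre_migration_action(enum file_kind kind, int dirty)
{
    switch (kind) {
    case FILE_PRIVATE:
    case FILE_READ_ONLY:
        return HOARD;                            /* no consistency traffic needed later */
    case FILE_SHARED_RW:
        return dirty ? FLUSH_THEN_DROP : SKIP;   /* avoid costly reintegration later */
    }
    return SKIP;
}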
4.3.2 Caching Granularity
The tradeoff in caching granularity is that a large grain size may induce false sharing while a
small grain size may induce higher processing overhead at the portable and the home. There
are three broad alternatives for the grain size of caching:
1. Whole file caching: The whole file is cached upon file open. In most current distributed
file systems which support disconnected operation [20, 23, 27], whole files are cached
during connection.
2. Block caching: Caching is at the granularity of file system blocks. A variation of block-size
caching is to have applications vary the block size, depending on the available network
QoS.
3. Semantic record caching: An application imposes a semantic structure on its files. The
semantic structure is contained in a pre-defined template, which specifies record and field
formats in terms of regular expressions. The application is allowed to specify (by means
of a file system interface) the cache size to be per-record level or per-field level. The
advantage of this scheme is that it allows an 'aware' application to adapt its caching
granularity to both the network QoS and application-semantics. The disadvantage is
the increased complexity in caching/consistency management. The PRAYER caching
approach implements a variant of this approach.
An important point to note in both application-defined block level caching and semantic record
caching is that the caching/consistency is being done here between the home and the portable.
In the absence of a home, different portable clients may have different cache granularities, which
will make providing consistency management incredibly hard for the distributed file system. In
our model, the distributed file system provides whatever consistency policy it may choose to,
with respect to the home. The block and semantic record caching schemes refer to the ways
the portable keeps itself consistent with the home.
4.3.3 Consistency Management
One of the advantages of partial connectedness is the option of providing a variable level of
consistency on a whole file or parts of it. File systems which support disconnected operation
must inevitably provide some form of session semantics, wherein a disconnection period is
treated as a session. Since disconnection is a special case of partially connected operation, the
consistency management for partially connected operation must reduce to session semantics in
the event of disconnection.
PRAYER supports semantic record caching and consistency. An aware application opens
a file and imposes a template structure on it. The open file is thus treated as a sequence of
(possibly multi-level) records. An application can specify certain fields in all records, or certain
records in the file to be kept consistent with the home. For each cached data element, there
are two possible consistency options: reintegrate and invalidate. Reintegration keeps data con-
sistent, but requires communication between the home and the portable in order to propagate
updates. Invalidation tolerates inconsistencies, but requires no communication. Depending
on the available QoS, the application can dynamically choose to reintegrate or invalidate each
record or field. Note, that invalidate still allows the application to access the local copy, but
with no guarantees on consistency.
In connected mode, the whole file is kept in the reintegration mode (between the portable
and the home). In disconnected mode, the whole file is kept in the invalidation mode. In
partially connected mode, the application has the flexibility to keep critical fields or records
consistent while accepting inconsistencies for the rest of the file.
In addition to maintaining consistency on certain parts of the file, there also needs to be
support for explicit consistent reads and writes. PRAYER supports two types of reads: local
read and consistent read. A local read is the default operation, and reads the local copy. A
consistent read checks the consistency between the portable and home, and retrieves the copy
from the home if the two copies are inconsistent. PRAYER supports three types of writes: local
write, deferred write, and consistent write. A local write updates only the local copy. A deferred
write batches updates. A consistent write flushes the write to the home. In disconnected
mode, consistent read and write return errors.
Two simple examples illustrate the operation of the application-directed adaptive consistency
management.
- Calendar: If a user goes on a trip and maintains a distributed calendar with the secretary,
the user would keep the 'time' and `place' fields of appointments in reintegration mode,
but the 'content' field in invalidation mode. This will enable the user and the secretary
to prevent scheduling conflicts, though the content fields of the appointments may not be
consistent.
- Email: If a user goes on a trip, the email application could keep the 'sender' and `subject'
fields in reintegration mode, but the 'content' field in invalidation mode. If the user wants
to read a particular email, an explicit 'consistent read' will be issued in order to access
the contents (a code sketch of this usage follows below).
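The following is a hedged sketch of the email example against the interfaces described in Section 5. Only the call names pconsistency() and pread() come from the paper; the prototypes, the p_open_template() helper, the template file name, and the mode constants are hypothetical stand-ins for illustration:

#include <stddef.h>

enum p_mode { P_REINTEGRATE, P_INVALIDATE };   /* hypothetical consistency modes */

/* Hypothetical prototypes; the real signatures may differ. */
extern int p_open_template(const char *path, const char *template_path);
extern int pconsistency(int fd, const char *field, enum p_mode mode);
extern int pread(int fd, int record, const char *field, void *buf, size_t len);

int mark_mailbox(void)
{
    /* Open the mailbox and impose a record template on it. */
    int fd = p_open_template("Mailbox", "mailbox.tmpl");
    if (fd < 0)
        return -1;

    pconsistency(fd, "sender",  P_REINTEGRATE);  /* critical: always kept consistent */
    pconsistency(fd, "subject", P_REINTEGRATE);
    pconsistency(fd, "content", P_INVALIDATE);   /* cached-only over expensive links */

    /* The user opens message 42: a consistent read goes through the home
     * only if the locally cached copy is stale. */
    char body[4096];
    pread(fd, 42, "content", body, sizeof body);
    return fd;
}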
5 The PRAYER Mobile Computing Environment
Seamless mobility across diverse indoor and outdoor wireless networks is a very desirable goal,
since it will enable a user to operate in a uniform mobile computing environment anytime,
anywhere. However, lack of inter-operability standards pose a significant challenge in building
such an environment. Even if seamless mobility is provided across different networks, the
wide variation in bandwidth and other QoS parameters imply that the systems software and
application will have to collaboratively adapt to the dynamic operating conditions in order
to gracefully react to inter-network migration. Simple, yet effective mechanisms need to be
provided to applications for adaptive computing.
In order to explore the challenges in building a uniform operating environment which provides
adaptive computing and seamless mobility on top of commercially available wireless net-
works, we are building the PRAYER mobile computing environment. A preliminary PRAYER
prototype has been operational for six months, and has served as a platform to test some of our
solutions. While the focus of this paper has been to identify the major challenges and discuss
possible solutions for providing seamless mobility and adaptive computing, a brief discussion
of the PRAYER environment will serve to provide an overall context for our approach.
There are three important components in PRAYER: connection management, data manage-
ment, and adaptation management. Connection management provides seamless mobility over
multiple wireless networks, and provides the abstraction of a virtual point-to-point connection
between the portable computer and the home computer. Data management provides filtered
information access on top of a file system which implements application-directed caching and
consistency policies. Adaptation management provides language and systems support to applications
for dynamic QoS (re)negotiation and reaction to notifications by the network (though
at this point, we do not perform end-to-end QoS negotiation in the network).
5.1 Seamless Mobility across Multiple Wireless Networks
We adapt TCP/IP in order to provide support for a virtual point-to-point connection over multiple
wireless networks between the portable computer and the home computer. The portable
computer may have different IP addresses, corresponding to the different networks. Each TCP
virtual connection is identified by a 4-tuple consisting of a logical IP address for the portable,
a port at the portable, the IP address of the home, and a port at the home. The logical IP
address for the portable for a TCP connection is the IP address of the portable corresponding
to the wireless network over which the connection is first set up. Thus, the logical IP address,
which serves to identify the connection, can be different from the IP address of the wireless
network over which the portable actually transmits the packets.
All TCP connections to or from the portable pass through the home if seamless mobility
over multiple wireless networks is desired. The home is bypassed if a TCP connection over a
single wireless network is desired. In order to establish a virtual connection between the home
and the portable, the application pre-specifies the sequence of acceptable wireless networks in a
descending order of priority. When a connection request is initiated at the home or the portable,
the networks are polled for access in the descending order of priority. We define a socket interface
to applications for setting up virtual connections over multiple wireless networks. The details
of the implementation and the programming interface are described in [18].
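The socket interface itself is not specified in this paper (it is described in [18]); purely as an illustration, the identifying state of such a virtual connection, as described above, might be represented along these lines (field names are hypothetical):

#include <netinet/in.h>

/* A PRAYER-style virtual connection is named by a fixed 4-tuple using the
 * portable's *logical* IP address (the address of the network on which the
 * connection was first set up), while the interface actually carrying the
 * packets may change as the user migrates between wireless networks. */
struct virtual_conn {
    struct in_addr logical_portable_ip; /* fixed for the connection's lifetime */
    in_port_t      portable_port;
    struct in_addr home_ip;
    in_port_t      home_port;

    int            active_if;     /* wireless interface currently carrying packets */
    int            pref_order[4]; /* application-supplied descending preference list */
    int            num_ifs;
};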
5.2 Caching and Consistency Management in the File System
The file system uses the home as the server and the portable as the client, and provides strong
consistency on application-specified portions of files cached at the client (note that the home
itself may be an NFS client). The key feature of the file system is the support for application-
directed caching and consistency policies.
When an application opens a file at the portable, it imposes a template on the open file.
Basically, a template specifies the semantic structure of the file. For example, a mailbox is
a sequence of mail records, where a mail record has some pre-defined fields such as sender,
subject, content, etc. It is possible for a template to specify records with variable length
fields, optional fields, or fields appearing in different orders (all of which occur in the mailbox
template). Once the application imposes a semantic structure for a file, the file system at
the home and the portable create a sequence of objects for the file, each object representing a
record. The application may then specify a subset of the fields of every record, or a subset of the
records of the file to be kept consistent with the home through a pconsistency() system call,
which causes these fields/records to be marked at both the portable and the home. Whenever
a marked data element is updated at either the home or the portable, it is propagated to the
other entity.
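To make the mechanism concrete, the per-record bookkeeping that the file system at the home and the portable might keep once a template has been imposed could resemble the following; these are illustrative, hypothetical structures, not PRAYER's actual ones:

#include <stddef.h>

#define MAX_FIELDS 16

/* One object per record; one flag per field saying whether that field must
 * be reintegrated (kept consistent with the peer) or may be served from the
 * local cache (invalidate mode).  An update to a marked field at either the
 * home or the portable is propagated to the other end. */
struct field_meta {
    const char *name;        /* e.g. "sender", "subject", "content" */
    int         reintegrate; /* 1: propagate updates to the peer; 0: cached-only */
    int         dirty;       /* 1: locally modified since the last propagation */
};

struct record_obj {
    long              offset;  /* byte range of this record within the file */
    long              length;
    struct field_meta fields[MAX_FIELDS];
    size_t            num_fields;
};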
In addition to requiring consistency on parts of the file, the application may also perform
explicit consistent reads and writes (through the pread() and pwrite() system calls), which
basically read/write through the home if the data at the portable is not consistent.
The goal of the PRAYER file system is to facilitate adaptive application-directed consistency
policies, while shielding the application from the mechanisms of keeping parts of the file
consistent. When the portable is fully connected, the whole file may be kept consistent with the
home (i.e. reintegration mode). When the portable disconnects, the whole file now operates
in cached-only (invalidation) mode. When the portable has connectivity through wide area
wireless networks and communication is expensive, only the critical parts of the file are kept
consistent, and the rest of the file is accessed in a cached-only mode. Supporting this level of
adaptation has a definite penalty in terms of file system performance, though we are yet to
perform a quantitative evaluation of the overhead. A detailed description of the file system
design and implementation is available in [16].
5.3 Application Support for Adaptation
We provide simple language and OS support for modifying existing applications or building
new applications in our adaptive mobile computing environment. We classify QoS requests into
commonly used QoS classes. A program is divided into regions, and may explicitly initiate QoS
re-negotiation between the regions. Within a region, the application expects the network to
provide a fixed QoS class. If the network is unable to do so, it notifies the application, which
causes pre-specified exception handling procedures to be executed (as described below).
Exceptions are handled in one of four ways: best effort, block, abort or rollback. 'Best effort'
ignores the notification and continues with the task. 'Block' suspends the application till the
desired QoS class is available. 'Abort' aborts the rest of the task within the region and moves
to the next region. 'Rollback' aborts the rest of the task, and reinitiates QoS negotiation within
the same region. Ideally, we would like rollback to also undo the actions taken thus far in the
region before re-negotiation.
The QoS negotiation is performed by a system call getQoS(), which takes in a sequence of
options. Each option is a 3-tuple, consisting of the desired QoS class, the procedure to execute
if that class is granted, and the exception handling policy.
At the start of a region, the getQoS() call measures the network QoS, and returns the highest
desired QoS class it could satisfy. When a QoS class is granted, the corresponding procedure
is executed. If during the execution of the procedure, the network is unable to sustain the QoS
class, then the exception handling routine is invoked with the application-defined policy. In
this framework, both QoS negotiation and notification handling are supported by the system;
the application need only specify the policy, and not bother about the mechanism in order to
achieve adaptation.
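A hedged sketch of what a program region might look like against this interface follows; the option structure, the QoS class constants, and the exact getQoS() signature are assumptions, since only the call name and the 3-tuple of class, procedure, and exception policy are given in the text:

/* Hypothetical rendering of a program region bracketed by a getQoS() call. */
enum qos_class  { QOS_HIGH, QOS_MEDIUM, QOS_LOW };
enum qos_policy { QOS_BEST_EFFORT, QOS_BLOCK, QOS_ABORT, QOS_ROLLBACK };

struct qos_option {
    enum qos_class  cls;          /* desired QoS class */
    void          (*proc)(void);  /* procedure to run if this class is granted */
    enum qos_policy on_violation; /* exception policy if the class is later violated */
};

extern int getQoS(const struct qos_option *opts, int num_opts); /* hypothetical prototype */

static void sync_full(void)     { /* keep the whole file reintegrated */ }
static void sync_critical(void) { /* keep only critical fields consistent */ }
static void work_offline(void)  { /* operate on the local cache only */ }

void region_example(void)
{
    struct qos_option opts[] = {
        { QOS_HIGH,   sync_full,     QOS_ROLLBACK    },
        { QOS_MEDIUM, sync_critical, QOS_ABORT       },
        { QOS_LOW,    work_offline,  QOS_BEST_EFFORT },
    };
    /* getQoS() measures the network, grants the highest satisfiable class in
     * the list, and runs the corresponding procedure; on a later violation it
     * applies the stated policy and, for rollback, renegotiates in this region. */
    getQoS(opts, 3);
}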
We expect applications to use the adaptation support and consistency management policy in
concert. We expect that an application will initially open a file and impose a template structure
on it. Then, depending on the QoS class granted through the getQoS() call, the application
can change the fields/records it keeps consistent through the pconsistency() call. While we
have not yet written a large application in this environment, we expect that the mechanisms
for adaptation are simple, yet sufficiently powerful to support adaptive computing.
6 Related Work
Mobile computing has witnessed a rapid evolution in the recent past, both in industry and
academia. However, related work on seamless mobility and adaptive computing support for
applications has just begun to emerge. In this section, we provide an overview of contemporary
work in seamless mobility, adaptive computing, consistency management, and disconnected
operation. We also identify key work in related areas which motivated several of the design
decisions in PRAYER.
Most of the projects which provide seamless mobility across heterogeneous wireless networks
provide a network level solution. While this is a desirable goal, it will require mobile service
stations of different networks to interact and trust each other. MosquitoNet [4, 5] and Barwan
[26] address seamless mobility issues as described above.
One related project addresses data consistency in variable QoS environments: for high QoS,
caching/consistency is as in backbone networks; for low QoS, nothing is cached; for variable
QoS, there is a two-level caching scheme: on the backbone between homes, and on the wireless
network between the home and the portable. Bayou [15, 43, 44] provides a replicated
weakly-consistent distributed database to support shared data-driven mobile applications and
supports per-application consistency. Bayou deals with small-to-medium database applications
(calendars, etc) since it assumes that a large part of the database may be cached at the mobile.
Odyssey [31, 36] provides a framework for adaptive applications to react to QoS changes. The
applications can specify QoS bounds to the network. If the network is unable to satisfy these
bounds, it notifies the applications, which can then adapt to the dynamic QoS change.
Coda [27] provides disconnected access of file systems. When the portable is in connected
mode, it hoards files by periodic 'hoard walks'. Upon disconnection, the portable accesses the
cached files, and logs the updates. Upon reconnection, it checks the mutated cached files for
potential conflict, which is then resolved by the user. Disconnected AFS [23] preserves the AFS
semantics for disconnected operation. A user explicitly disconnects from the network, upon
which the callbacks are retrieved by the server. Seer [28] and MFS [42] propose sophisticated
hoarding mechanisms for disconnected operation.
PRAYER uses several ideas from current and past related work, which are mentioned below.
I-TCP [6] (intermediate host), Daedalus [8] (snoop cache) and [11] provide approaches for
efficient wireless TCP. NFS [35], Sprite [30] and Andrew [21] provide distributed file system
caching approaches, which may be used for backbone consistency. Distributed databases establish
consistency among replicated data by clustering [33], tokens [24] and partitioning [14]. [48]
discusses fundamental issues in ubiquitous and mobile computing. [45] highlights OS issues in
mobile computing.
7 Conclusions
The ideal scenario in mobile computing is when a user equipped with a portable computer with
multiple wireless interfaces roams around between different indoor and outdoor networks while
operating in a seamless computing environment which gracefully adapts to the dynamically
changing quality of service. In order to achieve this scenario, at least two important components
need to be satisfied: seamless mobility, and adaptive computing.
The emergence of several viable wireless networking technologies with different standards,
networking architectures and protocol stacks makes the problem of seamless mobility across
different wireless networks a very challenging task. Contemporary research typically proposes
network level solutions to this problem. Such a solution, though scientifically desirable, poses
some serious problems because the commercial networks cannot inter-operate. This paper
identifies the challenges in providing seamless mobility on top of commercial networks.
Mobility across wireless networks is typically accompanied by a change in QoS.
In order to react gracefully to the change in QoS, both the mobile computing environment and
the application need to collaboratively adapt to the dynamic operating conditions. Adaptation
to QoS changes introduces challenges in data management and caching/consistency. It is the
application which can best make semantics-based decisions on adaptation. However, burdening
the application with dynamic QoS negotiation and reaction to network notifications will complicate
application logic and make the environment unviable for developing real-world programs.
This paper identifies the challenges in language and operating systems support for applications
to interact with the underlying file system.
With the increasing popularity of mobile computing, the importance of a uniform mobile
computing environment providing seamless mobility across commercial wireless networks and
graceful adaptation to dynamic operating conditions cannot be overemphasized. However,
there are significant challenges which need to be overcome before such an environment can
be effectively deployed. This paper explores the issues and offers some preliminary solutions,
which are being implemented in the PRAYER mobile computing environment.
Acknowledgements
I am grateful to Dane Dwyer for providing multiple reviews of this paper and for implementing
the PRAYER file system.
--R
Structuring Distributed Algorithms for Mobile Hosts
Changing Communication Environments in MosquitoNet
Supporting Mobility in MosquitoNet
I-TCP: Indirect TCP for Mobile Hosts
Handoff and System Support for Indirect TCP/IP
Improving TCP/IP Performance over Wireless Networks
A Protocol for Authentication
The Effects of Mobility on Reliable Transport Protocols
Improving the Performance of Reliable Transport Protocols in Mobile Computing Environments
Experiences with a Wireless Network in MosquitoNet
Network Access for Personal Communications
Consistency in Partitioned Networks
The Bayou Architecture: Support for Data Sharing Among Mobile Users
Mobile File Systems for Partially Connected Operation
The Challenges of Mobile Computing
Mobility across Commercial Wireless Networks
Networks
Primarily Disconnected Operation: Experience with Ficus
Scale and Performance in a Distributed File System
Data Replication for Mobile Computers
Disconnected Operation for AFS
Data Management for Mobile Computing
The Bay Area Research Wireless Access Network (BARWAN)
Disconnected Operation in the Coda File System
The Design of the SEER Predictive Caching System
Large Granularity Cache Coherence for Intermittent Connectivity
Caching in the Sprite Network Filesystem
A Programming Interface for Application-Aware Adaptation in Mobile Computing
An Empirical Study of a Highly Available File System
Maintaining Consistency of Data in Mobile Distributed Environments
Resolving File Conflicts in the Ficus File System for a Distributed Workstation Environment
Design and Implementation of the Sun Network Filesystem
Experience with Disconnected Operation in a Mobile Computing Environment
Customizing Mobile Applications
Context Aware Computing Applications
Information Organization using Rufus
Service Interface and Replica Management Algorithm for Mobile File System Clients
Intelligent File Hoarding for Mobile Computers
Managing Update Conflicts in Bayou
Session Guarantees for Weakly Consistent Replicated Data
Operating System Issues for PDA's
Effective Wireless Communication through Application Partitioning
Application Design for Wireless Computing
Some Computer Science Issues in Ubiquitous Computing
--TR
--CTR
Dane Dwyer , Vaduvur Bharghavan, A mobility-aware file system for partially connected operation, ACM SIGOPS Operating Systems Review, v.31 n.1, p.24-30, Jan. 1997
S. K. S. Gupta , P. K. Srimani, Adaptive Core Selection and Migration Method for Multicast Routing in Mobile Ad Hoc Networks, IEEE Transactions on Parallel and Distributed Systems, v.14 n.1, p.27-38, January
Ahmad Rahmati , Lin Zhong, Context-for-wireless: context-sensitive energy-efficient wireless data transfer, Proceedings of the 5th international conference on Mobile systems, applications and services, June 11-13, 2007, San Juan, Puerto Rico | seamless mobility;adaptive computing |
609391 | On Multirate DS-CDMA Schemes with Interference Cancellation. | This paper investigates interference cancellation (IC) in direct-sequence code-division multiple access (DS-CDMA) systems that support multiple data rates. Two methods for implementing multiple data rates are considered. One is the use of mixed modulation and the other is the use of multicodes. We introduce and analyze a new approach that combines these multiple data rate systems with IC. The cancellation in the receiver is performed successively on each user, starting with the user received with the highest power. This procedure can in turn be iterated, forming a multistage scheme, with the number of iterations set as a design parameter. Our analysis employs a Gaussian approximation for the distribution of the interference, and it includes both the AWGN and the flat Rayleigh fading channel. The systems are also evaluated via computer simulations. Our analysis and simulations indicate that the IC schemes used in mixed modulation or multicode systems yield a performance close to the single BPSK user bound and, consequently, give a prospect of a considerable improvement in performance compared to systems employing matched filter detectors. | Introduction
An important feature of future mobile communication systems is the ability to handle other
services besides speech, e.g., fax, Hi-Fi audio and transmission of images, services that are not
readily available today. To achieve this, it is essential to have a flexible multiple access method
that maintains both high system capacity and the ability to handle variable data rates. Direct-sequence
code-division multiple access (DS-CDMA) is believed to be a multiple access method
able to fulfill these requirements [1].
There are two main factors that limit the capacity of a multiuser DS-CDMA system and
make the signal detection more difficult. They are signal interference between users, referred to
as multiple access interference (MAI), and possibly large variations in the power of the received
signals from different users, which is known as the near-far effect. The power variations that
causes the near-far effect is due to the difference in distance between the mobile terminals and
the base station as well as fading and shadowing. One way to counteract the near-far effect
is to use stringent power control [2]. Another approach would be to use more sophisticated
receivers which are near-far resistant [3]. Because of the MAI's contribution to the near-far
problem and its limiting of the total system capacity, much attention has been given to the
subject of multiuser detectors that have the prospect of both mitigating the near-far problem
and cancelling the MAI. Research in this area was initiated by Verdú [4, 5], who related the
multiple access channel to a periodically time varying, single-user, intersymbol interference (ISI)
channel and who derived the optimal multiuser detector. Unfortunately, the complexity of this
detector increases exponentially with the number of users. This has motivated further research
in the area of suboptimal detectors with lower complexity [6-21].
The objective of our work is to propose and evaluate an efficient detector for a multiuser
and multirate DS-CDMA system. The proposed multiuser detectors in [6-18] are all designed
for single rate systems. In [22], however, a dual rate scheme based on multi-processing gains is
considered for the decorrelating detector. In this paper we consider two other multirate schemes
together with a single- and multistage non-decision directed interference canceller (NDDIC)
[19-21]. The IC 1 schemes are generalizations and extensions of the single-stage SIC scheme for
BPSK derived by Patel and Holtzman in [14, 15]. The operation of the IC scheme is as follows.
The receiver is composed of a bank of filters matched to the I and Q spreading sequences of each
user. Initially the users are ranked in decreasing order of their received signal power. Then the
output of the matched filter of the strongest user is used to estimate that user's baseband signal,
which is subsequently cancelled from the composite signal. In other words, the projection of
the received signal in the direction of the spreading sequence of the strongest user is subtracted
from the composite signal. This is how we attempt to cancel the interference that affects the
remaining users. Since we consider the uplink, that is, communication from the mobile terminal
to the base station, we are interested in detection of all received signals and, thus, we continue by
cancelling the second strongest user successively followed by all the other users. This scheme may
be extended to an iterative multistage IC scheme by repetition of the IC one or more times. In
the multistage IC scheme the estimated signal from the previous stage is added to the resulting
composite signal and the output of the matched filter is used to obtain a new estimate of the
signal, which in turn is cancelled. Hence, in this manner the interference can be further reduced
and the signal estimates improved.
The IC scheme is generalized to apply to the two multiple data rate schemes, mixed modulation
and multicodes. Mixed modulation refers to the use of different modulation formats to
change the information rate. That is, given a specific symbol rate, each user chooses a modulation
format, for example, BPSK, QPSK or any M-ary QAM format, depending on the required
data rate [23]. Multicodes is the second approach for implementing multiple data rates. It allows
the user to transmit over one or several parallel channels according to the requirements [23].
Hence, the user transmits the information synchronously employing several signature sequences.
This approach can, of course, also be used in combination with different modulation formats.
For our analysis and simulations we consider coherent demodulation, known time delays and
two types of channels: a stationary AWGN channel and a channel with frequency-nonselective
Rayleigh fading. Perfect power ranking is assumed in the performance analysis and in most of
the simulations, which implies knowledge of the channel gain for each signal. This knowledge is,
however, not used in the IC scheme itself.
The paper is organized as follows. In Section 2 we present the system model and the decoder
structure for rectangular M-ary QAM. A single- and multistage IC are then presented
for this model in Section 3. The performance of a single-rate system with IC in AWGN and
in flat Rayleigh fading is analysed in Section 4 and Section 5. Thereafter, the performance of
mixed modulation systems with IC is analysed in Section 6 and the corresponding analysis for
multicodes is given in Section 7. Numerical results are presented in Section 8 and in Section 9
we discuss performance improvements for high-rate users in mixed modulation systems. Finally,
the conclusions and future considerations are discussed in Section 10.
2 System Model and Decoder Structure
We consider a model for a system with square lattice QAM, where the received signal for K
users is modelled as
    r(t) = ∑_{k=1}^{K} α_k { √2 d^I_k(t - τ_k) c^I_k(t - τ_k) cos(ω_c t + φ_k) - √2 d^Q_k(t - τ_k) c^Q_k(t - τ_k) sin(ω_c t + φ_k) } + n(t),   (1)
which is the sum of all transmitted signals embedded in AWGN. d^{I/Q}_k(t) is a sequence of rectangular
pulses of duration T with amplitude A^{I/Q}_{k,l}, where I/Q denotes the in-phase (I) or quadrature (Q)
branch. T is the inverse of the symbol rate, which is assumed to be equal for all users.
The amplitudes of the quadrature carriers for the k-th user's l-th symbol element, A^I_{k,l} and A^Q_{k,l},
together generate M equiprobable and independent symbols. They take discrete values from a set of
√M equally spaced amplitude levels, since √M amplitude levels are required for the I and Q
components to form a signal constellation for M-ary QAM. The energy of the signal with the lowest
amplitude is then 2E_0. The k-th user's signature sequence that is used for spreading the signal in the
I or Q branch is denoted c^{I/Q}_k(t). It consists of a sequence of antipodal, unit-amplitude, rectangular
pulses of duration T_c. The period of all the users' signature sequences is N = T/T_c; hence there is
one period per data symbol (in this paper N is also referred to as the processing gain). τ_k is the time
delay and φ_k is the phase of the k-th user. These are, in the asynchronous case, i.i.d. uniform random
variables in [0, T) and [0, 2π). Both parameters are assumed to be known in the analysis and in the
simulations with known channel parameters. However, if complex spreading and despreading is used,
φ_k is only needed for the coherent detection and not for the NDDIC scheme [18]. Furthermore, ω_c
represents the common centre frequency, α_k represents the channel gain, which could be constant or
Rayleigh distributed, and n(t) is the AWGN with two-sided power spectral density N_0/2.
Figure 1 shows the structure of the k-th user's receiver when detecting the l-th symbol. The
receiver is the standard coherent matched filter detector for M-ary QAM, from which we obtain
two decision variables, S^I_{k,l} and S^Q_{k,l}, which are the sufficient statistics for the I and Q
components. The low-pass filter removes the double-frequency components, and for the I branch we get

    x^I(t) = ∑_{k=1}^{K} (α_k/√2) [ d^I_k(t - τ_k) c^I_k(t - τ_k) cos φ_k - d^Q_k(t - τ_k) c^Q_k(t - τ_k) sin φ_k ] + n^I(t)   (2)
           = ∑_{k=1}^{K} ∑_l s^I_{k,l}(t) + n^I(t),   (3)

where s^I_{k,l}(t) is the baseband signal for the l-th symbol of the k-th user. A similar expression can
be derived for the Q branch, and n^I(t) is the baseband equivalent of n(t).
Figure 1: M-ary QAM receiver for DS-CDMA systems.
The I branch as well as the Q branch is correlated with both the I and Q signature sequences
of the k-th user to form four different correlator outputs, which are the outputs at integer multiples
of T. These outputs contain all information about the amplitudes and they are used to form
the decision variables, S^I_{k,l} and S^Q_{k,l}. Let us consider detection of the first user's zeroth symbol.
Then Z^{II}_{1,0}, from the first correlator, is determined as

    Z^{II}_{1,0} = (α_1/√2) A^I_1 T cos φ_1 + ∑_{k=2}^{K} I^{II}_{k,1} + n^{II}_1,   (4)

where A^I_1 ≡ A^I_{1,0}, A^Q_1 ≡ A^Q_{1,0}, and the noise component is given by

    n^{II}_1 = ∫_{τ_1}^{τ_1+T} n^I(t) c^I_1(t - τ_1) dt.   (5)

The sum of I^{II}_{k,1} terms in (4) represents the interference due to the remaining users, and
each term can be expressed as

    I^{II}_{k,1} = (α_k/√2) ∫_{τ_1}^{τ_1+T} [ A^I_k cos φ_k c^I_k(t - τ_k) - A^Q_k sin φ_k c^Q_k(t - τ_k) ] p_T(t - τ_k) c^I_1(t - τ_1) dt,   (6)

where p_T(t) is a unit-amplitude, rectangular pulse of length T and τ_{k,1} = τ_1 - τ_k. The delay
τ_k is assumed, without loss of generality, to be shorter than τ_1. This is discussed further in the
next section. All the other Z_{1,0} terms are derived in the same manner as above and we get the
decision variables

    S^I_1 = Z^{II}_{1,0} cos φ_1 + Z^{QI}_{1,0} sin φ_1 = (α_1/√2) A^I_1 T + N^I_1,
    S^Q_1 = Z^{QQ}_{1,0} cos φ_1 - Z^{IQ}_{1,0} sin φ_1 = (α_1/√2) A^Q_1 T + N^Q_1,   (7)

where N^{I/Q}_1 is the noise term of the decision variable, including both Gaussian noise and noise
caused by multiuser interference. The noise term actually depends on the symbol, but for convenience,
and for reasons explained later, we write N^{I/Q}_1 instead of N^{I/Q}_{1,0}. It can be shown with the
help of trigonometric identities that the noise terms are given by
    N^I_1 = ∑_{k=2}^{K} I^{II*}_{k,1} + n^{II}_1 cos φ_1 + n^{QI}_1 sin φ_1,
    N^Q_1 = ∑_{k=2}^{K} I^{QQ*}_{k,1} + n^{QQ}_1 cos φ_1 - n^{IQ}_1 sin φ_1,   (8)

where I^{II*}_{k,1} is the function given in (6), with φ_k replaced by φ_{k,1} = φ_k - φ_1. I^{QQ*}_{k,1} is
given by a similar expression with the appropriate changes in indices. The four noise components
n^{II}_1, n^{IQ}_1, n^{QI}_1 and n^{QQ}_1 are assumed to be uncorrelated Gaussian random variables.
The two pairs of noise components, n^{II}_1 and n^{IQ}_1, and n^{QI}_1 and n^{QQ}_1, are uncorrelated
only if the signature sequences c^I_k and c^Q_k are orthogonal, which is a mild restriction. We assume,
however, that the correlation is zero also for non-orthogonal signature sequences with large processing
gain, to enable analytical evaluation of the system performance.
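The following sketch illustrates how the four correlator outputs could be combined into the two decision variables, using the combining of the reconstructed equation (7). The chip-rate data layout, scaling and combining signs are assumptions of this illustration rather than a quotation of the original receiver.

    import numpy as np

    def decision_variables(x_i, x_q, c_i, c_q, phi):
        """Form the I/Q decision variables for one symbol of one user.
        x_i, x_q: chip-rate samples of the low-pass I and Q branches over this
        user's symbol window (already aligned to its delay); c_i, c_q: its +/-1
        spreading sequences; phi: its carrier phase."""
        z_ii = np.dot(x_i, c_i)   # I branch despread with the I code
        z_iq = np.dot(x_i, c_q)   # I branch despread with the Q code
        z_qi = np.dot(x_q, c_i)   # Q branch despread with the I code
        z_qq = np.dot(x_q, c_q)   # Q branch despread with the Q code
        s_i = z_ii * np.cos(phi) + z_qi * np.sin(phi)
        s_q = z_qq * np.cos(phi) - z_iq * np.sin(phi)
        return s_i, s_q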
3 Non-Decision Directed Interference Cancellation
3.1 Single-Stage Interference Cancellation
The receiver for M-ary QAM is composed of a bank of filters matched to the I and Q signature
sequences of each user according to Figure 1. From the filter outputs we obtain the decision
variables, which are used both for determining which of the users is the strongest and in the
cancellation of that user's signal. The users are then decoded and cancelled in decreasing order
of their power. The detector is a coherent demodulator and we assume decision boundaries
according to minimum Euclidean distance. A block diagram of a receiver for M-ary QAM with
IC is shown in Figure 2. Without loss of generality, we assume that α_1 > α_2 > ... > α_K.
Hence, if the users have the same average transmitted power, the first user is the strongest. The
strongest user is cancelled first, since this user is likely to cause most interference and also the
one less affected by the interference from the other users. The decision variables of the strongest
user are used to estimate its baseband signal, which is subsequently cancelled from the composite
signal. In other words, the projection of the received signal in the direction of the spreading
sequence, is subtracted from the composite signal. The scheme continues with cancellation of
the second strongest user and thereafter all the users in order of their received power.
Figure 2: M-ary QAM receiver with interference cancellation.
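A minimal sketch of one pass of the single-stage scheme is given below. It assumes chip-rate sampled composite I/Q signals, one symbol per user, and the regeneration rule implied by the reconstructed equations above, so it should be read as an illustration of the cancellation order and the respread-and-subtract step, not as the exact receiver.

    import numpy as np

    def successive_ic(r_i, r_q, users):
        """Single-stage NDDIC sketch. r_i, r_q: float arrays with the composite
        baseband I/Q chips. Each entry of `users` is a dict with keys
        'c_i', 'c_q' (+/-1 codes), 'delay' (chips), 'phi' and 'power'."""
        order = sorted(range(len(users)), key=lambda k: users[k]["power"], reverse=True)
        decisions = {}
        for k in order:                                   # strongest user first
            u = users[k]
            d, n = u["delay"], len(u["c_i"])
            seg_i, seg_q = r_i[d:d + n], r_q[d:d + n]
            # matched filter outputs and decision variables (no hard decision)
            z_ii, z_iq = seg_i @ u["c_i"], seg_i @ u["c_q"]
            z_qi, z_qq = seg_q @ u["c_i"], seg_q @ u["c_q"]
            s_i = z_ii * np.cos(u["phi"]) + z_qi * np.sin(u["phi"])
            s_q = z_qq * np.cos(u["phi"]) - z_iq * np.sin(u["phi"])
            decisions[k] = (s_i, s_q)
            # regenerate the user's estimated baseband contribution and cancel it
            est_i = (s_i * np.cos(u["phi"]) * u["c_i"] - s_q * np.sin(u["phi"]) * u["c_q"]) / n
            est_q = (s_i * np.sin(u["phi"]) * u["c_i"] + s_q * np.cos(u["phi"]) * u["c_q"]) / n
            r_i[d:d + n] -= est_i
            r_q[d:d + n] -= est_q
        return decisions, r_i, r_q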
For the sake of notational simplicity we assume that the time delays are ordered such
that all the interfering symbols of the stronger users have been cancelled before the considered
symbol is decoded. That is, all the symbols prior to the zeroth have already been decoded
and cancelled and we can consider the detection of the zeroth symbol for all users. This is not a
restriction, since it does not affect the results of the analysis and it is not used in the simulations.
It only simplifies the expressions, thus we avoid considering different symbols for different users.
We use the decision variables in (7) to estimate the baseband signal of the first user's zeroth
symbol and subsequently cancel it from the composite signal. The cancellations are however not
perfect: besides the desired signal, the filter output also contains Gaussian noise and noise caused
by MAI, and, for each cancellation, noise is projected in the directions of the other users in the
system. Nevertheless, the scheme has the advantage of being simple. We proceed by cancelling
the users successively; after h - 1 cancellations, the decision variable for the h-th user is given by

    S^I_h = (α_h/√2) A^I_h T + N^I_h.   (9)
After cancellation of the h th user, the resulting baseband signal of the I branch is expressed as
    δ^I_{h,0}(t) = δ^I_{h-1,0}(t) - (1/T) [ S^I_h c^I_h(t - τ_h) cos φ_h - S^Q_h c^Q_h(t - τ_h) sin φ_h ] p_T(t - τ_h),   (10)

where there are h cancelled and K - h remaining zeroth symbols. When h is equal to 1 the term
δ^I_{0,0}(t) corresponds to the remaining baseband signal after cancellation of all symbols prior to
the zeroth, and consequently, we get δ^I_{1,0}(t) after cancelling the first user's zeroth symbol. We
rewrite the expression in the following way

    δ^I_{h,0}(t) = ∑_{k=1}^{K} ∑_{l≥0} s^I_{k,l}(t) + Ψ^I(t) + n^I(t)
                 - ∑_{k=1}^{h} (α_k/√2) [ A^I_k c^I_k(t - τ_k) cos φ_k - A^Q_k c^Q_k(t - τ_k) sin φ_k ] p_T(t - τ_k)
                 - ∑_{k=1}^{h} (1/T) [ N^I_k c^I_k(t - τ_k) cos φ_k - N^Q_k c^Q_k(t - τ_k) sin φ_k ] p_T(t - τ_k),   (11)

where the first sum is the remaining baseband signal after cancellation of all symbols prior to
the zeroth. The second term, Ψ^I(t), is the additional noise caused by imperfect cancellation of these
symbols, and it is defined as

    Ψ^I(t) = - ∑_{k=1}^{K} ∑_{l<0} (1/T) [ N^I_{k,l} c^I_k(t - τ_k - lT) cos φ_k - N^Q_{k,l} c^Q_k(t - τ_k - lT) sin φ_k ] p_T(t - τ_k - lT),   (12)

where we have given the noise term N^{I/Q}_{k,l} an additional subscript, l, specifying the symbol. This
index indicates that the noise term does vary over time. The third term in (11) is the in-phase
Gaussian noise, and the subsequent sum contains the cancelled baseband signals corresponding to the
zeroth symbol of the h strongest users in the system. Finally, we have the additional noise
components caused by imperfect cancellation of these h users' zeroth symbols. The omission of
a second subscript in the noise term in the last line of (11) is explained below.
The total noise component in (9) for the h-th user in the I branch is

    N^I_h = ∑_{k=h+1}^{K} I^{II*}_{k,h} + n^{II}_h cos φ_h + n^{QI}_h sin φ_h - (1/T) ∑_{j=1}^{h-1} [ N^I_j J^{II}_{j,h} + N^Q_j J^{QI}_{j,h} ],   (13)

where the first sum consists of noise caused by the remaining interfering users, the second term
is Gaussian noise and the last sum is the resulting noise caused by imperfect cancellations. N^Q_h
is given by a similar expression. The correlation terms, J^{II}_{j,h} and J^{QI}_{j,h}, are given by

    J^{II}_{j,h} = cos φ_{j,h} ∫_{τ_h}^{τ_h+T} c^I_j(t - τ_j) c^I_h(t - τ_h) dt,    J^{QI}_{j,h} = -sin φ_{j,h} ∫_{τ_h}^{τ_h+T} c^Q_j(t - τ_j) c^I_h(t - τ_h) dt,   (14)

where the correlation is over the noise caused by imperfect cancellation of the symbols -1 and
0 of the j-th user, since we assume τ_j > τ_h for j < h. This is illustrated in Figure 3, where
shaded lines indicate cancelled symbols. However, since we consider a slowly fading channel,
which implies that the channel changes slowly and the interference power can be regarded as
equal for two subsequent symbols, we do not distinguish between, e.g., the noise terms N^I_{j,-1}
and N^I_{j,0}. This is the reason for simply using N^I_j and N^Q_j in (11) and (13).
Figure 3: Cross-correlation between users in an asynchronous system.
3.2 Multistage Interference Cancellation
The derived single-stage scheme may be iterated to form a multistage scheme. The motivation
for this is that the users received with high power have the advantage of being strong but they are
still exposed to interference from the weaker users. Hence, if better estimates of the strong users'
signals can be achieved the estimates and the cancellations of the weak users' signals would be
improved. Therefore, iterating the IC scheme can improve the performance of the whole system.
We still have to keep in mind that in our simple IC scheme we make non-decision directed or
'soft' cancellations using the matched filter outputs. The effect is that the Gaussian noise is not
removed through hard decisions and for each cancellation a small amount of noise is projected
in the directions of the other users in the system. Hence, the performance of a system can be
improved through cancellation of the MAI by employing a limited number of IC stages, but after
the optimum number of stages the performance will degrade. However, simulated results in [18]
and analytical results in [24] show that the multistage NDDIC in a synchronous system performs
better than the decorrelator [6] after a limited number of stages and that they are asymptotically
equivalent.
To simplify the notations when describing the multistage IC, we drop the subscript for the
symbol and replace it with a subscript that represents the stage. That is, the first subscript of
a variable defines the user and the second subscript defines the stage and the assumption of
detection of symbol zero is implicit. To describe the multistage IC scheme we use an interference
cancellation unit (ICU), which is illustrated in Figure 4 using a simplified block diagram. First
we add the estimated baseband signal from the previous stage (denoted s_{k,i-1} in Figure 4) to the
resulting composite signal. Then we use the output of the matched filter to obtain a new estimate
of the signal, which in turn is cancelled. The variable r_{k,i} denotes the composite baseband signal
after cancellation of user k at stage i, and c^{I/Q}_k denotes the I and Q signature sequences, which are used
to regenerate the estimated baseband signal s_{k,i} of the k-th user. The scheme is repeated for all
the users in the system for the desired number of stages. This is shown in Figure 5, where each
block, IC k (i) , is the k th user's ICU at the i th stage.
Figure 4: Linear non-decision directed interference cancellation unit.
The corresponding expression to (10) for the resulting baseband signal in multistage IC is
determined as

    δ^I_{h,i}(t) = δ^I_{h-1,i}(t) + (1/T) [ S^I_{h,i-1} a^I_h + S^Q_{h,i-1} a^Q_h ] - (1/T) [ S^I_{h,i} a^I_h + S^Q_{h,i} a^Q_h ],   (15)

where a^I_h = c^I_h(t - τ_h) cos φ_h p_T(t - τ_h). a^Q_h is given by a similar expression using the sine function.
The decision variable for the h-th user at the i-th stage is given by

    S^I_{h,i} = (α_h/√2) A^I_h T + N^I_{h,i},   (16)
Figure 5: Multistage successive interference cancellation.
where N^{I/Q}_{h,i} contains only Gaussian noise and noise caused by imperfect cancellations. For the
I branch the noise term is given by

    N^I_{h,i} = n^{II}_h cos φ_h + n^{QI}_h sin φ_h - (1/T) ∑_{j=1}^{h-1} [ N^I_{j,i} J^{II}_{j,h} + N^Q_{j,i} J^{QI}_{j,h} ] - (1/T) ∑_{j=h+1}^{K} [ N^I_{j,i-1} J^{II}_{j,h} + N^Q_{j,i-1} J^{QI}_{j,h} ],   (17)

where the first term is Gaussian noise, the first sum is the noise caused by imperfect cancellation
at the i-th stage and the second sum is the noise caused by imperfect cancellation at the (i - 1)-th
stage. J^{II}_{j,h} and J^{QI}_{j,h} are defined in (14) and N^Q_{h,i} is given by an expression similar to (17).
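The multistage extension can be sketched as follows, under the same illustrative data layout as the single-stage sketch. Each iteration mirrors the ICU of Figure 4 (add back the previous estimate, re-estimate from the matched filter, cancel the new estimate); scaling and edge handling are assumptions.

    import numpy as np

    def multistage_ic(r_i, r_q, users, n_stages):
        """Multistage NDDIC sketch; `users` entries as in the single-stage sketch."""
        order = sorted(range(len(users)), key=lambda k: users[k]["power"], reverse=True)
        prev_i = {k: 0.0 for k in order}       # previously cancelled contributions
        prev_q = {k: 0.0 for k in order}
        decisions = {}
        for stage in range(n_stages):
            for k in order:
                u = users[k]
                d, n = u["delay"], len(u["c_i"])
                r_i[d:d + n] += prev_i[k]      # return the old estimate to the composite
                r_q[d:d + n] += prev_q[k]
                seg_i, seg_q = r_i[d:d + n], r_q[d:d + n]
                z_ii, z_iq = seg_i @ u["c_i"], seg_i @ u["c_q"]
                z_qi, z_qq = seg_q @ u["c_i"], seg_q @ u["c_q"]
                s_i = z_ii * np.cos(u["phi"]) + z_qi * np.sin(u["phi"])
                s_q = z_qq * np.cos(u["phi"]) - z_iq * np.sin(u["phi"])
                decisions[k] = (s_i, s_q)
                est_i = (s_i * np.cos(u["phi"]) * u["c_i"] - s_q * np.sin(u["phi"]) * u["c_q"]) / n
                est_q = (s_i * np.sin(u["phi"]) * u["c_i"] + s_q * np.cos(u["phi"]) * u["c_q"]) / n
                r_i[d:d + n] -= est_i          # cancel the refreshed estimate
                r_q[d:d + n] -= est_q
                prev_i[k], prev_q[k] = est_i, est_q
        return decisions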
3.3 Ranking of the Users
In this paper we do not consider algorithms for ranking the users. We assume perfect ranking
in the analysis and in most of the simulations. In simulations where we estimate the channel,
power ranking is performed before the IC using pilot symbols for initial channel estimates. In
mixed modulation systems, the QAM users are scaled with their average power giving a ranking
according to the channel gain. Discussions about ranking are found in [15, 19].
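A possible ranking step consistent with this description is sketched below. The normalisation by bits per symbol is an illustrative way of ranking mixed modulation users by channel gain rather than received power; it is an assumption of this sketch, not the paper's procedure.

    import numpy as np

    def rank_users(received_power_estimates, bits_per_symbol):
        """Return user indices in cancellation order (strongest channel gain first).
        Power estimates would come from pilot-based initial channel estimates;
        dividing by bits per symbol (1 BPSK, 2 QPSK, 4 for 16-QAM) removes the
        average-power scaling of the QAM users at equal Eb."""
        p = np.asarray(received_power_estimates, dtype=float)
        scale = np.asarray(bits_per_symbol, dtype=float)
        return list(np.argsort(-(p / scale)))

    # Example: a BPSK, a 16-QAM and a QPSK user
    # rank_users([1.2, 3.9, 0.7], [1, 4, 2])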
4 Performance Analysis of Systems in AWGN
In this section we analyse the performance of a single-rate system in an AWGN environment.
To analyse the scheme, we let the noise components caused by MAI be modelled as independent
Gaussian noise [25, 26]. We have chosen to use a Gaussian approximation partly since it is
commonly used and partly because it yields a practical way to evaluate the performance of
an asynchronous system. When a Gaussian approximation is used, an increase in noise and
interference variance immediately leads to an increase in error probability, which is likely to occur
also for the true distribution. Absolute performance, however, is likely to be too optimistic [26].
In this section we consider an AWGN channel where all the users are received with equal power,
which corresponds to perfect power control. This will make the ranking of the users completely
random and the order of cancellation will change continuously. Thus, the average probability of
symbol error for each user is obtained taking the average of the symbol error rates (SER) for all
the users.
4.1 Single-Stage Interference Cancellation
First we calculate the variance of the decision variable of the I branch conditioned on α, i.e.,

    η^I_h = Var[ N^I_h | α ],   (18)

where N^I_h is defined in (13) and α includes all α_k. α_k is constant for stationary AWGN channels,
but we write the variance conditioned on α to enable the use of (18) in the analysis for Rayleigh
fading channels. With the assumption that all the Gaussian noise terms are uncorrelated (which
is true if c^I_k and c^Q_k are orthogonal), it can be shown that all the random variables in N^{I/Q}_h are
independent and have zero mean. Consequently, we model N^{I/Q}_h as an independent Gaussian
random variable with zero mean and variance η^{I/Q}_h. Rewriting (18) we get
    η^I_h = Var[ n^{II}_h cos φ_h + n^{QI}_h sin φ_h | α ] + ∑_{k=h+1}^{K} Var[ I^{II*}_{k,h} | α ] + (1/T^2) ∑_{j=1}^{h-1} Var[ N^I_j J^{II}_{j,h} + N^Q_j J^{QI}_{j,h} | α ],   (19)

where the first term is the variance of the Gaussian noise, the first sum results from the MAI
and the last sum is due to imperfect cancellations. For deterministic signature sequences and
rectangular chip pulses, the variance in (19) is

    η^I_h = N_0 T/4 + (E_0 T P_av/(6N^3)) ∑_{k=h+1}^{K} α_k^2 ( r^{II}_{k,h} + r^{QI}_{k,h} ) + (1/(6N^3)) ∑_{j=1}^{h-1} ( η^I_j r^{II}_{j,h} + η^Q_j r^{QI}_{j,h} ),   (20)

where P_av = (M - 1)/3 is the normalized average transmitted power in each branch, and r^{II}_{k,h} is
the average interference [25] between the signature sequences in the I branches of users k and
h. We have also assumed that τ_{j,h} and φ_{j,h} are uniformly distributed over [0, T) and [0, 2π). For
random sequences the variance is

    η^I_h = N_0 T/4 + (E_0 T P_av/(3N)) ∑_{k=h+1}^{K} α_k^2 + (2/(3N)) ∑_{j=1}^{h-1} η_j,   (21)

where we have used η^I_j = η^Q_j ≡ η_j and that the average interference is 2N^2 [25].
It is straightforward to obtain the probability of error from the theory of single transmission
of QAM signals over an AWGN channel [27] when the distribution of the MAI is approximated as
Gaussian. We use the variance given in (20) or (21) and define a signal-to-noise ratio, ρ^I_h,
for the h-th user in the ideal coherent case as

    ρ^I_h = α_h √( E_0 T / η^I_h ).   (22)

The probability of error for transmission over the I branch is then [27]

    P^I_{e,h} = 2 ( 1 - 1/√M ) Q( ρ^I_h ),   (23)

where Q(·) denotes the complementary Gaussian error function. P^Q_{e,h} is obtained in a
similar manner, and together they give the SER

    P_{e,h} = P^I_{e,h} + P^Q_{e,h} - P^I_{e,h} P^Q_{e,h}.   (24)
Finally, the average probability of symbol error is obtained taking the average of all the users'
SERs.
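The recursion implied by (21)-(24) can be evaluated numerically as in the sketch below. Since the coefficients of (21) are themselves reconstructed, the numbers produced should be treated as illustrative.

    import numpy as np
    from scipy.stats import norm

    def awgn_average_ser(alpha, E0, T, N, N0, M):
        """Average SER of single-stage IC in AWGN with random sequences, using the
        Gaussian approximation.  `alpha` lists the channel gains in cancellation order."""
        alpha = np.asarray(alpha, dtype=float)
        p_av = (M - 1) / 3.0
        K = len(alpha)
        eta = np.zeros(K)
        ser = np.zeros(K)
        for h in range(K):
            mai = (E0 * T * p_av / (3 * N)) * np.sum(alpha[h + 1:] ** 2)
            imperfect = (2.0 / (3 * N)) * np.sum(eta[:h])
            eta[h] = N0 * T / 4 + mai + imperfect
            rho = alpha[h] * np.sqrt(E0 * T / eta[h])
            pe_branch = 2 * (1 - 1 / np.sqrt(M)) * norm.sf(rho)   # Q(x) = norm.sf(x)
            ser[h] = 2 * pe_branch - pe_branch ** 2               # combine I and Q branches
        return ser.mean()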
4.2 Multistage Interference Cancellation
The multistage scheme is analysed in the same manner as the single-stage scheme. The expression
for N^I_{h,i} in (17) is used in (18) to obtain the variance of the decision variable. The variance for
deterministic sequences is given by

    η^I_{h,i} = N_0 T/4 + (1/(6N^3)) ∑_{j=1}^{h-1} ( η^I_{j,i} r^{II}_{j,h} + η^Q_{j,i} r^{QI}_{j,h} ) + (1/(6N^3)) ∑_{j=h+1}^{K} ( η^I_{j,i-1} r^{II}_{j,h} + η^Q_{j,i-1} r^{QI}_{j,h} ),   (25)

while for random sequences we get

    η^I_{h,i} = N_0 T/4 + (2/(3N)) [ ∑_{j=1}^{h-1} η_{j,i} + ∑_{j=h+1}^{K} η_{j,i-1} ],   (26)

where η^I_{j,i} = η^Q_{j,i} ≡ η_{j,i} is used. The variance in (25) or (26) is used in (22) to obtain the signal-to-noise
ratio, ρ^I_{h,i}, which in turn is used together with (23) and (24) to calculate the average probability
of symbol error for the multistage scheme.
5 Performance Analysis of Systems in Flat Rayleigh Fading
In this section we analyse single-rate systems in flat Rayleigh fading. That is, systems where the
users' signals are received through independent, frequency-nonselective, slowly fading channels.
This model is suitable in areas with small delay spread and for mobiles with slow speed (small
Doppler frequency). These conditions also make estimation of φ_k and α_k feasible, which is needed
for coherent detection and to obtain decision boundaries for M-ary QAM [27].
The expressions for noise variances and error probabilities, that were derived in the previous
sections, are all conditioned on the channel gain. We will use them as in [14, 15] to derive the
error probabilities for flat Rayleigh fading channels.
5.1 Single-Stage Interference Cancellation
The users' amplitudes are assumed to be Rayleigh distributed with unit mean square value. That
is, the average power of the received signals at the base station is equal, assuming perfect power
control for shadowing and distance attenuation. To obtain the unconditional probability of error,
P^I_{e,h}, we average the conditional probability of error over the fading as follows

    P^I_{e,h} = ∫_0^∞ P^I_{e,h}(α_h = x) f_{α_h}(x) dx,   (27)

where f_{α_h}(x) is the pdf of the h-th ordered amplitude, which is obtained using order statistics [28]
and stated here for convenience, i.e.,

    f_{α_h}(x) = ( K! / [(h - 1)!(K - h)!] ) (e^{-x^2})^{h-1} (1 - e^{-x^2})^{K-h} 2x e^{-x^2},  x ≥ 0.   (28)
We define a conditional signal-to-noise ratio, ρ^I_h, for the h-th user in the I branch according to
(22). The only difference is that η^I_h is replaced by E_α[η^I_h], which is the expected value of the
conditional variance with respect to α. The expected value is taken with respect to all α_k.
When using a Gaussian approximation we calculate the second moment of the MAI and add it
to the variance of the Gaussian noise. The expected value is therefore determined as

    E_α[η^I_h] = N_0 T/4 + (E_0 T P_av/(3N)) ∑_{k=h+1}^{K} E_α[α_k^2] + (2/(3N)) ∑_{j=1}^{h-1} E_α[η_j],   (29)

where E_α[α_k^2] is the mean square value of the ordered amplitude, α_k, given by

    E_α[α_k^2] = ∫_0^∞ x^2 f_{α_k}(x) dx.   (30)
It should be noted that this integral, as well as the integral in (27), is calculated numerically
in the analysis. Taking the average of the different users' SERs, yields the average probability
of symbol error. This is the proper measure of performance, since the order of cancellation will
change with the fading and the average of all users will be the same as the time average for each
user.
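The order-statistics pdf and the numerical averaging in (27)-(30) can be evaluated as sketched below. The closed form used for the ordered Rayleigh pdf follows the standard order-statistics expression assumed in the reconstruction of (28), and the SNR expression reuses the reconstructed (22).

    import numpy as np
    from math import comb
    from scipy.stats import norm
    from scipy.integrate import quad

    def ordered_rayleigh_pdf(x, h, K):
        """pdf of the h-th largest of K i.i.d. unit-mean-square Rayleigh amplitudes."""
        F = 1.0 - np.exp(-x * x)
        f = 2.0 * x * np.exp(-x * x)
        return K * comb(K - 1, h - 1) * (1.0 - F) ** (h - 1) * F ** (K - h) * f

    def unconditional_branch_error(h, K, expected_eta, E0, T, M):
        """Average the conditional branch error probability (23) over the fading of
        the h-th ordered amplitude, as in (27); expected_eta stands for E_alpha[eta_h]."""
        def integrand(x):
            rho = x * np.sqrt(E0 * T / expected_eta)
            return 2 * (1 - 1 / np.sqrt(M)) * norm.sf(rho) * ordered_rayleigh_pdf(x, h, K)
        value, _ = quad(integrand, 0.0, np.inf)
        return value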
5.2 Multistage Interference Cancellation
The multistage scheme is analysed in the manner described above using order statistics. The
expected value, with respect to α, of the variance of the h-th user's decision variable at the i-th
stage is given by

    E_α[η^I_{h,i}] = N_0 T/4 + (2/(3N)) [ ∑_{j=1}^{h-1} E_α[η_{j,i}] + ∑_{j=h+1}^{K} E_α[η_{j,i-1}] ],   (31)

when using random spreading sequences. The variance in (31) is used in (22) to form ρ^I_{h,i}, which
in turn is used in Eqns. (23), (24) and (27) to calculate the unconditional SER.
6 Mixed Modulation Systems with IC
Mixed modulation is one possible scheme that can be used when handling multirate systems [23].
In this scheme, the information rate is determined by the modulation format, which can be BPSK,
QPSK or any M-ary QAM format. Accordingly, if a user transmits with a specific data rate using
BPSK modulation, the user would change to QPSK modulation when a twice as high information
rate was required. In the following paragraphs we evaluate the performance of a system where
the users employ different modulation formats, in this case, a combination of BPSK, QPSK and
16-QAM.
6.1 Mixed Modulation Systems with Single-Stage NDDIC
We consider a system where we have K 1 BPSK, K 2 QPSK and K 3 16-QAM users. To compare
different forms of modulation, we let the transmitted bit energy, E b , be equal for all users
independent of modulation format. Rewriting the energy E_0 as a function of E_b yields

    E_0 = ( 3 log_2 M / (2(M - 1)) ) E_b,   (32)

which is valid for M-ary QAM. For BPSK modulation, E_0 = E_b. The expression for the variance
of the decision variable for a BPSK user is given in [15], and it is reproduced here for convenience
together with the expression for a M-ary QAM user, i.e.,
    η_h = N_0 T/4 + (E_b T/(3N)) ∑_{k=h+1}^{K} α_k^2 + (2/(3N)) ∑_{j=1}^{h-1} η_j   (BPSK),
    η^I_h = N_0 T/4 + (E_0 T P_av/(3N)) ∑_{k=h+1}^{K} α_k^2 + (2/(3N)) ∑_{j=1}^{h-1} η_j   (M-ary QAM).   (33)
If we define M_2 = 4 (QPSK) and M_3 = 16 (16-QAM), we can express the signal-to-noise value,
conditioned on α, for the h-th QPSK user as

    ρ^I_h = α_h √( E_0 T / η^I_h ),   (34)

where η^I_h is the variance of the decision variable of the h-th QPSK user in the mixed modulation
system, and k_1, k_2 and k_3 are the numbers of cancelled BPSK, QPSK and 16-QAM users, respectively.
Rewriting (34), using (33) to derive η^I_h, we get

    (ρ^I_h)^{-2} = N_0/(4 α_h^2 E_b) + (1/(3N α_h^2)) ∑_{k=k_2+1}^{K_2} α_k^2 + (2/(3N α_h^2 E_b T)) ∑_{j=1}^{k_2} η_j
                 + (2/(3N α_h^2)) ∑_{k=k_3+1}^{K_3} α_k^2 + (2/(3N α_h^2 E_b T)) ∑_{j=1}^{k_3} η_j
                 + (1/(6N α_h^2)) ∑_{k=k_1+1}^{K_1} α_k^2 + (2/(3N α_h^2 E_b T)) ∑_{j=1}^{k_1} η_j,   (35)
where the noise variance caused by interference from QPSK users is found on the first line, from
16-QAM users on the second line and from BPSK users on the last line. For BPSK and 16-QAM
users we get similar expressions.
6.2 Mixed Modulation Systems with Multistage NDDIC
The variance of the noise in mixed modulation systems with multistage IC is derived in a similar
manner as for the single-stage case. The signal-to-noise ratio, ρ^I_{h,i}, for the h-th QPSK user is
formed using (34) together with the expressions in (33), modified for multistage IC. We then obtain

    (ρ^I_{h,i})^{-2} = N_0/(4 α_h^2 E_b) + (2/(3N α_h^2 E_b T)) [ ∑_{j=1}^{k_2} η_{j,i} + ∑_{j=k_2+1}^{K_2} η_{j,i-1} ]
                     + (2/(3N α_h^2 E_b T)) [ ∑_{j=1}^{k_3} η_{j,i} + ∑_{j=k_3+1}^{K_3} η_{j,i-1} ]
                     + (2/(3N α_h^2 E_b T)) [ ∑_{j=1}^{k_1} η_{j,i} + ∑_{j=k_1+1}^{K_1} η_{j,i-1} ],   (36)
where the first line results from imperfect cancellation of QPSK users, the second line from 16-
QAM users and the last line from BPSK users. The signal-to-noise ratios for BPSK and 16-QAM
users are given by similar expressions.
6.3 Performance Analysis for Stationary Channels
To evaluate the performance of a mixed modulation system, we first calculate the bit error rate
(BER) for each user before the BER for the whole system can be derived. For BPSK users the
BER is determined by

    P_{b,h} = Q( ρ_h ).   (37)
To compare the results obtained for QAM users with BPSK users we assume a Gray encoded
version of M-ary QAM. The log 2 M-bit Gray codes differ only in one bit position for neighbouring
symbols, and when the probability of symbol error is sufficiently small, the probability of
mistaking a symbol for the adjacent one vertically or horizontally is much greater than any other
possible symbol error. The SER is easily derived using (23) in (24) and then the BER is obtained
through

    P_{b,h} ≈ P_{e,h} / log_2 M.   (38)
To get the BER for the whole system, each user's BER is weighted together as

    P_b = ( R_1 ∑_{BPSK} P_{b,k} + R_2 ∑_{QPSK} P_{b,k} + R_3 ∑_{16-QAM} P_{b,k} ) / ( K_1 R_1 + K_2 R_2 + K_3 R_3 ),   (39)

where R_1, R_2 and R_3 are the data rates for the BPSK, QPSK and 16-QAM users, respectively,
and the P_b terms are the individual BERs.
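A small helper corresponding to the rate-weighted combination reconstructed in (39) might look as follows; the exact weighting in the original may differ, so this is an illustration of the structure only.

    def weighted_ber(ber, rate):
        """System BER as the rate-weighted average of the users' individual BERs.
        `ber` and `rate` are aligned lists, one entry per user (rate in bits per
        symbol or bits per second)."""
        return sum(b * r for b, r in zip(ber, rate)) / sum(rate)

    # Example with 2 BPSK, 1 QPSK and 1 16-QAM user:
    # weighted_ber([1e-3, 1.2e-3, 2e-3, 8e-3], [1, 1, 2, 4])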
6.4 Performance Analysis for Rayleigh Fading Channels
In mixed modulation systems, the average transmitted energy per bit is equal for all users
independent of modulation format. The average power of the QAM users is therefore log 2 M
times higher than for the BPSK users. For the mixed modulation scheme, we have ordered
the users according to the channel gain and not the received power. This will improve the
performance of the high-rate users, which are more sensitive to noise, and, consequently, it
improves the performance of the whole system. However, the improvement is mainly noticed for
single-stage IC, since the effect of ranking is less important when the interference in the system
is due only to imperfect cancellations.
The expected values, with respect to α, of the variances in (33) are used in (34) to obtain
the corresponding expressions of the conditional signal-to-noise ratio. The signal-to-noise ratio
is then used to derive the error probability for each user according to (23) or (37), depending
on the user's modulation format, and the unconditional error probability is obtained using (27).
Finally, the weighted BER is obtained using Eqns. (37) - (39).
The corresponding signal-to-noise ratio, ρ^I_{h,i}, in (36), after taking the expected value of the
variances with respect to α, is used to calculate the performance of the multistage IC. The
procedure for obtaining the BER of the whole system is then the same as described above.
7 Multicode Systems with IC
Multicodes is the second of the two considered multirate schemes [23]. In this scheme we let each
user transmit information simultaneously over as many parallel channels as required for a specific
data rate. Thus, a user employs several spreading codes and the information is transmitted
synchronously at a given base rate. If there are users with very high rates in the system, there
can be a large number of interfering signals. However, this affects the high-rate user itself very
little, since sequences with low cross-correlation or orthogonal sequences can be used, in which
case the synchronous signals interfere little or nothing with each other.
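The parallel-channel idea can be illustrated with the short sketch below, which splits a high-rate stream over Δ code channels transmitted synchronously at the base rate; the BPSK mapping and the round-robin split are simplifying assumptions of the sketch.

    import numpy as np

    def multicode_spread(bits, codes):
        """Spread a high-rate bit stream over several parallel code channels.
        `codes` is a (Delta, N) array of +/-1 spreading sequences; the stream is
        split over the Delta channels and the channels are summed, i.e. transmitted
        synchronously at the base symbol rate."""
        delta, n_chips = codes.shape
        symbols = 1 - 2 * np.asarray(bits).reshape(-1, delta)   # one row per base-rate symbol
        chips = symbols @ codes                                  # sum of the parallel channels
        return chips.ravel()

    # Example: 2 parallel channels with length-8 orthogonal codes
    # codes = np.array([[1, 1, 1, 1, 1, 1, 1, 1], [1, -1, 1, -1, 1, -1, 1, -1]])
    # multicode_spread([0, 1, 1, 0], codes)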
7.1 Synchronous Systems with Single-Stage NDDIC
We consider first a system with only synchronous transmission. We have Δ parallel channels with
identical channel parameters, since we assume that the signals are transmitted simultaneously
from the same location. In other words, the relative time delay and phase between the channels
are equal to zero. The variance of the noise component in (13) when both τ_{k,h} and φ_{k,h} are zero
is given by

    η^I_h = N_0 T/4 + (E_0 T P_av/(2N^2)) ∑_{k=h+1}^{K} α_k^2 [ θ^{II}_{k,h}(0) ]^2 + (1/(2N^2)) ∑_{j=1}^{h-1} η_j [ θ^{II}_{j,h}(0) ]^2,   (40)

where θ^{II}_{k,h}(0) is the periodic cross-correlation function [25]. The expression for random sequences
follows using E[(θ^{II}_{k,h}(0))^2] = N.
7.2 Multicode Systems with Single-Stage NDDIC
We consider a system with K users, where each user, k, transmits over Δ_k channels. The total
number of information-bearing channels is then equal to the sum of all Δ_k, and there are both
synchronous and asynchronous interferers. When deterministic sequences are used, all, or the
major part, of the interference comes from the asynchronous users, depending on whether the sequences
are orthogonal or not. Therefore, cancellation of parallel signals is excluded and instead we
consider a receiver where each user's parallel signals are decoded and cancelled simultaneously.
Combining (20) and (40) we can write the variance of the decision variable of the h-th user's
g-th signal in a QAM system with both asynchronous and synchronous transmission as

    η^I_{h_g} = N_0 T/4 + (E_0 T P_av/(6N^3)) ∑_{k≠h} ∑_{j=1}^{Δ_k} α_k^2 ( r^{II}_{k_j,h_g} + r^{QI}_{k_j,h_g} )
              + (E_0 T P_av/(2N^2)) ∑_{j≠g} α_h^2 [ θ^{II}_{h_j,h_g}(0) ]^2
              + (1/(6N^3)) ∑_{m cancelled} ( η^I_m r^{II}_{m,h_g} + η^Q_m r^{QI}_{m,h_g} ),   (41)

where r^{II}_{k_j,h_g} denotes the average interference between the in-phase signals, and k_j denotes the
j-th channel of the k-th user and h_g the g-th channel of the h-th user. The variance in (41) is used
together with Eqns. (22) - (24) to obtain the probability of error in stationary AWGN channels.
7.3 Multicode Systems with Multistage NDDIC
The use of orthogonal spreading sequences does not improve the performance considerably for
single-stage IC, since the interference caused by the remaining asynchronous users in each step
of the cancellation scheme determines the performance. However, when employing a multistage
scheme it is preferable to use orthogonal spreading sequences, because after the first complete
stage of IC the MAI has largely been exchanged for noise caused by imperfect cancellation. Hence, there
are no longer any remaining users that dominate the interference. The corresponding variance of
the decision variable for multistage IC and orthogonal spreading sequences is given by a similar
expression to (25), i.e.,
    η^I_{h_g,i} = N_0 T/4 + (1/(6N^3)) ∑_{j cancelled at stage i} ( η^I_{j,i} r^{II}_{j,h_g} + η^Q_{j,i} r^{QI}_{j,h_g} ) + (1/(6N^3)) ∑_{j remaining} ( η^I_{j,i-1} r^{II}_{j,h_g} + η^Q_{j,i-1} r^{QI}_{j,h_g} ),   (42)

where the sums run over the asynchronous signals only,
where g is one of the signals belonging to user h. The average error probability for stationary
AWGN channels is then obtained using (42) in Eqns. (22) - (24) as above.
7.4 Performance Analysis of Multicode Systems in Fading
The performance of a multicode system in fading is analysed using order statistics as described
in Section 5. The K users are ordered, each one with Δ_k parallel channels, according to their
total received power. The same pdf, f_{α_k}(x), and mean square value, E_α[α_k^2], are assigned to all
the Δ_k channels of user k and, for the single-stage IC, (41) is used to obtain the expected value
of the variance with respect to α. The error probability is then derived using Eqns. (22) - (24)
and (27).
The performance of the multistage scheme is evaluated for orthogonal spreading sequences.
Thus, the remaining noise consists only of the noise caused by imperfect cancellation of the
asynchronous users and Gaussian noise. The expected value with respect to α of the variance in
(42) is used to obtain the error probability using order statistics as described above.
8 Numerical Results
8.1 Simulations
All presented simulations are for asynchronous systems. We consider both stationary AWGN
channels and slow, frequency-nonselective Rayleigh fading channels. The stationary AWGN
channel corresponds to a system with perfect power control and, for the Rayleigh fading channel,
we assume average power control for distance and shadow fading.

Table 1: Parameter settings for the simulations.

  Simulation Parameters   Single/Mixed Mod.                   Multicodes
  Channel                 Stationary AWGN / Rayleigh fading   Stationary AWGN / Rayleigh fading
  Detection               Coherent                            Coherent
  Modulation              BPSK, QPSK, 16-QAM                  QPSK
  Channel Estimation      Known channel / Pilot symbols       Known channel / Pilot symbols
  Ranking                 Perfect ranking / MF outputs        Perfect ranking / MF outputs
  Signature Sequences     Random codes                        Orth. Gold codes
  Time Delays             Perfect estimates                   Perfect estimates
  Processing Gain         127                                 128
  Block Length

The average received power is assumed to be equal for all users in single-rate systems and it is
equal for all channels in multicode systems. In mixed modulation systems, the E_b/N_0 value is the
same for all users, which makes
the M-ary QAM users log 2 M times stronger in average power than the BPSK users. The IC
scheme is performed block-wise on the data and it is assumed that the channel does not change
during the transmission of a block, which corresponds to slow vehicle speed. We also assume that
pilot symbols are added between the data blocks in those cases where we consider estimation
of channel parameters. The estimate is then obtained from an average of the pilot symbols in
the beginning as well as at the end of each block of data. For QPSK modulation, the k th user's
channel estimate is obtained from

    α̂_k e^{jφ̂_k} = (1/(P T)) ∑_{p ∈ I_p} ( Y^I_{k,p} + j Y^Q_{k,p} ) / (1 + j),   (43)

where Y^I_{k,p} and Y^Q_{k,p} are the matched filter outputs when the received baseband signal is despread
with c^I_k and c^Q_k respectively. Moreover, I_p denotes the indices for the P complex pilot symbols,
which are defined as 1 + j. For QAM we get a similar expression. Furthermore, in simulations
using known channel parameters we assumed perfect ranking of the users. On the other hand, in
simulations with estimated parameters, ranking is performed using the pilot symbols to obtain
initial channel estimates. Note that ff k is only used to determine the decision boundaries of the
QAM users and not in the IC scheme. All the resulting parameter settings for the simulations
are given in Table 1.
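The pilot-based estimate of (43) can be sketched as below; the averaging and normalisation shown are assumptions of the sketch rather than the paper's exact expression.

    import numpy as np

    def pilot_channel_estimate(y_i, y_q, pilot_indices, symbol_scale=1.0):
        """Estimate a user's complex channel gain from pilot symbols.
        y_i, y_q: per-symbol matched filter outputs of the I and Q codes;
        pilots are assumed to carry the complex symbol 1 + j."""
        pilots = np.asarray(y_i)[pilot_indices] + 1j * np.asarray(y_q)[pilot_indices]
        g = np.mean(pilots / (1.0 + 1j)) / symbol_scale   # remove the known pilot, then average
        return np.abs(g), np.angle(g)                     # alpha_hat, phi_hat

    # Example: pilots at the start and end of a block
    # alpha_hat, phi_hat = pilot_channel_estimate(y_i, y_q, [0, 1, -2, -1])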
All the simulated systems are chip-rate sampled, which limits the possible time lags between
users to multiples of chip times. It should also be pointed out that, as a consequence of reduced
simulation time, the confidence level is not sufficiently high to give completely accurate results for
BER values below 10^{-3}. However, we have considered a higher confidence level in the simulations
of single-rate and mixed modulation systems in Figures 8, 9 and 10.
8.2 Stationary AWGN Channels
8.2.1 Multicode Systems
Figure 6: Performance of a multicode system with 15 QPSK users and two parallel channels per
user. The graph shows analytical and simulation results for one, two and five stages of IC and
simulation results for an MF receiver.

The performance of a multicode system in AWGN is shown in Figure 6. There are 15 asynchronous
QPSK users in the system. Each user transmits over two parallel channels and orthogonal
Gold codes of length 128 are used. The graph shows analytical and simulation results for
one, two and five stages of IC and we can see that a Gaussian approximation is too optimistic
for the single-stage IC, but the analytical and simulation results for two and five stages of IC
agree well. A Gaussian approximation is probably good for evaluating the performance of the
users cancelled first in the single-stage IC scheme, however, as the scheme proceeds, the Gaussian
approximation of the MAI becomes less accurate, especially for high E b =N 0 values. This is due to
the relatively strong interference, compared to the Gaussian noise, from a small group of users.
That is, the central limit theorem does not apply. For multistage IC a Gaussian approximation
is good, since then the interference is due only to imperfect cancellation of the users. As can be
seen in the graph, after the second stage of IC, most of the performance gain is acquired and the
performance is close to the single-user bound.
8.2.2 Mixed Modulation Systems
Figure 7: Performance of a mixed modulation system with 20 BPSK, 10 QPSK and 5 16-QAM
users. The graph shows analytical and simulation results for one, two and five stages of IC and
simulation results for an MF receiver.

Simulation as well as analytical results for a mixed modulation system are shown in Figure 7.
There are 20 BPSK, 10 QPSK and 5 16-QAM users in the system, and random sequences of
length 127 are used. The analytical results are obtained from an average of 100 rankings, where
each ranking gives a different ordering of the users according to the modulation format. That
is, when the users in the mixed modulation system are ranked according to the channel gain,
the order according to modulation format is completely random. As noted from the results, a
Gaussian approximation is not reliable for single-stage IC, and using higher modulation formats
increases the average BER compared to multicodes. The effect is noticed especially for high
values of E b =N 0 . However, after five stages of IC the analytical and simulation results agree
well. Studying the average BER of the BPSK, QPSK and 16-QAM users respectively we see
that the BER of the 16-QAM users clearly dominates the performance and causes the relatively
high average BER in Figure 7. Nonetheless, for the two- and five-stage IC we get a considerable
reduction in average BER compared to the single-stage IC and the MF receiver.
8.3 Flat Rayleigh Fading Channels
8.3.1 Single-Rate Systems
The average BER of a single-rate system with 20 QPSK users in Rayleigh fading is shown in
Figure 8. The length of the random sequences is 127. The results from one, two and five
stages of IC are compared with the single-user bound for BPSK users and the results from the
corresponding system employing a conventional detector. The graph shows that a Gaussian
approximation is too optimistic for a single-stage IC, but it works well for E b =N 0 values up to
20 dB. For multistage IC, the performance is close to the single-user bound and a Gaussian
approximation works better, even though the results do not agree perfectly. Nevertheless, in [15]
Patel and Holtzman show simulation results for BPSK users and single-stage IC that support
the analytical results, which are obtained employing a Gaussian approximation, surprisingly well.
We have, however, not been able to reproduce their results. The results for BPSK and 16-QAM
users are similar to the results presented in Figure 8 for QPSK users.

Figure 8: Performance of a QPSK system with 20 users in Rayleigh fading. Analytical and
simulation results for an MF receiver and an IC with one, two and five stages are shown.

The performance of a
multistage IC is in both cases close to the respective single-user bound.
8.3.2 Mixed Modulation Systems
Average analytical performance and simulation results for a mixed modulation system with 20 BPSK,
10 QPSK and 5 16-QAM users and random sequences of length 127 are shown in Figure 9.
The results show that the analysis is too optimistic for the single-stage IC but the accuracy
improves with an increasing number of stages. After five stages of IC the analytical performance
is 1 dB from the single-user bound. The simulation results for the same mixed modulation system
are shown in Figure 10, where average system performance is presented together with average
BER for each modulation format. The figure shows great improvement in performance for each
additional stage of IC and after five stages the average BER of the different users is close to their
respective single-user bound.
8.3.3 Multicode Systems
Figure 11 depicts analytical and simulation results for a multicode system with 15 QPSK users,
two parallel channels per user and orthogonal Gold codes of length 128. The correspondence
between the curves is relatively good for single-stage IC, but for multistage IC, the results agree
well for E b =N 0 values up to 20 dB. In this region, the multicode system with multistage IC has
a performance within 1 dB of the single-user bound.
Figure 9: Performance of a mixed modulation system with 20 BPSK, 10 QPSK and 5 16-QAM
users in Rayleigh fading. Both analytical and simulation results are shown.

In Figure 12 we compare the performance of a multicode system (15 QPSK users and two
parallel channels per user) with the performance of two single-rate systems (30 QPSK users and
15 16-QAM users). E_b/N_0 is equal for all users in the systems. The simulation results
for one, two and five stages of IC show that QPSK modulation together with two parallel channels
is preferable to 16-QAM. However, it should be noted that the 16-QAM system outperforms
the other two systems for a two-stage IC in the high E b =N 0 region, where the interference is
limiting instead of the Gaussian noise. The performance of the 16-QAM system is then close to
its single-user bound. The other two systems, the asynchronous QPSK system and the multicode
system, have almost the same performance. They perform well and both systems are within 1
dB of the single-user bound for a five-stage IC. The difference in performance for high E b =N 0
values is presumably mainly due to inaccuracy in the simulation results.
8.3.4 Systems with Parameter Estimation
In simulations with channel estimation, the channel parameters were estimated using pilot symbols
according to (43). In this case we did not assume perfect ranking. The order of the users was
instead determined from initial channel estimates as described in Section 3.3. In Figure 13, we
compare the simulated performance of a multicode system using estimated channel parameters
to a system where the channel parameters are assumed to be known. The degradation due to
estimated channel parameters is several dB for a single-stage IC but for two- and five-stage ICs
the degradation is only about 1 dB. It can also be noted that the degradation in systems employing
a conventional detector is very large. Note that E b is the energy per bit on the channel.
That is, there is no compensation for the energy used for the pilot symbols.
Figure 10: Performance of a mixed modulation system with 20 BPSK, 10 QPSK and 5 16-QAM
users in Rayleigh fading. Simulation results show average system performance and average BER
for each modulation format.
9 Performance Improvements for M-ary QAM Users in Mixed
Modulation Systems
A disadvantage of using mixed modulation when handling multiple data rates is that high-rate
users (16-QAM users) have a higher average BER than low-rate users (BPSK and QPSK users)
for the same E b , as indicated in Figure 10. A possible way to reduce the BER for the 16-QAM
users is to increase their transmitted power such that they are received with higher E b than the
BPSK and QPSK users.
The average BER for a mixed modulation system where the users have unequal energy per
bit is shown in Figure 14. The system has K_1 BPSK, K_2 QPSK and K_3 16-QAM users and the
length of the signature sequences is 127. The E_b/N_0 value for the 16-QAM users is increased in
steps of 2 dB relative to the E_b/N_0 for the other users, which is kept constant. This fixed
value of E_b/N_0 was chosen since it seemed to give a relevant bit error probability for the
system. We have not evaluated the performance for other fixed E_b/N_0 values because of the
time-consuming simulations. The graph shows that the average BER of the 16-QAM users may be
decreased, by increasing the E b of these users, with almost no degradation of the performance of
the BPSK and QPSK users. For the single-stage IC a minor degradation can be noticed for the
BPSK and QPSK users but for the five-stage IC the performance of the BPSK and QPSK users
is unchanged. That is, after five stages of IC, most of the interference is removed and the signals
are separated in the signal space independently of their power. Accordingly, increasing the power
of the 16-QAM users with an amount corresponding to an increase of E b =N 0 by approximately 2
dB, the average BER for the 16-QAM users is the same as the average BER for the BPSK and
QPSK users for both one and five stages of IC.
Figure 11: Performance of a multicode system with 15 QPSK users and two parallel channels per
user. Simulation and analytical results for an IC with one, two and five stages and simulation
results for an MF receiver are shown in the graph.
10 Conclusions
The development in mobile communications makes it essential to evolve an efficient system capable
of supporting both multiuser detection and variable data rates for the users. The optimum
detector is too complex to be implemented in a practical system and the conventional matched
filter detector does not perform well without stringent power control. Suboptimal multiuser detectors
that has less computational complexity than the optimal detector, but performs better
than the conventional detector, are therefore required.
In this paper we have demonstrated the use of M-ary rectangular QAM with multistage
non-decision directed interference cancellation (NDDIC), which has computational complexity
that is linear in the number of users and stages. The two multiple data rate schemes, mixed
modulation and multicodes, were analysed for both stationary AWGN channels and flat Rayleigh
fading channels, and analytical performance estimates using a Gaussian approximation of the
MAI were presented. The analytical results for flat Rayleigh fading channels agreed well with the
results from computer simulations for E b =N 0 values up to 20 dB and the correspondence between
the results improved with increasing number of IC stages. The performance of the multistage IC,
even for systems with many users, was then close to the single-user bound. Consequently, the
multistage IC scheme yields a considerable increase in performance compared to the conventional
matched filter detector.
Considering a mixed modulation system, we found that the users have different average BER
depending on their modulation format. That is, the BPSK and QPSK users have lower BER
than the 16-QAM users, like in ordinary single-user transmission. However, a small increase in
received energy per bit for the 16-QAM users (relative to the BPSK and QPSK users) decreases
Figure 12: Performance of three different systems in Rayleigh fading. Simulation results for a
multicode system (15 QPSK users, two parallel channels per user) compared with two single-rate
systems (30 QPSK users and 15 16-QAM users).
their BER without great effect on the other users in the system. On the other hand, if we consider
a multicode system, the users' average performance is equal when all users have the same number
of parallel channels. To take advantage of the synchronous signalling between a user's parallel
channels, orthogonal signature sequences can be used, which improves the overall performance
and makes the high-rate users perform better than the low-rate users. To conclude, comparing the performance of the two multirate schemes for the same number of IC stages, multicodes is the preferable scheme. However, the greatest system flexibility is obtained if the two schemes
are combined in such a way that for each new user that is added to the system a decision is made
in favour of a number of parallel channels and/or a certain modulation format.
Future work within this project will be to study multistage IC schemes together with multi-path
Rayleigh fading channels. The inclusion of channel coding and channel estimation will also
be investigated. Some of this work has been carried out since this paper was first submitted. It
can be found in [24, 29, 30].
Acknowledgment
The authors would like to acknowledge Karim Jamal at Ericsson Radio Systems for his initial
assistance in obtaining the simulation results.
This work was supported by the Swedish National Board of Industrial and Technical Development
project 9303363-5.
Figure 13: Performance of a multicode system with 15 QPSK users and two parallel channels per user (orthogonal Gold sequences). Simulation results for known and estimated channel parameters for an IC with one, two and five stages and an MF receiver are shown in the graph.
--R
"Design study for a CDMA-based third-generation mobile radio system,"
"On the capacity of a cellular CDMA system,"
"Near-far resistance of multiuser detectors in asynchronous chan- nels,"
Optimum multi-user signal detection
"Minimum probability of error for asynchronous Gaussian multiple-access chan- nels,"
"Linear multiuser detectors for synchronous code-division multiple-access channels,"
"Multistage detection in asynchronous code-division multiple access communications,"
"A family of suboptimum detectors for coherent multi-user communications,"
"Multiuser detection for CDMA systems,"
"A spread-spectrum multiaccess system with cochannel interference cancellation for multipath fading channels,"
"MMSE interference suppression for direct-sequence spread spectrum CDMA,"
"Decorrelating decision-feedback multi-user detector for synchronous code-division multiple access channel,"
"Analytic limits on performance of adaptive multistage interference cancellation for CDMA,"
"Analysis of successive interference cancellation in M-ary orthogonal DS-CDMA system with single path rayleigh fading,"
"Analysis of a simple successive interference cancellation scheme in a DS/CDMA system,"
"CDMA with interference can- cellation: A technique for high capacity wireless systems,"
"Pilot symbol-assisted coherent multi-stage interference canceller for DS-CDMA mobile radio,"
"Multi-stage serial interference cancellation for DS-CDMA,"
Interference cancellation for DS/CDMA systems in flat fading channels
"Successive interference cancellation schemes in multi-rate DS/CDMA systems,"
"Multistage interference cancellation in multi-rate DS/CDMA systems,"
"Decorrelating detectors for dual rate synchronous DS/CDMA systems,"
"On schemes for multirate support in DS-CDMA systems,"
"Convergence of linear successive interference cancellation in CDMA,"
"Performance evaluation for phase-coded spread-spectrum multiple-access communication-Part I: System analysis,"
"Performance of binary and quaternary direct-sequence spread-spectrum multiple-access systems with random signature sequences,"
"Multistage interference cancellation in multirate DS/CDMA on a mobile radio channel,"
"Joint interference cancellation and Viterbi decoding in DS-CDMA,"
--TR
Minimum probability of error for asynchronous Gaussian multiple-access channels
On Schemes for Multirate Support in DS-CDMA Systems
Analysis of Successive Interference Cancellation in M-ary Orthogonal DS-CDMA System with Single Path Rayleigh Fading | multiple data rates;multi-user detection;multicodes;direct-sequence code-division multiple access DS-CDMA;mixed modulation;interference cancellation |
609397 | A Channel Sharing Scheme for Cellular Mobile Communications. | This paper presents a channel sharing scheme, Neighbor Cell Channel Sharing (NCCS) , based on region partitioning of cell coverage for wireless cellular networks. Each cell is divided into an inner-cell region and an outer-cell region. Cochannel interference is suppressed by limiting the usage of sharing channels in the inner-cell region. The channel sharing scheme achieves a traffic-adaptive channel assignment and does not require any channel locking. Performance analysis shows that using the NCCS scheme leads to a lower call blocking probability and a better channel utilization as compared with other previously proposed channel assignment schemes. | Introduction
One of the major design objectives of wireless cellular communication systems is high network
capacity and flexibility, while taking into account time-varying teletraffic loads and radio link
quality. The limited radio frequency spectrum requires cellular systems to use efficient methods
to handle the increasing service demands and to adapt system resources to various teletraffic
(referred to as traffic) in different cells. Many current cellular systems use the conventional
radio channel management, fixed channel assignment (FCA), where a set of nominal channels
is permanently allocated to each cell for its exclusive use according to traffic load estimation,
cochannel and adjacent channel interference constraints [1]. Due to the mobility of users, the
traffic information is difficult to accurately predict in any case. As a result, the FCA scheme is
not frequency efficient in the sense that the channel assignment cannot adapt to the dynamically
changing distribution of mobile terminals in the coverage area. In order to overcome the deficiency
of FCA, various traffic-adaptive channel assignment schemes have been proposed, such as dynamic
channel assignment (DCA) [2]-[4] and hybrid channel assignment (HCA) [5]. In centralized DCA
schemes, all channels are grouped into a pool managed by a central controller. For each call
connection request, the associated base station will ask the controller for a channel. After a
call is completed, the channel is returned to the channel pool. In distributed DCA schemes,
a channel is either selected by the local base station of the cell where the call is initiated, or
selected autonomously by the mobile station. A channel is eligible for use in any cell provided
that signal interference constraints are satisfied. Since more than one channel may be available in
the channel pool to be assigned to a call when required, some strategy must be applied to select
the assigned channel. Although the DCA schemes can adapt channel assignment to dynamic
traffic loads, it can also significantly increase network complexity due to cochannel cell locking
and other channel management, because it is a call-by-call based assignment. In order to keep
both cochannel interference and adjacent channel interference under a certain threshold, cells
within the required minimum channel reuse distance from a cell that borrows a channel from the
central pool cannot use the same channel. DCA also requires fast real-time signal processing and
associated channel database updating. A compromise between the radio spectrum efficiency and
channel management complexity is HCA, which combines FCA with DCA. In HCA, all available
channels are divided into two groups, FCA group and DCA group, with an optimal ratio. It has
been shown that both DCA and HCA can achieve a better utilization of radio channel resources
than FCA in a light traffic load situation, due to the fact that both schemes can adapt to traffic
load dynamics. However, they may perform less satisfactorily than FCA in a heavy traffic load
situation due to the necessary channel locking [2]-[5]. Another approach to adaptive channel
assignment is channel borrowing, in which the channel resources are divided into borrowable and
non-borrowable channel groups [6]-[7]. The non-borrowable group is assigned to a cell in the same
way as FCA. When all of its fixed channels are occupied, a cell borrows channels from its neighbor
cells which have a light traffic load. More recently, a channel borrowing scheme called channel
borrowing without locking (CBWL) is proposed [8], where the C channels of each base station are divided into seven distinct groups. The C_0 channels of group 0 are reserved for exclusive use of the given cell. The remaining (C − C_0) channels of the other six groups can be borrowed by the six adjacent cells respectively, one group by one adjacent cell. Each borrowed channel is
used with a limited power level. That is, the borrowed channel is directionally limited as well as
power limited. Therefore, the channel locking for cochannel cells is not necessary.
In this paper, we propose a channel sharing scheme based on a channel sharing pool strategy.
The scheme can adapt to traffic dynamics so that a higher network capacity can be achieved.
The method partitions cell coverage region to eliminate the cochannel interference due to the
dynamic channel sharing; therefore, it does not need any channel locking. In addition, because
the borrowable channels are a portion of total available channels and are shared only among
adjacent cells, the channel sharing management is relatively simple as compared with that of DCA
and HCA. Compared with the CBWL borrowing scheme, the advantage of the newly proposed
scheme is the relaxed constraint on directional borrowing, which results in a higher degree of
traffic adaptation and a lower call blocking probability. This paper is organized as follows. In
Section 2, after studying the cochannel interference issue, we calculate the cochannel interference
spatial margin for cell region partitioning, and then propose the Neighbor Cell Channel Sharing
(NCCS) scheme for traffic adaptive channel assignment. The adjacent channel interference using
the NCCS scheme is also discussed. In Section 3, the call blocking probability using the NCCS
scheme is derived. Numerical analysis results are presented in Section 4, which demonstrate the
performance improvement of the NCCS over that of previously proposed schemes including FCA,
HCA, and CBWL. The conclusions of this work are given in Section 5.
2 The Neighbor Cell Channel Sharing (NCCS) Scheme
A. Cochannel Interference
A cellular network employs distance separation to suppress cochannel interference. Fig. 1 shows
the frequency reuse strategy for a cellular system with frequency reuse factor equal to 7, where
the shadowed cells are the cochannel cells of Cell (17) using the same frequency channels (as an
example). We assume that the received carrier-to-interference ratio (CIR) at a mobile station
(e.g., in Cell (17)) caused by the base stations in the cochannel cells is, on the average, the same
as the CIR at the base station of Cell (17) caused by the mobile stations in the cochannel cells.
The CIR can be calculated by [9]

CIR = R^{-γ} / Σ_i D_i^{-γ} = 1 / Σ_i q_i^{-γ},   (1)

where R is the radius of each cell, D_i is the distance between the cell of interest and its ith cochannel cell, q_i = D_i/R, and γ is a propagation path-loss slope determined by the actual terrain environment (usually γ is assumed to be 4 for cellular radio systems). In a fully equipped hexagonal cellular system, there are always six cochannel cells in the first tier. It can be shown that the interference caused by cochannel cells in the second tier and all other higher-order tiers is negligible as compared with that caused by the first-tier cochannel cells [10]. As a result, if we consider the cochannel interference only from the cells in the first tier, q_i ≈ q, i = 1, 2, ..., 6, in equation (1), where q = D/R is a constant.
In order to achieve a probability of at least 90% that any user can achieve satisfactory radio link quality for voice service, it requires that the CIR value be 18 dB or higher, which corresponds to q_i ≥ 4.6 for γ = 4. If a channel of Cell (17) is lent to any of its six neighbor cells, then the cochannel interference to and from any of the six cochannel cells in the first tier may increase. For example, if a channel of Cell (17) is lent to Cell (24), the distances between the borrowing cell and its cochannel cells are reduced, taking values such as 3R and 3.5R, with the shortest one being 3R. The channel borrowing of Cell (24) from Cell (17) reduces the CIR value of the channel in Cell (24) from 18 dB to 16 dB, if R is kept unchanged. A similar degradation of radio link quality also happens in Cell (38), where the CIR value is reduced to 17.4 dB. The decrease of the CIR value is due to the decrease of the q_i values.
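As a quick numerical illustration of equation (1) (this example is added here and is not part of the original paper), the short Python sketch below evaluates the CIR for the nominal case q_i = 4.6 for all six first-tier interferers and for a hypothetical borrowing case in which one cochannel distance is reduced to 3R; the specific distance values are assumptions chosen only to reproduce the order of magnitude of the degradation described above.

import math

def cir_db(q_values, gamma=4):
    """CIR of equation (1), 1 / sum(q_i^-gamma), returned in dB."""
    cir = 1.0 / sum(q ** (-gamma) for q in q_values)
    return 10.0 * math.log10(cir)

# Nominal reuse pattern: all six first-tier cochannel cells at q_i = D_i/R = 4.6.
print(cir_db([4.6] * 6))           # about 18.7 dB
# Hypothetical borrowing case: one cochannel distance reduced to 3R.
print(cir_db([3.0] + [4.6] * 5))   # roughly 16 dB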
B. Cell Region Partition
From the above discussion, we conclude that any reduction of the q i value due to a channel
borrowing will degrade the radio link quality, because a channel borrowing will result in a decrease
of some D i values. One way to keep the q i value unchanged even with channel borrowing is to
reduce the value of R accordingly when D i is reduced, which can be implemented by reducing the
transmission power. By reducing R to R_r = rR (0 < r < 1), a borrowed channel can be used only inside the circle (called the inner-cell region) centered at the base station with radius equal to R_r, in all cochannel cells. In other words, each cell is divided into two regions, the inner-cell region and the rest (called the outer-cell region). For example, if Cell (24) borrows a channel from Cell (17), with the shortest cochannel distance D = 3R, r should be 0.652 in order to ensure that q_i ≥ 4.6, since 3R/(0.652R) ≈ 4.6. The overall CIR value will then be greater than 18 dB. Correspondingly, all the C channels of each base station are also divided into two groups: one consists of N nominal channels to be used exclusively in the cell (in both the inner-cell and outer-cell regions), and the other consists of the rest S (= C − N) sharing channels to be used in the inner-cell regions of the given cell and its six neighbor cells. For each cell, there is a pool of sharing channels to be used in its inner-cell region. The sharing pool consists of all the available sharing channels of the cell and the neighbor cells.
C. The Proposed Channel Sharing Scheme
In the following, it is assumed that: i) all base stations work in the same condition: omnidirectional antennas are used, and transceivers are available at a given carrier frequency; ii) only cochannel
interference and adjacent channel interference are considered, and all other kinds of noise and
interferences are neglected; and iii) neighbor cells can communicate with each other. Without loss
of generality, Fig. 2 shows the flowchart of the channel assignment for a two-cell network, where
"CH" stands for "channel". Using NCCS, the channel resources in each cell consist of N nominal
channels to serve the users in the whole cell as conventional FCA and S sharing channels to serve
users in the inner-cell regions. When a call connection is requested, a nominal channel will be
assigned to it. In the case that all the nominal channels of the cell are occupied, if the mobile is in
the inner-cell region, then a channel from the sharing pool will be used; otherwise, if the mobile
is in the outer-cell region and there is a mobile in the inner-cell region using a nominal channel,
then an event of channel swapping occurs: the inner-cell mobile switches to a channel from the
sharing pool and gives up its original nominal channel to the new call from the outer-cell region.
The purpose of the channel swapping is to make room for new calls so that the system channel
resources can be fully deployed. Note that when there is no channel available in the sharing pool,
channel swapping may be carried out in a neighbor cell to allow for channel borrowing. The call
will be blocked if i) all the nominal channels are occupied and no channel swapping is possible
when the mobile is in the outer-cell region; ii) all the nominal channels and sharing channels in
the pool are occupied when the mobile is in the inner-cell region. When a connected user moves
from the outer-cell region into the inner-cell region, the transmitters of both mobile terminal
and base station will reduce the transmitting power automatically since the mobile terminal gets
closer to the base station. When an inner-cell mobile terminal using a sharing channel moves
into the outer-cell region, not only the associated power control occurs, but also an intra-cell
handoff happens because the sharing channel cannot be used in the outer-cell region. If there is
no nominal channel available for the intra-cell handoff, the link will be forced to drop.
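The admission logic of the flowchart can be summarised in code. The Python sketch below is an illustration only (the class and attribute names are hypothetical, and call departures and neighbor-cell swapping are omitted); it shows the order in which a new call tries a free nominal channel, a sharing-pool channel, and finally channel swapping.

class Cell:
    """Minimal model of one NCCS cell, for illustration only."""
    def __init__(self, n_nominal, sharing_pool):
        self.free_nominal = n_nominal          # unused nominal channels
        self.sharing_pool = sharing_pool       # shared dict: channel id -> "on"/"off"
        self.inner_on_nominal = 0              # inner-cell calls holding a nominal channel

    def _take_sharing_channel(self):
        for ch, state in self.sharing_pool.items():
            if state == "on":
                self.sharing_pool[ch] = "off"  # borrowed: register turned "off"
                return ch
        return None

    def admit_call(self, in_inner_region):
        # 1) Any call may use a free nominal channel.
        if self.free_nominal > 0:
            self.free_nominal -= 1
            if in_inner_region:
                self.inner_on_nominal += 1
            return "nominal"
        # 2) Inner-cell calls may use a channel from the sharing pool.
        if in_inner_region:
            return "sharing" if self._take_sharing_channel() else "blocked"
        # 3) Outer-cell call: swap an inner-cell call onto a sharing channel
        #    to free its nominal channel for the new outer-cell call.
        if self.inner_on_nominal > 0 and self._take_sharing_channel():
            self.inner_on_nominal -= 1
            return "nominal-after-swap"
        return "blocked"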
In implementing the NCCS, each borrowable channel has an "on/off" register in the associated
channel sharing pools to indicate whether the channel is available. During the channel
borrowing from a neighbor cell, the borrowed channel register is turned to "off" in the sharing
pools. For example, when Cell (24) borrows a channel from Cell (17), the channel register
is turned "off" in the sharing pools of Cell (17), Cell (10), Cell (11), Cell (16), Cell (18), and
Cell (25). When the borrowed channel is returned, the register is then turned back to "on" in
the pools. One aspect that should be taken into account is the potential borrowing conflict, that
is, two adjacent cells try to borrow the same channel from the different cells at the same time.
This will violate the channel reuse distance limitation. To prevent the borrowing conflict, some
selective borrowing algorithms should be introduced such as borrowing with ordering [11], borrowing
from the richest [12]. The cell selectivity for borrowing can achieve higher capacity at the
expense of higher complexity. One simple approach is to use directional borrowing restriction. For
example, in Fig. 1, when Cell (24) borrows a channel from Cell (17), it is required that Cell (23)
not borrow the same channel from Cell (22), and Cell (31) and Cell (32) not borrow the same
channel from Cell (38). All other neighbor cells of the six cochannel cells are allowed to borrow
the same channel. In order words, the borrowing restriction is limited only to those cells affected
by the borrowing of Cell (24) from Cell (17). The restriction can be implemented by turning "off"
the register of the channel in the sharing pools of Cell (23), Cell (31) and Cell (32). It should
be mentioned that in the CBWL scheme [8], directional lending is used to avoid the borrowing
conflict, where each cell can borrow up to one sixth of the borrowable channels from its neighbor
cells. As a result, using the NCCS scheme each cell has a much larger channel sharing pool than
that using the CBWL scheme under the same condition of the channel resource arrangement. It
is expected that the NCCS scheme can adapt channel assignment to traffic dynamics to a larger
extent as compared with the CBWL scheme, leading to a lower call blocking probability.
D. Adjacent Channel Interference
Adjacent channel interference is a result of the splatter of modulated RF signals. Because of the mobility of network users, the distance between a mobile terminal and its base station changes
with time. At each moment, some mobile terminals are close to the base station and others
are not. Considering the receiver at the base station, the adjacent channel interference may
not be a problem if the signals from the desired channel and both its adjacent channels are
received with the same power level. The bandpass filter of the receiver should provide adequate
rejection to the interference from the adjacent channels. However, the problems may arise if two
users communicate to the same base station at significantly different transmitting power levels
using two adjacent channels. Signal from the adjacent channel can be stronger than that from
the desired channel to such a degree that the desired signal is dominated by the signal carried
by the adjacent channel. This situation is referred to as "near-far" effect in wireless mobile
communication systems. The larger the difference between the near-far distances, the worse the
adjacent channel interference in radio links. Severe adjacent channel interference may occur when
the difference in the received power levels exceeds the base station receiver's band rejection ratio.
Therefore, channel separation is required, which is primarily determined by the distance ratio, the path-loss slope γ, and the receiver filter characteristics. The required channel separation, in terms of the channel bandwidth W, grows with the near-far ratio (d_a/d_b)^γ and decreases with L, where L in dB is the falloff slope outside the passband of the receiver bandpass filter, d_a is the distance between the base station and mobile terminal M_a using the desired channel, and d_b is the distance between the base station and mobile terminal M_b using one of the adjacent channels. In order to overcome
the adjacent channel interference, FCA achieves channel separation by channel interleaving in
such a way that there is sufficient channel guard band between any two channels assigned to
a base station. For a call connection, the user can just randomly choose any channel with the
strongest signal from all the available channels, without violating the adjacent channel interference
constraint.
With channel borrowing, if a cell has traffic congestion, it will borrow channels from its
neighbor cell(s). A borrowed channel may be located in the channel guard band, which can
introduce excessive interference to the desired signal when the difference between d a and d b is
large. Therefore, two aspects need to be taken into account with channel borrowing: one is
the cochannel interference issue as to whether channel borrowing is allowed; the other is the
adjacent channel interference issue as to whether the borrowed channel can provide satisfactory
link quality. For the NCCS, it has been shown that with dynamic power control, if d_a/d_b ≤ 16, no extra channel separation is required between any two channels assigned to the same base station in order to overcome the adjacent channel interference [10]. If we consider the users, M_a and M_b, both in the inner-cell region, then the requirement d_a/d_b ≤ 16 is equivalent to R_r/R_0 ≤ 16, where R_0 is the minimal distance between a mobile terminal and the base station. Under this condition, no channel spacing is needed among the sharing channels used in the inner-cell regions. With the cell radius R_r = 0.652R, adjacent channel interference does not affect the channel interleaving and the target radio link quality when channel borrowing happens in the NCCS operation as long as R_0 ≥ 0.04075R.
Compared with other channel assignment schemes, the NCCS scheme offers the following
advantages: i) it ensures satisfactory link quality (taking into account both cochannel interference
and adjacent channel interference) for both nominal channels and borrowable channels, which
cannot be achieved using directed retry and its enhanced schemes [6]; ii) it does not need global
information and management of channel assignment which is required when using DCA schemes,
resulting in simplicity of implementation; iii) each base station is required to operate only on its
nominal channels and the borrowable channels of its sharing pool, which is a much smaller set as
compared with that when using DCA schemes; and iv) no channel locking is necessary, which leads
to a better utilization of the channel resources and a simpler management for channel assignment
as compared with DCA and HCA.
Performance Analysis
In the following performance analysis of the NCCS scheme, we consider that the network operates
on a blocked call cleared (BCC) basis, which means once a call is blocked it leaves the system.
Under the assumption that the number of users is much larger than the number of channels
assigned to a base station, each call arrival is independent of the channel occupancy at the base
station [13]. Handoff calls are viewed as new calls.
A. Basic Modeling
In general, requests for radio channels from mobile users can be modeled as a Poisson arrival
process. The occupancy of radio channels at a base station conventionally is considered as a
"birth and death" process with states f0; is the number of total channels
assigned to the base station. A new call arrival enters the system with a mean arrival rate - and
leaves the system with a mean departure rate -. Defining the traffic density A = -, it can be
derived that the probability of the channel occupancy being at state j is [13]
From equation (2), the probability of radio channel resource congestion (i.e, the call blocking
probability) is
which provides a fundamental measure of the mobile cellular network performance. Equation (3)
is usually referred to as
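As a side note (not from the paper), the Erlang B blocking probability of equation (3) is usually evaluated with the numerically stable recursion P_B(0) = 1, P_B(k) = A·P_B(k−1)/(k + A·P_B(k−1)), k = 1, ..., C; a minimal Python implementation is sketched below.

def erlang_b(traffic_a, channels_c):
    """Call blocking probability of equation (3) for traffic density A and C channels."""
    pb = 1.0
    for k in range(1, channels_c + 1):
        pb = traffic_a * pb / (k + traffic_a * pb)
    return pb

# Example: 20 Erlangs offered to a cell equipped with 28 channels.
print(erlang_b(20.0, 28))   # call blocking probability of equation (3)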
B. Call Blocking Probability with Channel Sharing
In order to formulate the call blocking probability for the NCCS scheme, we consider the channel
sharing between two adjacent cells without loss of generality. Both base stations are equipped
with the same numbers of nominal channels and borrowable channels respectively. The uniform
topology of this scenario is shown in Fig. 3. For Cell (i) (where i = 1, 2), let x_i denote the traffic density in the inner-cell region; y_i the traffic density in the outer-cell region; u_i ∈ {0, 1, 2, ..., S} the channel occupancy in the inner-cell region; and v_i ∈ {0, 1, 2, ..., N} the channel occupancy in the outer-cell region. We assume that u_i and v_i are independent random variables. For each cell, the channel occupancy in the inner-cell region may be different from that in the outer-cell region, and so is the call blocking probability. Let P_B(C_{1,in}) and P_B(C_{1,out}) denote the blocking probability of Cell (1) for the inner-cell and outer-cell regions, respectively. Due to the channel sharing,
the blocking probabilities of Cell (1) depend on the channel occupancy of the neighbor cell,
Cell (2), which is at one of the following two states:
(I) The Cell (2) channel occupancy is within its C equipped channels. In this case, we say that Cell (2) is underflow with a ∈ {0, 1, ..., S} unused and borrowable channels. The probability of Cell (2) being underflow is denoted P(C_{2,a}) and given by (4).
(II) The Cell (2) channel occupancy is over its assigned C channels. In this case, we say that Cell (2) is overflow with b ∈ {1, ..., S} channels borrowed from Cell (1). The probability of Cell (2) being overflow is denoted P(C_{2,b}) and given by (5).
It can be verified that the probabilities in (4) and (5) sum to one over all a and b.
For state (I), the blocking probability of the outer-cell region of Cell (1) (represented by outer-cell (1) for simplicity) is given by (6). The call blocking probability of the inner-cell region of Cell (1) (represented by inner-cell (1)) is not only conditioned upon the Cell (2) channel occupancy, but also upon the situation of outer-cell (1). If there are n unused channels out of the N nominal channels in Cell (1), the inner-cell users can use them. Under the assumption that the channel occupancies in outer-cell (1) and Cell (2) are independent, we obtain the conditional blocking probability of inner-cell (1) in (7). For state (II), the blocking probability of outer-cell (1) is given by (8), and the blocking probability of inner-cell (1) by (9). Using equation (2) to compute the state probabilities of Cell (1) and Cell (2), the four conditional probabilities of equations (6)-(9) can be obtained. Then, using equations (4)-(5), the blocking probabilities of both the inner-cell and outer-cell regions of Cell (1) can be calculated according to the theorem of total probability, yielding (10)-(11).
C. Effect of User Mobility
Taking user mobility into consideration, the number of mobile terminals in a cell at a given
moment is a random variable. For the two-cell network, the overall traffic load is dynamically
distributed over the two cells. The network is designed in such a way that each cell has a fair
share of resources depending on its traffic load in a long term. However, the traffic load over each
cell is a random process. Let A_i (≜ x_i + y_i) denote the total traffic density of Cell (i) for i = 1 and 2, and let Ã = A_1 + A_2 denote the overall traffic density of the cellular network. Given the number of subscribers, the traffic of the whole network, Ã, is a constant. Let A_1 = α Ã and A_2 = (1 − α) Ã, where α ∈ [0, 1] is a random variable referred to as a traffic load distributor whose value indicates how the traffic load is split between Cell (1) and Cell (2). If Cell (1) and Cell (2) are identical (Fig. 3), the traffic load distributor should have a mean value ᾱ = 0.5. The following relations are considered for the blocking probability equations: x_i = ρ_i A_i and y_i = (1 − ρ_i) A_i, where ρ_i ∈ [0, 1] (referred to as an interior distributor inside Cell (i)) is a random variable with a mean value of

ρ̄_i = (coverage area of inner-cell (i)) / (coverage area of the whole Cell (i)),  i = 1, 2.
Using these two distributors, the blocking probabilities in equations (10)-(11) can be denoted as functions of the form P_B(C_{1,outer}) = g(α, ρ, Ã) and P_B(C_{1,inner}) = h(α, ρ, Ã), where g(·) and h(·) denote measurable functions. If we assume that mobile terminals are uniformly distributed in the coverage area and the cell sizes are the same, then ρ̄_1 = ρ̄_2 = ρ̄. Furthermore, the number of terminals in each cell or cell region follows a binomial distribution, from which we can obtain the joint probability distribution function p(α, ρ) of the distributors α and ρ. As a result, the blocking probabilities related to the overall traffic density Ã and the design parameters N and S, P_B(C_{1,outer}) and P_B(C_{1,inner}), are obtained by averaging g(·) and h(·) over p(α, ρ).
In reality, cellular network service operators will try to achieve service fairness, that is, appropriate channel resources will be allocated to each base station in order to obtain the same call blocking probability over all the cells in the service area. Therefore, the system is designed to have P_B(C_{1,inner}) = P_B(C_{1,outer}). As a result, the blocking probability in terms of the traffic density Ã and channel resources N and S is given by (17), where p_α(α) is the probability distribution function of α. The analysis for the two-cell network can be extended to a multiple-cell network as shown in Fig. 1, where for a cell under consideration all of its six neighbor cells can be equivalently modeled by a composite neighbor cell.
4 Numerical Results and Discussion
The numerical analysis in this section is to provide a performance comparison between the NCCS
and other channel assignment schemes. The following assumptions are made in the analysis: i)
All the base stations are equipped with the same numbers of nominal channels and borrowable channels; ii) Each new call is initiated equally likely from any cell and is independent
of any other calls. Except in the analysis of the bounds of the blocking probability, the
following assumptions are also made: iii) Taking into account the possible borrowing conflict, the
channel sharing pool for each cell consists of available borrowable channels of the cell and four
of its neighbor cells; iv) The traffic loads in all the cells are statistically the same. Under the
assumptions, given the total traffic loads in the network, the traffic load distributor for the cell
under consideration follows a binomial distribution.
Fig. 4 shows the call blocking probabilities of the FCA, HCA [5] and NCCS schemes. In FCA, each base station has 28 nominal channels; in HCA, each base station has 20 FCA channels and 8 DCA channels; and in NCCS, each base station has the same total of 28 channels divided into N nominal and S sharing channels. In Fig. 4 and all the following figures, A is the traffic density for each cell. The performance of the FCA scheme is calculated based
on Equation (3), while the performance of the NCCS scheme is based on Equation (17). It is
observed that: i) At a low traffic load, HCA has a much lower blocking rate than FCA; however,
as the traffic load increases, the advantage of HCA over FCA disappears. In fact, HCA may have
a higher blocking probability, due to the necessary DCA channel locking; ii) The NCCS scheme
outperforms the HCA scheme because the NCCS scheme can adapt to traffic dynamics without
channel locking; iii) The NCCS scheme performs much better than the FCA scheme, but the
improvement is reduced as the traffic load increases. This is because with a large value of A, all
the cells tend to be in a congestion state, so that the probability of having any sharing channel
available for borrowing is greatly reduced.
Fig. 5 shows the blocking probabilities of the FCA, CBWL with channel rearrangement [8] and NCCS schemes. In the CBWL scheme, each base station has 24 channels with C_0 of them reserved for exclusive use, and 30% of call arrivals can use borrowed channels. In the NCCS scheme, each base station also has 24 channels, and 25% of calls (those in the inner-cell region) can use borrowed channels. It is observed that
the NCCS scheme has a lower blocking probability than the CBWL scheme, due to the fact that
the CBWL is limited to the directional lending, resulting in a channel sharing pool with much
less borrowable channels as compared with that of the NCCS scheme.
The call blocking probability of the NCCS scheme depends on the traffic load dynamics,
which can be difficult to generalize. In the following, we consider two extreme cases which lead to
the lower and upper bounds on the call blocking probability for the NCCS scheme with channel
sharing among m (= 2, 3, 4, 5, 6, 7) neighbor cells. First, consider the situation where one cell is
a traffic "hot spot" and its neighbor cells have many idle channels, which we refer to as a local
burst situation. The heavily traffic loaded cell can borrow most or all of the sharing channels
from its neighbor cells, resulting in a lower bound of blocking probability for the cell. The other
situation is that all the cells are heavily loaded and no channel sharing is possible, which is
referred to as a global busy situation. If the channel resources, C, in each cell are properly divided
into the nominal channel group and sharing channel group, then the global busy situation results
in the upper bound of the call blocking probability of the NCCS scheme, which is the same as
the call blocking probability of the FCA scheme. Fig. 6 shows the lower and upper bounds of
the call blocking probability of the NCCS scheme with channel sharing among m multiple cells.
Each base station has 15 nominal and 5 sharing channels. It is observed that the lower bound
decreases significantly as m increases, due to an increased number of sharing channels available
in the sharing pool. However, when the traffic density increases, the performance improvement
of the NCCS over FCA (the upper bound) is significantly reduced. Even with all the sharing
channels from the m neighbor cells, it is still possible that the channel resources available to the
cell are not enough to provide service to all the incoming calls in the hot spot.
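Assuming the interpretation above, the two bounds can be approximated with the Erlang B formula: the global-busy upper bound treats the cell as an FCA cell with C = N + S channels, and the local-burst lower bound assumes, as a simplification made here rather than a statement from the paper, that the hot-spot cell can draw on m·S sharing channels from its idle neighbors. The sketch below reuses the erlang_b function given after equation (3).

def nccs_blocking_bounds(traffic_a, n_nominal=15, s_sharing=5, m_cells=4):
    """Illustrative upper/lower bounds on NCCS call blocking (see assumptions above)."""
    upper = erlang_b(traffic_a, n_nominal + s_sharing)            # global busy: plain FCA
    lower = erlang_b(traffic_a, n_nominal + m_cells * s_sharing)  # local burst: hot spot
    return lower, upper

for a in (10.0, 15.0, 20.0):
    print(a, nccs_blocking_bounds(a))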
Conclusions
In this paper, we have developed the neighbor cell channel sharing (NCCS) scheme for wireless
cellular networks. Both cochannel interference and adjacent channel interference issues regarding the
channel sharing have been discussed. It has been shown that the NCCS scheme achieves a lower
call blocking probability for any traffic load and traffic dynamics as compared with other channel
assignment schemes. The performance improvement is obtained at the expense of additional
intra-cell handoffs. With more neighbor cells in channel sharing, the proposed scheme offers
better traffic handling capacity. The advantages of the proposed scheme include i) that no
channel locking is necessary, and ii) larger channel sharing pools are available due to less strict
constraint on directional borrowing, which lead to both simpler channel resource management
and lower call blocking probability.
Acknowledgements
The authors wish to thank ITRC (the Information Technology Research Center - Center of
Excellence supported by Technology Ontario) for the research grant which supported this work.
--R
"Handover and channel assignment in mobile cellular networks"
"Performance analysis of cellular mobile communication systems with dynamic channel assignment"
"Distributed dynamic channel allocation algorithms with adjacent channel constraints"
"Performance issues and algorithms for dynamic channel assignment"
"A hybrid channel assignment scheme in large-scale cellular-structured mobile communication systems"
"A cellular mobile telephone system with load sharing - an enhancement of directed retry"
"Load sharing sector cells in cellular systems"
"CBWL: a new channel assignment and sharing method for cellular communications systems"
Mobile cellular telecommunication systems.
"Channel assignment and sharing for wireless cellular networks"
"A new frequency channel assignment algorithm in high capacity mobile communication systems"
"Comparisons of channel assignment strategies in cellular mobile telephone systems"
Queueing Systems.
--TR
Mobile Cellular Telecommunications Systems
--CTR
Jean Q.-J. Chak , Weihua Zhuang, Capacity Analysis for Connection Admission Control in IndoorMultimedia CDMA Wireless Communications, Wireless Personal Communications: An International Journal, v.12 n.3, p.269-282, March 2000 | cellular mobile communications;neighbor cell channel sharing;channel assignment |
609461 | Adaptive Modulation over Nakagami Fading Channels. | We first study the capacity of Nakagami multipath fading (NMF) channels with an average power constraint for three power and rate adaptation policies. We obtain closed-form solutions for NMF channel capacity for each power and rate adaptation strategy. Results show that rate adaptation is the key to increasing link spectral efficiency. We then analyze the performance of practical constant-power variable-rate M-QAM schemes over NMF channels. We obtain closed-form expressions for the outage probability, spectral efficiency and average bit-error-rate (BER) assuming perfect channel estimation and negligible time delay between channel estimation and signal set adaptation. We also analyze the impact of time delay on the BER of adaptive M-QAM. | Introduction
The radio spectrum available for wireless services is extremely scarce, while demand for
these services is growing at a rapid pace [1]. Hence spectral efficiency is of primary concern
in the design of future wireless data communications systems. In this paper we first
investigate the theoretical spectral efficiency limits of adaptive transmission in Nakagami
multipath fading (NMF) channels [2]. We then propose and study adaptive multi-level
quadrature amplitude modulation (M-QAM) schemes which improve link spectral efficiency
(R/W [Bits/Sec/Hz]), defined as the average transmitted data rate per unit bandwidth
for a specified average transmit power and bit-error-rate (BER). We also evaluate
the performance of these schemes relative to the theoretical spectral efficiency limit.
Mobile radio links can exhibit severe multipath fading which leads to serious degradation
in the link carrier-to-noise ratio (CNR) and consequently a higher BER. Fading
compensation such as an increased link budget margin or interleaving with channel coding
are typically required to improve link performance. However, these techniques are
designed relative to the worst-case channel conditions, resulting in poor utilization of the
full channel capacity a good percentage of the time (i.e., under negligible or shallow fading
conditions). Adapting certain parameters of the transmitted signal to the channel
fading leads to better utilization of the channel capacity. The basic concept of adaptive
transmission is real-time balancing of the link budget through adaptive variation of the
transmitted power level, symbol transmission rate, constellation size, coding rate/scheme,
or any combination of these parameters [3], [4], [5], [6], [7]. Thus, without wasting power or
sacrificing BER, these schemes provide a higher average link spectral efficiency by taking
advantage of the time-varying nature of wireless channels: transmitting at high speeds under
favorable channel conditions and responding to channel degradation through a smooth
reduction of their data throughput. Good performance of these schemes requires accurate
channel estimation at the receiver and a reliable feedback path between that estimator
and the transmitter. Furthermore since outage probability of such schemes can be quite
high, especially for channels with low average CNR, buffering of the input data may be
required, and adaptive systems are therefore best suited to applications without stringent
delay constraints.
The Shannon capacity of a channel defines its maximum possible rate of data transmission
for an arbitrarily small BER, without any delay or complexity constraints. Therefore
the Shannon capacity represents an optimistic bound for practical communication schemes,
and also serves as a bench-mark against which to compare the spectral efficiency of adaptive
transmission schemes [8]. In [9] the capacity of a single-user flat-fading channel with
perfect channel measurement information at the transmitter and receiver was derived for
various adaptive transmission policies. In this paper we apply the general theory developed
in [9] to obtain closed-form expressions for the capacity of NMF channels under different
adaptive transmission schemes. In particular, we consider three adaptive policies: optimal
simultaneous power and rate adaptation, constant power with optimal rate adaptation,
and channel inversion with fixed rate. We then present numerical results showing that
rate adaptation is the key to achieving high link spectral efficiency. Rate adaptation can
be achieved through a variation of the symbol time duration [3] or constellation size [5].
The former method requires complicated hardware and results in a variable-bandwidth
system, whereas the latter technique is better suited for hardware implementation, since
it results in a variable-throughput system with a fixed bandwidth. Based on these advantages
we analyze the performance of constant-power variable-rate M-QAM schemes for
spectrally efficient data transmission over NMF channels. Similar analysis has been presented
in [6] for a variable-power variable-rate M-QAM in Rayleigh fading and log-normal
Fig. 1. Adaptive communication system model.
shadowing, and in [10] for constant-power variable-rate M-QAM in Rayleigh fading. We
extend the results of [6], [10] to constant-power variable-rate M-QAM by analyzing the
resulting spectral efficiency and BER for the more general NMF distribution. We also
analyze the impact of time delay on the performance of adaptive M-QAM.
The remainder of this paper is organized as follows. In Section II we outline the channel
and communication system models. In Section III we derive the capacity of NMF channels
for the optimal adaptive policy, constant power policy, and channel inversion policy, and
we present some numerical examples comparing (i) the NMF channel capacity with the
capacity of an additive white Gaussian noise (AWGN) channel, and (ii) the NMF channel
capacity for the various adaptive policies. In Section IV we propose and evaluate the
performance of an adaptive constant-power variable-rate M-QAM system assuming perfect
channel estimation and negligible time delay. The BER degradation due to time delay is
analyzed in Section V. A summary of our results is presented in Section VI.
II. System and Channel Models
A. Adaptive Communication System Model
A block diagram of the adaptive communication system is shown in Fig. 1. A pilot
tone continually sends a known "channel sounding" sequence so that the channel-induced
envelope fluctuation α and phase shift φ can be extracted at the channel estimation stage. Based on this channel gain estimate α̂, a decision device selects the rate and power to be transmitted, configures the demodulator accordingly, and informs the transmitter about that decision via the feedback path. The constellation size assignment for the proposed
constant-power variable-rate M-QAM scheme will be discussed in more detail in Section
IV-A. The transmission system keeps its configuration unchanged (i.e., no re-adaptation) for a duration τ_t [s]. Meanwhile the phase estimate φ̂ is used at the receiver for full compensation of the phase variation (i.e., ideal coherent phase detection), whereas the channel gain estimate α̂ is used on a continuous basis by the automatic gain controller (AGC)/demodulator for symbol-by-symbol maximum-likelihood detection.
For satisfactory operation the modulator and demodulator must be configured at any instant for the same constellation size. Efficient error control schemes are therefore required to ensure an error-free feedback path. However, such schemes inevitably introduce a certain time delay τ_fb [s], which may include decoding/ARQ delay and propagation time via the feedback path. Hence, even if perfect channel estimates are available at the receiver, the system will not be able to adapt to the actual channel fading but rather to, at best, a τ_fb-delayed version of it. In practice, the choice of the power and/or constellation is based on a channel estimate at time t, but the data are sent over the channel at time t + τ, where τ_fb ≤ τ ≤ τ_t + τ_fb and 1/τ_t is the rate at which we change the constellation size and power. The goal is to operate with the smallest possible τ_fb to minimize the impact of feedback delay, and with the largest possible τ_t to minimize the rate of system reconfiguration. This
issue will be further discussed in Section V.
B. Channel Model and Fading Statistics
We consider a slowly-varying flat-fading channel changing at a rate much slower than
the symbol data rate, so the channel remains roughly constant over hundreds of symbols.
The multipath fading environment can be characterized by different statistical models.
For NMF channels the probability distribution function (PDF) of the channel gain α is given by [2, (11)]

p_α(α) = (2 m^m α^{2m−1}) / (Ω^m Γ(m)) exp(−m α² / Ω),  α ≥ 0,   (1)

where Ω = E[α²] is the average received power, m is the Nakagami fading parameter (m ≥ 1/2), and Γ(·) is the gamma function [11]. The received CNR, γ, is then gamma distributed according to the PDF, p_γ(γ), given by

p_γ(γ) = (m^m γ^{m−1}) / (γ̄^m Γ(m)) exp(−m γ / γ̄),  γ ≥ 0,   (2)

where γ̄ is the average received CNR. The phase φ of the Nakagami fading is uniformly distributed over [0, 2π].
The Nakagami fading represents a wide range of multipath channels via the m fading parameter [2]. For instance, the Nakagami-m distribution includes the one-sided Gaussian distribution (m = 1/2, which corresponds to worst-case fading) and the Rayleigh distribution (m = 1) as special cases. In addition, when m > 1, a one-to-one mapping between the
Rician factor and the Nakagami fading parameter allows the Nakagami-m distribution to
closely approximate the Rice distribution [2]. Finally, and perhaps most importantly, the
Nakagami-m distribution often gives the best fit to urban [12] and indoor [13] multipath
propagation.
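Because the received CNR in (2) is gamma distributed with shape parameter m and mean γ̄, NMF channel realisations are easy to generate numerically. The snippet below is an illustration added here (not part of the paper); it draws CNR samples and checks the mean and the amount of fading (Var[γ]/γ̄² = 1/m) against the theory.

import numpy as np

def nakagami_cnr_samples(m, gbar, size, rng=None):
    """Draw received-CNR samples following the gamma PDF of equation (2)."""
    rng = np.random.default_rng(rng)
    # A gamma variate with shape m and scale gbar/m has mean gbar and variance gbar^2/m.
    return rng.gamma(shape=m, scale=gbar / m, size=size)

g = nakagami_cnr_samples(m=2, gbar=10.0, size=200_000, rng=1)
print(g.mean())                 # close to 10
print(g.var() / g.mean() ** 2)  # amount of fading, close to 1/m = 0.5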
III. Capacity of Nakagami Fading Channels
A. Optimal Adaptation
Given an average transmit power constraint, the channel capacity of a fading channel with received CNR distribution p_γ(γ) and optimal power and rate adaptation (⟨C⟩_opra [Bits/Sec]) is given in [9] as

⟨C⟩_opra = W ∫_{γ_0}^{∞} log_2(γ / γ_0) p_γ(γ) dγ,   (3)

where W [Hz] is the channel bandwidth and γ_0 is the optimal cutoff CNR level below which data transmission is suspended. This optimal cutoff must satisfy the equation

∫_{γ_0}^{∞} (1/γ_0 − 1/γ) p_γ(γ) dγ = 1.   (4)

To achieve the capacity (3), the channel fade level must be tracked at both the receiver and transmitter, and the transmitter has to adapt its power and rate accordingly, allocating high power levels and rates for good channel conditions (γ large), and lower power levels and rates for unfavorable channel conditions (γ small). Since no data is sent when γ < γ_0, the optimal policy suffers a probability of outage P_out, equal to the probability of no transmission, given by

P_out = P[γ < γ_0] = ∫_{0}^{γ_0} p_γ(γ) dγ.   (5)

Substituting (2) in (4) we find that γ_0 must satisfy
Γ(m, m γ_0/γ̄) / γ_0 − (m/γ̄) Γ(m−1, m γ_0/γ̄) = Γ(m),   (6)

where Γ(·,·) is the complementary incomplete gamma function [11]. For the special case of the Rayleigh fading channel (m = 1), (6) reduces to

e^{−γ_0/γ̄} / γ_0 − E_1(γ_0/γ̄) / γ̄ = 1,   (7)

where E_1(·) is the exponential integral of first order [11]. Let

f(x) = ∫_{x}^{∞} (1/x − 1/γ) p_γ(γ) dγ − 1.   (8)

Note that df(x)/dx = −(1/x²) ∫_{x}^{∞} p_γ(γ) dγ < 0. Moreover, from (8), lim_{x→0+} f(x) = +∞ and lim_{x→+∞} f(x) = −1, so there is a unique positive x_0 for which f(x_0) = 0 or, equivalently, there is a unique γ_0 which satisfies (6). An asymptotic expansion of (6) shows that as γ̄ → ∞, γ_0 → 1. Our numerical results show that γ_0 increases as γ̄ increases, so γ_0 always lies in the interval [0,1].
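Since f(x) in (8) is strictly decreasing with a sign change, the optimal cutoff γ_0 can be obtained with any bracketing root solver. The sketch below (an illustration assuming SciPy is available; it evaluates (4) by direct numerical integration rather than through the incomplete-gamma form (6)) computes γ_0 for a few average CNR values.

import numpy as np
from scipy import integrate, optimize, stats

def optimal_cutoff(m, gbar):
    """Solve equation (4) for the optimal cutoff CNR gamma_0."""
    pdf = stats.gamma(a=m, scale=gbar / m).pdf   # p_gamma of equation (2)

    def f(x):  # f(x) of equation (8)
        val, _ = integrate.quad(lambda g: (1.0 / x - 1.0 / g) * pdf(g), x, np.inf)
        return val - 1.0

    # f is decreasing, f(0+) = +inf and f(inf) = -1, so bracket the root widely.
    return optimize.brentq(f, 1e-6, 10.0 * gbar)

for gbar_db in (5, 10, 20):
    gbar = 10 ** (gbar_db / 10)
    print(gbar_db, optimal_cutoff(m=2, gbar=gbar))  # cutoff stays in (0, 1]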
Substituting (2) in (3), and defining the integral J_n(μ) accordingly, we can rewrite the channel capacity ⟨C⟩_opra in terms of J_m(m γ_0/γ̄). The evaluation of J_n(μ) for n a positive integer is derived in [14, Appendix A]. Using that result we obtain the NMF channel capacity per unit bandwidth ⟨C⟩_opra/W [Bits/Sec/Hz] under the optimal power and rate adaptation policy in closed form, which can also be written compactly in terms of the Poisson distribution. For the special case of the Rayleigh fading channel, using (7) in (12) for m = 1, the optimal capacity per unit bandwidth reduces to the simple expression

⟨C⟩_opra / W = log_2(e) E_1(γ_0/γ̄).

Using (2) in the probability of outage equation (5) yields

P_out = 1 − Γ(m, m γ_0/γ̄) / Γ(m).
B. Constant Transmit Power
With optimal rate adaptation to channel fading with a constant transmit power, the
channel capacity ⟨C⟩_ora [Bits/Sec] becomes [9]

⟨C⟩_ora = W ∫_{0}^{∞} log_2(1 + γ) p_γ(γ) dγ.   (16)

⟨C⟩_ora was previously introduced by Lee [15], [16] as the average channel capacity of a
flat-fading channel, since it is obtained by averaging the capacity of an AWGN channel
over the distribution of the received CNR. In fact, (16) represents the capacity of the
fading channel without transmitter feedback (i.e. with the channel fade level known at
the receiver only) [17], [18], [19].
Substituting (2) into (16) and defining the integral I_n(μ) as

I_n(μ) = ∫_{0}^{+∞} t^{n−1} ln(1 + t) e^{−μ t} dt,   (17)

the channel capacity ⟨C⟩_ora of a NMF channel can be written in terms of I_m(m/γ̄). The evaluation of I_n(μ) for n a positive integer is derived in [14, Appendix B]. Using that result, we can rewrite ⟨C⟩_ora/W [Bits/Sec/Hz] in the closed form (20). One may also express (20) in terms of the Poisson distribution, as in [16].
Note that Yao and Sheikh [20] provided a closed-form expression for the capacity of NMF channels in terms of the complementary incomplete gamma function. However, their derivation is different from ours and their resulting expression [20, (7)] contains mth-order derivatives. For the special case of the Rayleigh fading channel (m = 1), (20) reduces to

⟨C⟩_ora / W = log_2(e) e^{1/γ̄} E_1(1/γ̄).   (22)
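The expressions above can be cross-checked by evaluating (16) numerically. The following illustrative sketch (assuming SciPy) computes ⟨C⟩_ora/W for a NMF channel, compares the m = 1 case with the closed form log_2(e) e^{1/γ̄} E_1(1/γ̄), and with the AWGN capacity at the same average CNR.

import numpy as np
from scipy import integrate, special, stats

def c_ora_per_hz(m, gbar):
    """<C>_ora / W of equation (16) by numerical integration."""
    pdf = stats.gamma(a=m, scale=gbar / m).pdf
    val, _ = integrate.quad(lambda g: np.log2(1.0 + g) * pdf(g), 0.0, np.inf)
    return val

gbar = 10.0                      # 10 dB average CNR
print(c_ora_per_hz(1, gbar))     # Rayleigh case, numerical
print(np.log2(np.e) * np.exp(1.0 / gbar) * special.exp1(1.0 / gbar))  # closed form (22)
print(np.log2(1.0 + gbar))       # AWGN capacity with the same average CNR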
C. Channel Inversion with Fixed Rate
The channel capacity when the transmitter adapts its power to maintain a constant CNR
at the receiver (i.e., inverts the channel fading) was also investigated in [9]. This technique
uses fixed-rate modulation and a fixed code design, since the channel after channel
inversion appears as a time-invariant AWGN channel. As a result, channel inversion with
fixed rate is the least complex technique to implement, assuming good channel estimates
are available at the transmitter and receiver. The channel capacity with this technique
(⟨C⟩_cifr [Bits/Sec]) is derived from the capacity of an AWGN channel and is given in [9] as

⟨C⟩_cifr = W log_2 [ 1 + 1 / ∫_{0}^{∞} (p_γ(γ)/γ) dγ ].   (23)
Channel inversion with fixed rate suffers a large capacity penalty relative to the other
techniques, since a large amount of the transmitted power is required to compensate for
the deep channel fades. Another approach is to use a modified inversion policy which
inverts the channel fading only above a fixed cutoff fade depth γ_0. The capacity with this truncated channel inversion and fixed rate policy (⟨C⟩_tifr [Bits/Sec]) was derived in [9] to be

⟨C⟩_tifr = W log_2 [ 1 + 1 / ∫_{γ_0}^{+∞} (p_γ(γ)/γ) dγ ] (1 − P_out),   (24)

where P_out is given by (5). The cutoff level γ_0 can be selected to achieve a specified outage
probability or, alternatively (as shown in Figures 2, 3, and 4), to maximize (24).
By substituting the CNR distribution (2) in (23) we find that the capacity per unit bandwidth of a NMF channel with total channel inversion, ⟨C⟩_cifr/W, is given for all m > 1 by

⟨C⟩_cifr / W = log_2 [ 1 + γ̄ (m − 1)/m ].   (25)

Thus the capacity of a Rayleigh fading channel (m = 1) is zero in this case. Note that the capacity of this policy for a NMF channel is the same as the capacity of an AWGN channel with equivalent CNR = γ̄ (m − 1)/m.
With truncated channel inversion the capacity per unit bandwidth ⟨C⟩_tifr/W [Bits/Sec/Hz] can be expressed in terms of γ̄ and γ_0 by substituting (2) into (24), which yields

⟨C⟩_tifr / W = log_2 [ 1 + γ̄ Γ(m) / ( m Γ(m−1, m γ_0/γ̄) ) ] · Γ(m, m γ_0/γ̄) / Γ(m).   (26)

For the special case of the Rayleigh fading channel (m = 1), the capacity per unit bandwidth with truncated channel inversion reduces to

⟨C⟩_tifr / W = log_2 [ 1 + γ̄ / E_1(γ_0/γ̄) ] e^{−γ_0/γ̄}.   (27)
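A similar numerical cross-check of the channel-inversion policies is sketched below (illustrative code assuming SciPy); it evaluates (25) directly and maximises the truncated-inversion capacity (24) over the cutoff γ_0 by numerical integration.

import numpy as np
from scipy import integrate, optimize, stats

def c_tifr_per_hz(m, gbar, g0):
    """<C>_tifr / W of equation (24) by numerical integration."""
    pdf = stats.gamma(a=m, scale=gbar / m).pdf
    inv_mean, _ = integrate.quad(lambda g: pdf(g) / g, g0, np.inf)
    p_out, _ = integrate.quad(pdf, 0.0, g0)
    return np.log2(1.0 + 1.0 / inv_mean) * (1.0 - p_out)

m, gbar = 2, 10.0
print(np.log2(1.0 + gbar * (m - 1) / m))          # total inversion, equation (25)
best = optimize.minimize_scalar(lambda g0: -c_tifr_per_hz(m, gbar, g0),
                                bounds=(1e-3, gbar), method="bounded")
print(-best.fun, best.x)                          # optimised truncated inversion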
D. Numerical Results
Figures 2, 3, and 4 show the capacity per unit bandwidth as a function of γ̄ for a NMF channel under the three different adaptive policies for m = 1, m = 2, and m = 4, respectively. We see from these figures that the capacity of NMF channels is always smaller than the capacity of an AWGN channel for γ̄ ≥ 0 dB but converges
to it as the m parameter increases or, equivalently, as the amount of fading decreases.
We also see that optimal power and rate adaptation yields a small increase in capacity
over just optimal rate adaptation, and this small increase in capacity diminishes as the
average received CNR and/or fading parameter m increase. Note finally that fixed rate
transmission with channel inversion suffers the largest capacity penalty. However, this
penalty diminishes as the amount of fading decreases.
Fig. 2. Capacity per unit bandwidth for a Rayleigh fading channel (m=1) under different adaptation policies.
IV. Adaptive M-QAM Modulation
A. Proposed Adaptive Schemes
The BER of coherent M-QAM with two-dimensional Gray coding over an additive white
Gaussian noise (AWGN) channel assuming perfect clock and carrier recovery can be well
approximated by [6]

BER(M, γ) ≈ 0.2 exp[ −1.5 γ / (M − 1) ].   (28)
Exact expressions for the BER of "square" M-QAM (when the number of bits per symbol
n is even) are known [21, Chapter 5], and are plotted by the solid lines in Fig. 5. On the
other hand, tight upper-bounds on the BER of "non-square" M-QAM (when the number of
bits per symbol n is odd) are also available [22, p. 283], and are plotted by the cross/solid
lines in Fig. 5. For comparison, the dashed lines in this figure show the BER approximation (28) for different values of M. Note that the approximate BER expression upper bounds the exact BER for M ≥ 4 and for BER ≤ 10^{−2}, which is the BER range of interest. We
will use this approximation when needed in our analysis since it is "invertible" in the sense
that it provides a simple closed-form expression for the link spectral efficiency of M-QAM
as a function of the CNR and the BER. In addition, (28) and its inverse are very simple
functions which lead, as shown below, to closed-form analytical expressions and insights
Fig. 3. Capacity per unit bandwidth for a Nakagami fading channel with m=2 under different adaptation policies.
that are unattainable with more complicated BER expressions.
Assuming ideal Nyquist pulses and given a fixed CNR (γ) and BER (BER_0), the spectral efficiency of continuous-rate M-QAM can be approximated by inverting (28), giving

R/W = log_2(M) = log_2(1 + K γ),   (29)

where K = −1.5 / ln(5 BER_0). The adaptive continuous rate (ACR) M-QAM scheme responds to the instantaneous channel CNR fluctuation by varying the number of bits per symbol according to (29). In the context of this paper, continuous-rate means that the number of bits per symbol is not restricted to integer values. While continuous-rate M-QAM is possible [23], it is more practical to study the performance of adaptive discrete rate (ADR) M-QAM, where the constellation size M_n is restricted to 2^n for n a positive integer. In this case the scheme responds to the instantaneous channel CNR fluctuation by varying its constellation size as follows. The CNR range is divided into N fading regions, and the constellation size M_n = 2^n is assigned to the nth region (n = 1, 2, ..., N). When the received
CNR is estimated to be in the nth region, the constellation size M n is transmitted.
Suppose we set a target BER, BER 0 . The region boundaries (or switching thresholds)
are then set to the CNR required to achieve the target BER 0 using M n -QAM over
Fig. 4. Capacity per unit bandwidth for a Nakagami fading channel with m=4 under different adaptation policies.
Fig. 5. BER for M-QAM versus CNR (exact results, upper bounds, and the approximation).
Fig. 6. Number of bits per symbol versus CNR: assignment of constellation size relative to received CNR for a target BER (continuous-rate and discrete-rate adaptive M-QAM).
an AWGN channel. Specifically,

γ_1 = [erfc^{−1}(2 BER_0)]^2,
γ_n = (M_n − 1)/K,  n = 2, ..., N,
γ_{N+1} = +∞,   (30)

where erfc^{−1}(·) denotes the inverse complementary error function. When the switching thresholds are chosen according to (30), the system will operate with a BER below the target BER, as will be confirmed in Section IV-D. Note in particular that all the γ_n with n ≥ 2 are chosen according to (28). Since (28) is an upper-bound of the BER only for M ≥ 4, γ_1 is chosen according to the exact BER performance of 2-QAM (BPSK). The
thick line in Fig. 6 shows the number of bits per symbol as a function of the received CNR
for ADR M-QAM with 8-regions, along with the corresponding switching thresholds. For
comparison the thin line in this figure shows the bits per symbol of ACR M-QAM.
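To make the rate-adaptation rule concrete, the following Python sketch assumes the commonly used M-QAM BER approximation BER(M, γ) ≈ 0.2 exp(−1.5γ/(M−1)) as a stand-in for (28); since the exact expression and constants of (28)-(30) are not reproduced above, the thresholds and rates computed here are illustrative only.

```python
import numpy as np

# Illustrative stand-in for the BER approximation (28):
# BER(M, gamma) ~ 0.2 * exp(-1.5 * gamma / (M - 1)), intended for M >= 4.
def ber_approx(M, gamma):
    return 0.2 * np.exp(-1.5 * gamma / (M - 1))

def acr_bits_per_symbol(gamma, ber0):
    # Invert the approximation: continuous-rate spectral efficiency R/W = log2 M(gamma).
    return np.log2(1.0 + 1.5 * gamma / (-np.log(5.0 * ber0)))

def adr_switching_thresholds(ber0, n_regions):
    # Region boundary gamma_n = smallest CNR at which M_n-QAM meets the target BER.
    # (For n = 1, i.e. BPSK, the paper uses the exact BER; the approximation is kept
    #  here for simplicity, so gamma_1 is only indicative.)
    Ms = 2 ** np.arange(1, n_regions + 1)           # constellation sizes 2, 4, 8, ...
    return (Ms - 1) * (-np.log(5.0 * ber0)) / 1.5   # solve ber_approx(M, gamma) = ber0

def adr_constellation(gamma, thresholds):
    # Discrete-rate rule: use the largest M_n whose threshold lies below the current CNR;
    # below gamma_1 no data is sent (outage).
    n = np.searchsorted(thresholds, gamma, side="right")
    return 0 if n == 0 else 2 ** n

if __name__ == "__main__":
    thr = adr_switching_thresholds(ber0=1e-3, n_regions=8)
    for snr_db in (5, 10, 15, 20, 25):
        g = 10 ** (snr_db / 10)
        print(snr_db, "dB ->", adr_constellation(g, thr), "-QAM,",
              round(float(acr_bits_per_symbol(g, 1e-3)), 2), "bits/symbol (ACR)")
```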
B. Outage Probability
Since no data is sent when the received CNR falls below γ_1, the ADR M-QAM scheme
suffers an outage probability, P_out = Pr[γ < γ_1], as given in (31).
Fig. 7 shows the outage probability for various values of the Nakagami fading parameter m
and for target BERs of 10^-3 and 10^-6.
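A minimal sketch of the outage computation, assuming (as is standard for Nakagami-m fading and consistent with a Gamma-distributed CNR) that γ has shape m and mean γ̄:

```python
from scipy.stats import gamma as gamma_dist

def outage_probability(gamma1, gamma_bar, m):
    # Received CNR under Nakagami-m fading modeled as Gamma(shape=m, scale=gamma_bar/m);
    # the outage probability is the CDF evaluated at the lowest switching threshold.
    return gamma_dist.cdf(gamma1, a=m, scale=gamma_bar / m)
```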
(Figure: outage probability P_out vs. average received carrier-to-noise ratio γ̄ [dB] for target BERs of 10^-3 and 10^-6.)
Fig. 7. Outage probability in Nakagami fading.
C. Achievable Spectral Efficiency
Integrating (29) over the CNR distribution (2) and following the same steps of Section III-B which led to
(20), we find the average link spectral efficiency ⟨R⟩_acr/W of ACR M-QAM over
NMF channels, as given in (32).
(Figure: average spectral efficiency vs. average received carrier-to-noise ratio γ̄ [dB] in Rayleigh fading (m=1) for a target BER of 10^-3; curves for the capacity with optimal rate and constant power, continuous-rate adaptive M-QAM, discrete-rate adaptive M-QAM with several numbers of fading regions, and non-adaptive 2-QAM (BPSK).)
Fig. 8. Achievable spectral efficiency for a target BER of 10^-3 and m=1.
The average link spectral efficiency ⟨R⟩_adr/W of ADR M-QAM over NMF
channels is just the sum of the data rates log2(M_n) associated with the individual regions,
weighted by the probability a_n that the CNR γ falls in
the nth region, as given in (33), where the a_n's can be expressed as in (34).
Figs. 8, 9, and 10 show the average link spectral efficiency of ACR M-QAM (32) and
ADR M-QAM (33) for a target BER of 10^-3 and m = 1, 2, and 4, respectively.
The Shannon capacity using constant power and variable rate (20) is also shown
for comparison, along with the spectral efficiency of nonadaptive 2-QAM (BPSK). This
latter efficiency is found by determining the value of the average received CNR for which
the average BER of nonadaptive BPSK over a Nakagami fading channel, as given by (38),
equals the target BER. Note that the achievable spectral efficiency of ACR M-QAM comes
within 5 dB of the Shannon capacity limit. ADR M-QAM suffers a minimum additional
1.2 dB penalty, whereas nonadaptive BPSK suffers a large spectral efficiency penalty.
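The weighted-sum form of (33) is straightforward to evaluate numerically; the sketch below computes the region probabilities a_n as differences of the Gamma CDF of the CNR and reuses the illustrative thresholds from the earlier sketch (again an assumption, not the paper's exact expressions).

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

def adr_spectral_efficiency(thresholds, gamma_bar, m):
    # a_n = Pr[gamma_n <= gamma < gamma_{n+1}], with gamma ~ Gamma(m, gamma_bar/m);
    # the last region extends to infinity.
    cdf = lambda g: gamma_dist.cdf(g, a=m, scale=gamma_bar / m)
    edges = np.append(np.asarray(thresholds, dtype=float), np.inf)
    a = np.array([cdf(edges[i + 1]) - cdf(edges[i]) for i in range(len(thresholds))])
    bits = np.arange(1, len(thresholds) + 1)   # log2(M_n) = n for M_n = 2^n
    return float(np.sum(bits * a))             # weighted sum of data rates, as in (33)
```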
(Figure: average spectral efficiency vs. average received carrier-to-noise ratio γ̄ [dB] in Nakagami fading (m=2) for a target BER of 10^-3; same curves as in Fig. 8.)
Fig. 9. Achievable spectral efficiency for a target BER of 10^-3 and m=2.
(Figure: average spectral efficiency vs. average received carrier-to-noise ratio γ̄ [dB] in Nakagami fading (m=4) for a target BER of 10^-3; same curves as in Fig. 8.)
Fig. 10. Achievable spectral efficiency for a target BER of 10^-3 and m=4.
D. Average Bit Error Rate
ACR M-QAM always operates at the target BER. However, since the choice of M_n in
ADR M-QAM is done in a conservative fashion, this discrete technique operates at an
average BER, ⟨BER⟩_adr, smaller than the target BER. This BER can be computed
exactly as the ratio of the average number of bits in error to the total average number
of transmitted bits, as given in (35), where BER_n is defined in (36).
Using (2) and the approximation (28) in (36), BER_n can be expressed in closed form as (37).
BER_n can also be computed exactly by using the exact expressions for BER(M_n, γ)
as given in [21, Chapter 5] and [10].
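As a rough numerical cross-check of the ratio definition in (35), the following Monte Carlo sketch estimates the ADR average BER, again using the illustrative BER approximation and thresholds assumed in the earlier sketches rather than the paper's exact formulas.

```python
import numpy as np

def adr_average_ber_mc(thresholds, gamma_bar, m, n_samples=200_000, seed=0):
    # Monte Carlo estimate of <BER>_adr: average bits in error / average bits sent,
    # with gamma ~ Gamma(m, gamma_bar/m) and the illustrative approximation
    # BER(M, gamma) ~ 0.2 * exp(-1.5 * gamma / (M - 1)).
    rng = np.random.default_rng(seed)
    g = rng.gamma(shape=m, scale=gamma_bar / m, size=n_samples)
    n = np.searchsorted(thresholds, g, side="right")   # region index (0 => outage)
    active = n > 0
    bits = n[active].astype(float)                     # log2(M_n) = n
    M = 2.0 ** bits
    ber = 0.2 * np.exp(-1.5 * g[active] / (M - 1))
    return float(np.sum(bits * ber) / np.sum(bits))
```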
Figs. 11, 12, and 13 show the average BER for ADR M-QAM for a target BER of 10^-3
and for m = 1, 2, and 4, respectively. The BER calculations based on the
approximation (37) are plotted in solid lines, whereas the exact average BERs are plotted
by the star/solid lines. The average BER of nonadaptive BPSK over a Nakagami fading
channel is given by (38), where 2F1(.,.;.;.) denotes the Gauss hypergeometric function [11]. We plot (38) in Figs.
11, 12, and 13 in dashed lines for comparison with (35).
In these figures we observe similar trends in the average BER for the various m parameters.
For instance, we see that the average BER of ADR M-QAM is always below the 10^-3 target
BER. Recall that the approximation (28) lower bounds the exact BER for M=2 and that
ADR M-QAM often uses the 2-QAM constellation (BPSK) at low average CNRs. This
explains why the average BER based on the approximation (37) lower bounds the exact
average BER at low average CNRs. Conversely, because the approximation (28)
upper bounds the exact BER for M > 2 and because ADR M-QAM often uses the high
constellation sizes at high average CNRs, the closed-form approximate average BER for
ADR M-QAM tightly upper-bounds the exact average BER at high average CNRs.
(Figure: average BER vs. average received carrier-to-noise ratio γ̄ [dB] in Rayleigh fading (m=1) for a target BER of 10^-3; curves for ACR M-QAM, ADR M-QAM (approximate and exact), and non-adaptive 2-QAM, for several numbers of fading regions.)
Fig. 11. Average BER for a target BER of 10^-3 and m=1.
Since ADR M-QAM uses the largest available constellation most often when the average CNR is large, the
average BER prediction as γ̄ increases becomes dominated by the BER performance of
that constellation.
V. Impact of Time Delay
Recall from Section II-A that the choice of the constellation size is based on a channel
estimate at time t, whereas the data are sent over the channel at a later time t + τ, with τ ≥ τ_fb.
If a delay of τ_fb degrades the BER significantly, then this adaptive technique
will not work, since τ_fb is an inherent and unavoidable parameter of the system. However,
if a delay of τ_fb has a small impact on the BER, then we should choose the total delay τ as large
as possible so that we meet the BER requirement while minimizing the rate of system
reconfiguration. In this section we analyze the impact of time delay on the performance
of adaptive M-QAM over NMF channels, assuming perfect channel estimates.
(Figure: average BER vs. average received carrier-to-noise ratio γ̄ [dB] in Nakagami fading (m=2) for a target BER of 10^-3; same curves as in Fig. 11.)
Fig. 12. Average BER for a target BER of 10^-3 and m=2.
A. Fading Correlation
Investigating the impact of time delay requires the second-order statistics of the channel
variation, which are known for Nakagami fading. Let α and α_τ denote the channel gains
at times t and t + τ, respectively. For a slowly-varying channel we can assume that the
average received power remains constant over the time delay τ.
Under these conditions the joint PDF of these two correlated Nakagami-m
distributed channel gains is given by [2, (126)] as (39),
where I_{m-1}(.) is the (m-1)th-order modified Bessel function of the first kind [11], and
ρ is the correlation factor between α and α_τ. Since Nakagami fading assumes isotropic
scattering of the multipath components, ρ can be expressed in terms of the time delay τ, the
mobile speed v [m/s], and the wavelength of the carrier frequency λ_c [m] through
the zero-order Bessel function of the first kind J_0(.) [11] and the maximum Doppler frequency shift
f_D = v/λ_c [24, p. 31].
The PDF of α_τ conditioned on α, p_{α_τ|α}(α_τ|α), is given by (40).
(Figure: average BER vs. average received carrier-to-noise ratio γ̄ [dB] in Nakagami fading (m=4) for a target BER of 10^-3; same curves as in Fig. 11.)
Fig. 13. Average BER for a target BER of 10^-3 and m=4.
Inserting (1) and (39) in (40) and expressing the result in terms of the CNRs γ and γ_τ
yields (41).
B. Analysis
B.1 Adaptive Continuous Rate M-QAM
For all delays τ, let the communication system be configured according to γ (the CNR at
time t), such that M(γ) is given by (42).
The constellation size M(γ) is based on the value of γ at time t, but that constellation is
transmitted over the channel at time t + τ, when γ has changed to γ_τ. Since M does
not depend on γ_τ (the CNR at time t + τ), delay does not affect the link spectral efficiency
as calculated in Section IV-C. However, delay affects the instantaneous BER,
which becomes a function of the "mismatch" between γ_τ and γ, as given in (43).
Integrating (43) over the conditional PDF (41) yields the average BER conditioned on
γ, BER(γ), as (44). Inserting (41) and (43) in (44), BER(γ) can be written in closed form with the help of
the generalized Marcum Q-function of order m, Q_m(.,.) [25, p. 299, (11.63)], as (45).
Using the recurrence relation [25, p. 299, (11.64)] for the modified Bessel function,
the Marcum Q-function term appearing in (45) can be shown to equal 1 for all x. Therefore (45)
reduces to (47).
Although this formula was derived for integer m, it is also valid for all non-integer values
of m ≥ 1/2. Averaging (47) over the PDF (2) of γ yields the average BER, ⟨BER⟩_acr,
as (48).
Finally, using (47) in (48) and making the appropriate change of variables
yields the closed-form expression (49).
Since this analysis assumes continuous-rate adaptation and since M_n(γ) ≤ M(γ) for all
γ, (49) represents an upper bound on the average BER degradation for ADR M-QAM, as
will be confirmed in the following sections.
B.2 Adaptive Discrete Rate M-QAM
The constellation size M_n is chosen based on the value of γ according to the ADR M-QAM
scheme described in Section IV-A. However, the constellation is transmitted over the
channel when γ has changed to γ_τ. As in Section V-B.1, we can easily see that the link
spectral efficiency of ADR M-QAM is unaffected by time delay. However, delay affects
⟨BER⟩_adr, which can be computed as in (35) with BER_n replaced by a delayed version BER'_n.
Using again the generalized Marcum Q-function, it can be shown that BER'_n admits a
closed-form expression. Note that as ρ → 1 (i.e., as the delay goes to zero), this expression
reduces to BER_n (37), as
expected.
C. Numerical Results
Figs. 14 and 15 show ⟨BER⟩_acr and ⟨BER⟩_adr as functions of the normalized
time delay f_D τ for different values of the Nakagami m parameter, for target BERs of
10^-3 and 10^-6, respectively. It can be seen from Figs. 14 and 15 that a normalized time
delay up to about 10^-2 can be tolerated without a noticeable degradation in the average
BER. For example, for a 900 MHz carrier frequency and a target BER of 10^-3, a time
delay up to 3.33 ms can be tolerated for pedestrians with a speed of 1 m/s (3.6 km/hr),
and a time delay up to 0.133 ms can be tolerated for mobile vehicles with a speed of 25
m/s (90 km/hr). Comparing Figs. 14 and 15 we see that systems with the lower BER
requirement of 10^-6 are more sensitive to time delay, as they suffer a higher "rate of
increase" in BER. For example, in Rayleigh fading, systems with a 10^-3 BER requirement
suffer about one order of magnitude of degradation as the normalized delay f_D τ grows beyond the critical value, whereas
systems with a 10^-6 BER requirement suffer about four orders of magnitude of degradation
over the same range of f_D τ. However, in both cases these systems will be able to operate
satisfactorily if the normalized delay is below the critical value of 10^-2.
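The quoted delay tolerances follow directly from f_D = v/λ_c and the critical normalized delay of about 10^-2; a quick check:

```python
C = 3e8                      # speed of light [m/s]
fc = 900e6                   # carrier frequency [Hz]
lambda_c = C / fc            # carrier wavelength [m]

def max_tolerable_delay(v, critical_fd_tau=1e-2):
    f_d = v / lambda_c       # maximum Doppler frequency shift [Hz]
    return critical_fd_tau / f_d

print(max_tolerable_delay(1.0) * 1e3, "ms for a pedestrian at 1 m/s")   # ~3.33 ms
print(max_tolerable_delay(25.0) * 1e3, "ms for a vehicle at 25 m/s")    # ~0.133 ms
```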
(Figure: average BER vs. normalized time delay f_D τ; curves for adaptive continuous-rate and adaptive discrete-rate M-QAM.)
Fig. 14. Average BER vs. normalized time delay for a BER_0 of 10^-3, γ = 20 dB, and 5 fading regions.
VI. Conclusion
We have studied the capacity of NMF channels with an average power constraint for three
power and rate adaptation policies. We obtain closed-form solutions for NMF channel
capacity for each power and rate adaptation strategy. Our results show that optimal power
and rate adaptation yields a small increase in capacity over just optimal rate adaptation
with constant power, and this small increase in capacity diminishes as the average received
carrier-to-noise ratio, and/or the m parameter increases. Fixed rate transmission with
channel inversion suffers the largest capacity penalty. However, this penalty diminishes as
the amount of fading decreases. Based on these results we conclude that rate rather than
power adaptation is the key to increasing link spectral efficiency. We therefore proposed
and studied the performance of constant-power variable-rate M-QAM schemes over NMF
channels assuming perfect channel estimation and negligible time delay. We determined
their spectral efficiency performance and compared this to the theoretical maximum. Our
(Figure: average BER vs. normalized time delay f_D τ; curves for adaptive continuous-rate and adaptive discrete-rate M-QAM.)
Fig. 15. Average BER vs. normalized time delay for a BER_0 of 10^-6, γ = 20 dB, and 5 fading regions.
results show that for a target BER of 10^-3, the spectral efficiency of adaptive continuous-rate
M-QAM comes within 5 dB of the Shannon capacity limit and adaptive discrete-rate
M-QAM comes within 6.2 dB of this limit. We also analyzed the impact of time delay
on the BER of adaptive M-QAM. Results show that systems with low BER requirements
will be more sensitive to time delay but will still be able to operate satisfactorily if the
normalized time delay is below the critical value of 10^-2.
--R
"Wireless data communications,"
"The m-distribution- A general formula of intensity distribution of rapid fading,"
"Variable-rate transmission for Rayleigh fading channels,"
"Symbol rate and modulation level controlled adaptive modula- tion/TDMA/TDD for personal communication systems,"
"Variable rate QAM for mobile radio,"
"Variable-rate variable-power M-QAM for fading channels,"
"Adaptive modulation system with variable coding rate concatenated code for high quality multi-media communication systems,"
"Variable-rate coded M-QAM for fading channels,"
"Capacity of fading channels with channel side information,"
"Upper bound performance of adaptive modulation in a slow Rayleigh fading channel,"
Table of Integrals
"A statistical model for urban multipath propagation,"
"Indoor mobile radio channel at 946 MHz: measurements and modeling,"
"Capacity of Rayleigh fading channels under different adaptive transmission and diversity techniques."
"Estimate of channel capacity in Rayleigh fading environment,"
"Comment on "
"Channels with block interference,"
"Information theoretic considerations for cellular mobile radio,"
"A Gaussian channel with slow fading,"
"Evaluation of channel capacity in a generalized fading channel,"
Modern Quadrature Amplitude Modulation.
New York
"Efficient modulation for band-limited channels,"
Special Functions- An Introduction to the Classical Functions of Mathematical Physics
--TR
--CTR
Hong-Chuan Yang , Nesrine Belhaj , Mohamed-Slim Alouini, Performance analysis of joint adaptive modulation and diversity combining over fading channels, Proceeding of the 2006 international conference on Communications and mobile computing, July 03-06, 2006, Vancouver, British Columbia, Canada
Andreas Müller , Joachim Speidel, Adaptive modulation for MIMO spatial multiplexing systems with zero-forcing receivers in semi-correlated Rayleigh fading channels, Proceeding of the 2006 international conference on Communications and mobile computing, July 03-06, 2006, Vancouver, British Columbia, Canada
Qingwen Liu , Shengli Zhou , Georgios B. Giannakis, Cross-layer modeling of adaptive wireless links for QoS support in heterogeneous wired-wireless networks, Wireless Networks, v.12 n.4, p.427-437, July 2006
Dalei Wu , Song Ci, Cross-layer combination of hybrid ARQ and adaptive modulation and coding for QoS provisioning in wireless data networks, Proceedings of the 3rd international conference on Quality of service in heterogeneous wired/wireless networks, August 07-09, 2006, Waterloo, Ontario, Canada
Chengzhi Li , Hao Che , Sanqi Li , Dapeng Wu, A New Wireless Channel Fade Duration Model for Exploiting Multi-User Diversity Gain and Its Applications, Proceedings of the 2006 International Symposium on on World of Wireless, Mobile and Multimedia Networks, p.377-383, June 26-29, 2006
Vegard Hassel , Mohamed-Slim Alouini , Geir E. Øien , David Gesbert, Rate-optimal multiuser scheduling with reduced feedback load and analysis of delay effects, EURASIP Journal on Wireless Communications and Networking, v.2006 n.2, p.53-53, April 2006
Dalei Wu , Song Ci, Cross-layer design for combining adaptive modulation and coding with hybrid ARQ, Proceeding of the 2006 international conference on Communications and mobile computing, July 03-06, 2006, Vancouver, British Columbia, Canada | and Nakagami fading;adaptive modulation techniques;link spectral efficiency |
609902 | PAC learning with nasty noise. | We introduce a new model for learning in the presence of noise, which we call the Nasty Noise model. This model generalizes previously considered models of learning with noise. The learning process in this model, which is a variant of the PAC model, proceeds as follows: Suppose that the learning algorithm during its execution asks for m examples. The examples that the algorithm gets are generated by a nasty adversary that works according to the following steps. First, the adversary chooses m examples (independently) according to a fixed (but unknown to the learning algorithm) distribution D as in the PAC-model. Then the powerful adversary, upon seeing the specific m examples that were chosen (and using his knowledge of the target function, the distribution D and the learning algorithm), is allowed to remove a fraction of the examples at its choice, and replace these examples by the same number of arbitrary examples of its choice; the m modified examples are then given to the learning algorithm. The only restriction on the adversary is that the number of examples that the adversary is allowed to modify should be distributed according to a binomial distribution with parameters η (the noise rate) and m. On the negative side, we prove that no algorithm can achieve accuracy better than 2η in learning any non-trivial class of functions. We also give some lower bounds on the sample complexity required to achieve accuracy ε. On the positive side, we show that a polynomial (in the usual parameters, and in 1/(ε − 2η)) number of examples suffice for learning any class of finite VC-dimension with accuracy ε > 2η. This algorithm may not be efficient; however, we also show that a fairly wide family of concept classes can be efficiently learned in the presence of nasty noise. | Introduction
Valiant's PAC model of learning [22] is one of the most important models for learning from examples.
Although being an extremely elegant model, the PAC model has some drawbacks. In particular, it
assumes that the learning algorithm has access to a perfect source of random examples. Namely,
upon request, the learning algorithm can ask for random examples and in return gets pairs (x; c t (x))
where all the x's are points in the input space distributed identically and independently according
to some fixed probability distribution D, and c t (x) is the correct classification of x according to the
target function c t that the algorithm tries to learn.
Since Valiant's seminal work, there were several attempts to relax these assumptions, by introducing
models of noise. The first such noise model, called the Random Classification Noise model,
was introduced in [2] and was extensively studied, e.g., in [1, 6, 9, 12, 13, 16]. In this model the
adversary, before providing each example (x; c t (x)) to the learning algorithm tosses a biased coin;
whenever the coin shows "H", which happens with probability j, the classification of the example is
flipped and so the algorithm is provided with the wrongly classified example (x, 1 − c_t(x)). Another
(stronger) model, called the Malicious Noise model, was introduced in [23], revisited in [17], and
was further studied in [8, 10, 11, 20]. In this model the adversary, whenever the η-biased coin shows
"H", can replace the example (x, c_t(x)) by some arbitrary pair (y, b), where y is any point in the
input space and b is a boolean value. (Note that this in particular gives the adversary the power to
"distort" the distribution D.)
In this work, we present a new model which we call the Nasty (Sample) Noise model. In this
model, the adversary gets to see the whole sample of examples requested by the learning algorithm
before giving it to the algorithm and then modify E of the examples, at its choice, where E is a
random variable distributed by the binomial distribution with parameters j and m, where m is the
size of the sample. This distribution makes the number of examples modified be the same as if it were
determined by m independent tosses of an j-biased coin. However, we allow the adversary's choice
to be dependent on the sample drawn. The modification applied by the adversary can be arbitrary
(as in the Malicious Noise model). 1 Intuitively speaking, the new adversary is more powerful than
the previous ones - it can examine the whole sample and then remove from it the most "informative"
examples and replace them by less useful and even misleading examples (whereas in the Malicious
Noise Model for instance, the adversary also may insert to the sample misleading examples but does
not have the freedom to choose which examples to remove). The relationships between the various
models are shown in Table 1.
Random Noise-Location Adversarial Noise-Location
Label Noise Only Random Classification Noise Nasty Classification Noise
Point and Label Noise Malicious Noise Nasty Sample Noise
Table
1: Summary of models for PAC-learning from noisy data
We argue that the newly introduced model, not only generalizes the previous noise models,
including variants such as Decatur's CAM model [11] and CPCN model [12], but also, that in many
real-world situations, the assumptions previous models made about the noise seem unjustified. For
example, when training data is the result of some physical experiment, noise may tend to be stronger
in boundary areas rather than being uniformly distributed over all inputs. While special models were
We also consider a weaker variant of this model, called the Nasty Classification Noise model, where the adversary
may modify only the classification of the chosen points (as in the Random Classification Noise model).
devised to describe this situation in the exact-learning setting (for example, the incomplete boundary
query model of Blum et al., [5]), it may be regarded as a special case of Nasty Noise, where the
adversary chooses to provide unreliable answers on sample points that are near the boundary of the
target concept (or to remove such points from the sample). Another situation to which our model is
related is the setting of Agnostic Learning. In this model, a concept class is not given. Instead, the
learning algorithm needs to minimize the empirical error while using a hypothesis from a predefined
hypotheses class (see, for example, [18] for a definition of the model). Assuming the best hypothesis
classifies the input up to an j fraction, we may alternatively see the problem as that of learning
the hypotheses class under nasty noise of rate j. However, we note that the success criterion in the
agnostic learning literature is different from the one used in our PAC-based setting.
We show two types of results. Sections 3 and 4 show information theoretic results, and Sect. 5
shows algorithmic results. The first result, presented in Section 3, is a lower bound on the quality of
learning possible with a nasty adversary. This result shows that no learning algorithm can learn
any non-trivial concept class with accuracy better than 2η when the sample contains nasty noise of
rate η. We further show that learning a concept class of VC-dimension d with accuracy ε = 2η + Δ
requires Ω(d/Δ) examples. It is complemented by a matching positive result in Section 4
that shows that any class of finite VC-dimension can be learned by using a sample of polynomial
size, with any accuracy ε > 2η. The size of the sample required is polynomial in the usual PAC
parameters and in 1/Δ, where Δ = ε − 2η is the margin between the requested accuracy ε and the
above-mentioned lower bound.
The main, quite surprising, result (presented in Section 5) is another positive result showing that
efficient learning algorithms are still possible in spite of the powerful adversary. More specifically, we
present a composition theorem (analogous to [3, 8] but for the nasty-noise learning model) that shows
that any concept class that is constructed by composing concept classes that are PAC-learnable from
a hypothesis class of fixed VC-dimension, is efficiently learnable when using a sample subject to
nasty noise. This includes, for instance, the class of all concepts formed by any boolean combination
of half-spaces in a constant dimension Euclidean space. The complexity here is, again, polynomial
in the usual parameters and in 1=\Delta. The algorithm used in the proof of this result is an adaptation
to our model of the PAC algorithm presented in [8].
Our results may be compared to similar results available for the Malicious Noise model. For
this model, Cesa-Bianchi et al. [10] show that the accuracy of learning with malicious noise is lower
bounded by η/(1 − η); a matching algorithm for learning classes similar to those presented here
with malicious noise is presented in [8]. As for the Random Classification Noise model, learning
with arbitrarily small accuracy parameter, even when the noise rate is close to a half, is possible. Again, the
techniques presented in [8] may be used to learn the same type of classes we examine in this work
with Random Classification Noise.
Preliminaries
In this section we provide basic definitions related to learning in the PAC model, with and without
noise. A learning task is specified using a concept class, denoted C, of boolean concepts defined over
an instance space, denoted X . A boolean concept c is a function c : X 7! f0; 1g: The concept class
C is a set of boolean concepts: C ' f0; 1g X .
Throughout this paper we sometimes treat a concept as a set of points instead of as a boolean
function. The set that corresponds to a concept c is simply 1g. We use c to denote both
the function and the corresponding set interchangeably. Specifically, when a probability distribution
D is defined over X , we use the notation D(c) to refer to the probability that a point x drawn from
X according to D will have
2.1 The Classical PAC Model
The Probably Approximately Correct (PAC) model was originally presented by Valiant [22]. In this
model, the learning algorithm has access to an oracle PAC that returns on each call a labeled example
according to a fixed distribution D over X , unknown
to the learning algorithm, and c t 2 C is the target function the learning algorithm should "learn".
Definition 1: A class C of boolean functions is PAC-learnable using hypothesis class H in polynomial
time if there exists an algorithm that, for any c_t ∈ C, any input parameters 0 < ε, δ < 1,
and any distribution D on X, when given access to the PAC oracle, runs in time polynomial in log|X|,
1/δ, 1/ε, and with probability at least 1 − δ outputs a function h ∈ H for which
Pr_D[h(x) ≠ c_t(x)] < ε.
2.2 Models for Learning in the Presence of Noise
Next, we define the model of PAC-learning in the presence of Nasty Sample Noise (NSN for short).
In this model, a learning algorithm for the concept class C is given access to an (adversarial) oracle
NSN C;j (m). The learning algorithm is allowed to call this oracle once during a single run. The
learning algorithm passes a single natural number m to the oracle, specifying the size of the sample
it needs, and gets in return a labeled sample S 2 (X \Theta f0; 1g) m . (It is assumed, for simplicity, that
the algorithm knows in advance the number of examples it needs; the extension of the model for
scenarios where such a bound is not available in advance is given in Section 6.)
The sample required by the learning algorithm is constructed as follows: As in the PAC model,
a distribution D over the instance space X is defined, and a target concept c t 2 C is chosen. The
adversary then draws a sample S g of m points from X according to the distribution D. Having
full knowledge of the learning algorithm, the target function c t , the distribution D, and the sample
drawn, the adversary chooses E = E(S_g) points from the sample, where E(S_g) is a random variable.
The E points chosen by the adversary are removed from the sample and replaced by any other
point-and-label pairs by the adversary. The m − E points not chosen by the adversary remain
unchanged and are labeled by their correct labels according to c_t. The modified sample of m points,
denoted S, is then given to the learning algorithm. The only limitation that the adversary has on
the number of examples that it may modify is that this number should be distributed according to the binomial
distribution with parameters m and η, namely Pr[E = i] = (m choose i) η^i (1 − η)^{m−i} for i = 0, 1, ..., m,
where the probability is taken by first choosing S_g ∈ D^m and then choosing E according to the
corresponding random variable E(S_g).
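For intuition, here is a minimal Python sketch of such an oracle; the particular adversary shown (which simply flips the labels of the examples it selects) is only one illustrative choice, since the model allows arbitrary replacements.

```python
import random

def nasty_sample_oracle(m, eta, draw_example, target, adversary):
    """Generate a sample of size m subject to nasty sample noise of rate eta.

    draw_example() returns a point x ~ D; target(x) gives its correct label;
    adversary(clean_sample, E) returns the modified sample, changing at most E pairs.
    """
    clean = [(x, target(x)) for x in (draw_example() for _ in range(m))]
    # The number of corrupted examples is Binomial(m, eta), as the model requires.
    E = sum(random.random() < eta for _ in range(m))
    return adversary(clean, E)

def label_flipping_adversary(clean, E):
    # One possible (nasty classification) adversary: after seeing the whole sample,
    # flip the labels of E examples of its choice -- here, an arbitrary prefix.
    noisy = list(clean)
    for i in range(min(E, len(noisy))):
        x, y = noisy[i]
        noisy[i] = (x, 1 - y)
    return noisy
```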
Definition 2: An algorithm A is said to learn a class C with nasty sample noise of rate η ≥ 0 with
accuracy parameter ε > 0 and confidence parameter δ > 0 if, given access to any oracle NSN_{C,η}(m),
for any distribution D and any target c_t ∈ C it outputs a hypothesis h : X → {0,1} such that, with
probability at least 1 − δ,
Pr_D[h(x) ≠ c_t(x)] ≤ ε.
We are also interested in a restriction of this model, which we call the Nasty Classification Noise
learning model (NCN for short). The only difference between the NCN and NSN models is that the
NCN adversary is only allowed to modify the labels of the E chosen sample-points, but it cannot
modify the E points themselves. Previous models of learning in the presence of noise can also
be readily shown to be restrictions of the Nasty Sample Noise model: The Malicious Noise model
corresponds to the Nasty Noise model with the adversary restricted to introducing noise into points
that are chosen uniformly at random, with probability j, from the original sample. The Random
Classification Noise model corresponds to the Nasty Classification Noise model with the adversary
restricted so that noise is introduced into points chosen uniformly at random, with probability j,
from the original sample, and each point that is chosen gets its label flipped.
2.3 VC Theory Basics
The VC-dimension [24], is widely used in learning theory to measure the complexity of concept
classes. The VC-dimension of a class C, denoted VCdim(C), is the maximal integer d such that there
exists a subset Y ⊆ X of size d for which all 2^d possible behaviors are present in the class C, and
is infinite if such a subset exists for every natural d. It is well known (e.g., [4]) that, for any two
classes C and H (over X), the class of negations {c : X \ c ∈ C} has the same VC-dimension as the class
C, and the class of unions {c ∪ h : c ∈ C, h ∈ H} has VC-dimension at most VCdim(C) + VCdim(H) + 1.
Following [3] we define the dual of a concept class:
Definition 3: The dual H^⊥ ⊆ {0,1}^H of a class H ⊆ {0,1}^X is defined to be the set {x^⊥ : x ∈ X},
where x^⊥ : H → {0,1} is defined by x^⊥(h) = h(x) for every h ∈ H.
If we view a concept class H as a boolean matrix where each row represents a concept and each
column a point from the instance space, X , then the matrix corresponding to H ? is the transpose
of the matrix corresponding to H. The following claim, from [3], gives a tight bound on the VC
dimension of the dual class:
Claim 1: For every class H, ⌊log_2 VCdim(H)⌋ ≤ VCdim(H^⊥) < 2^{VCdim(H)+1}.
In the following discussion we limit ourselves to instance spaces X of finite cardinality. The main
use we make of the VC-dimension is in constructing ff-nets. The following definition and theorem
are from [7]:
Definition 4: A set of points Y ' X is an ff-net for the concept class H ' f0; 1g X under the
distribution D over X , if for every h 2 H such that D(h) - ff, Y " h 6= ;.
Theorem 1: For any class H ⊆ {0,1}^X of VC-dimension d, any distribution D over X, and any α, δ > 0, if
m ≥ max{ (4/α) log(2/δ), (8d/α) log(13/α) }
examples are drawn i.i.d. from X according to the distribution D, they constitute an α-net for H
with probability at least 1 − δ.
In [21], Talagrand proved a similar result:
Definition 5: A set of points Y ⊆ X is an α-sample for the concept class H ⊆ {0,1}^X under the
distribution D over X, if for every h ∈ H: | D(h) − |Y ∩ h| / |Y| | ≤ α.
Theorem 2: There is a constant c_1, such that for any class H ⊆ {0,1}^X of VC-dimension d, any
distribution D over X, and any α, δ > 0, if
m ≥ (c_1 / α^2) (d + log(1/δ))
examples are drawn i.i.d. from X according to the distribution D, they constitute an α-sample for
H with probability at least 1 − δ.
2.4 Consistency Algorithms
Let P and N be subsets of points from X. We say that a function h : X → {0,1} is consistent on
(P, N) if h(x) = 1 for every "positive point" x ∈ P and h(x) = 0 for every "negative point" x ∈ N.
A consistency algorithm (see [8]) for a pair of classes (C, H) (both over the same instance space X,
with C ⊆ H), receives as input two subsets P and N of the instance space, runs in time t(|P ∪ N|),
and satisfies the following. If there is a function in C that is consistent with (P, N), the algorithm
outputs "YES" and some h ∈ H that is consistent with (P, N); the algorithm outputs "NO" if no
consistent function exists in H (there is no restriction on the output in the case that there is a consistent
function in H but not in C).
Given a subset of points of the instance space Q ' X , we will be interested in the set of all
possible partitions of Q into positive and negative examples, such that there is a function h 2 H
and a function c ∈ C that are both consistent with this partition. This may be formulated as:
Π_{C,H}(Q) = { (P, N) : P ∪ N = Q, P ∩ N = ∅, and CON(P, N) = "YES" },
where CON is a consistency algorithm for (C, H).
The following is based on Sauer's Lemma [19]:
Lemma 1: For any set of points Q, |Π_{C,H}(Q)| = O(|Q|^d), where d = VCdim(H).
Furthermore, an efficient algorithm for generating this set of partitions (along with the corresponding
functions h ∈ H) can be presented, assuming that C is PAC-learnable from H of constant VC-dimension.
The algorithm's output is denoted SCON(Q) = { (P, N, h) : (P, N) ∈ Π_{C,H}(Q) and
h ∈ H is consistent with (P, N) }.
Information Theoretic Lower Bound
In this section we show that no learning algorithm (not even inefficient ones) can learn a "non-
trivial" concept class with accuracy ffl better than 2j under the NSN model; in fact, we prove that
this impossibility result holds even for the NCN model. We also give some results on the size of
samples required to learn in the NSN model with accuracy ffl ? 2j.
Definition 6: A class C over an instance space X is called non-trivial if there exist two
points x_1, x_2 ∈ X and two concepts c_1, c_2 ∈ C such that c_1(x_1) = c_2(x_1) and c_1(x_2) ≠ c_2(x_2).
Theorem 3: Let C be a non-trivial concept class, j be a noise rate and ffl ! 2j be an accuracy
parameter. Then, there is no algorithm that learns the concept class C with accuracy ffl under the
NCN model (with rate j).
Proof: We base our proof on the method of induced distributions introduced in [17, Theorem
1]. We show that there are two concepts c_1, c_2 ∈ C and a distribution D such that Pr_D[c_1(x) ≠ c_2(x)] = 2η,
and an adversary can force the labeled examples shown to the learning algorithm
to be distributed identically both when c_1 is the target and when c_2 is the target.
Let c_1 and c_2 be the two concepts whose existence is guaranteed by the fact that C is a non-trivial
class, and let x_1, x_2 be the two points that satisfy c_1(x_1) = c_2(x_1) and c_1(x_2) ≠ c_2(x_2). We
define the probability distribution D to be D(x_1) = 1 − 2η and
D(x_2) = 2η. Clearly, we indeed have Pr_D[c_1(x) ≠ c_2(x)] = 2η.
Now, we define the nasty adversary strategy (with respect to the above probability distribution
D). Let m be the size of the sample asked by the learning algorithm. The adversary starts by drawing
a sample S g of m points according to the above distribution. Then, for each occurrence of x 1 in the
sample, the adversary labels it correctly according to c t , while for each occurrence of x 2 the adversary
tosses a coin and with probability 1/2 it labels the point correctly (i.e., gives it the label c_t(x_2)) and with probability 1/2
it flips the label (to 1 − c_t(x_2)). The resulting sample of m examples is given by the adversary
to the learning algorithm. First, we argue that the number of examples modified by the adversary
is indeed distributed according to the binomial distribution with parameters j and m. For this, we
view the above adversary as if it picks independently m points and for each of them decides (as
above) whether to flip its label. Hence, it suffices to show that each example is labeled incorrectly
with probability j independently of other examples. Indeed, for each example independently, its
probability of being labeled incorrectly equals the probability of choosing x 2 according to D times
the probability that the adversary chooses to flip the label on an x 2 example; i.e. 2j \Delta
as needed. (We emphasize that the binomial distribution is obtained because D is known to the
adversary.)
Next observe that, no matter whether the target is c_1 or c_2, the examples given to the learning
algorithm (after being modified by the above nasty adversary) are distributed according to the
following probability distribution: the example (x_1, c_t(x_1)) appears with probability 1 − 2η, and each of
the examples (x_2, 0) and (x_2, 1) appears with probability η.
Therefore, according to the sample that the learning algorithm sees, it is impossible to differentiate
between the case where the target function is c 1 and the case where the target function is c 2 .
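A small simulation makes the indistinguishability argument concrete; the label of x_1 is fixed to 0 for concreteness (an illustrative assumption, since only c_1(x_1) = c_2(x_1) matters).

```python
import random
from collections import Counter

def simulate_theorem3_adversary(target_label_x2, eta, m, rng=random):
    """Simulate the adversary from the proof of Theorem 3 (label of x1 fixed to 0)."""
    sample = []
    for _ in range(m):
        if rng.random() < 1 - 2 * eta:
            sample.append(("x1", 0))            # x1 is always labeled correctly
        else:
            label = target_label_x2
            if rng.random() < 0.5:              # flip with probability 1/2
                label = 1 - label
            sample.append(("x2", label))
    return Counter(sample)

# The empirical distributions agree (up to sampling noise) whether the target
# labels x2 with 0 (as c1 might) or with 1 (as c2 might):
print(simulate_theorem3_adversary(0, eta=0.1, m=100_000))
print(simulate_theorem3_adversary(1, eta=0.1, m=100_000))
```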
Note that in the above proof we indeed take advantage of the "nastiness" of the adversary. Unlike
the malicious adversary, our adversary can focus all its "power" on just the point x 2 , causing it to
suffer a relatively high error rate, while examples in which the point is x 1 do not suffer any noise. We
also took advantage of the fact that E (the number of modified examples) is allowed to depend on
the sample (in our case, it depends on the number of times x 2 appears in the original sample). This
allows the adversary to further focus its destructive power on samples which were otherwise "good"
for the learning algorithm. Finally, since any NCN adversary is also a NSN adversary, Theorem 3
implies the following:
Corollary 4: Let C be a non-trivial concept class, j ? 0 be the noise rate, and ffl ! 2j be an
accuracy parameter. There is no algorithm that learns the concept class C with accuracy ffl under
the NSN model, with noise rate j.
Once we have settled 2j as the lower bound on the accuracy possible with a nasty adversary with
error rate j, we turn to the question of the number of examples that are necessary to learn a concept
class with some accuracy Again, in this section we are only considering
information-theoretic issues. These results are similar to those presented by Cesa-Bianchi et al. [10]
for the Malicious Noise model. Note, however, that the definition of the margin \Delta used here is
relative to a lower bound different than the one used in [10]. In the proofs of these results, we use
the following claim (see [10]) that provides a lower bound on the probability that a random variable
of binomial distribution deviates from its expectation by more than the standard deviation:
Claim 2: [10, Fact 3.2] Let S_{N,p} be a random variable distributed by the binomial distribution with
parameters N and p, and let q = 1 − p. For all N > 37/(pq):
Pr[ S_{N,p} ≥ ⌈Np + √(Npq)⌉ ] ≥ 1/19   (1)
Pr[ S_{N,p} ≤ ⌊Np − √(Npq)⌋ ] ≥ 1/19   (2)
Theorem 5: For any nontrivial concept class C, any noise rate η > 0, confidence parameter δ,
and margin Δ > 0, the sample size needed for PAC-learning C with accuracy ε = 2η + Δ
and confidence δ while tolerating nasty classification noise of rate η is at least
Ω(η/Δ²).
Proof: Let c_1 and c_2 be the two concepts whose existence is guaranteed by the fact that C is a non-trivial
class, and let x_1, x_2 be the two points that satisfy c_1(x_1) = c_2(x_1) and c_1(x_2) ≠ c_2(x_2).
Let us define a distribution D that gives weight ε to the point x_2 and weight 1 − ε to x_1, and denote
by f the target function (either c_1 or c_2).
The Nasty Classification Adversary will use the following strategy: for each pair of the form (x_2, f(x_2))
in the sample, with probability η/ε reverse the label (i.e., present to the learning algorithm
the pair (x_2, 1 − f(x_2)) instead). The rest of the sample (all the pairs of the form (x_1, f(x_1))) is left
unmodified. Note that for each of the m examples the probability that its classification is changed is
therefore exactly ε · (η/ε) = η,
so the number of points that suffer noise is indeed distributed according
to the binomial distribution with parameters η and m. The induced probability distribution on the
sample that the learning algorithm sees gives (x_1, f(x_1)) probability 1 − ε, (x_2, f(x_2)) probability ε − η, and (x_2, 1 − f(x_2)) probability η.
For contradiction, let A be a (possibly randomized) algorithm that learns C with accuracy ε using
a sample generated by the above oracle and whose size m is smaller than the bound in the theorem. Denote
by p_A(m) the error of the hypothesis h that A outputs when using m examples. Let B be the
Bayes strategy of outputting c_1 if the majority of the instances of x_2 are labeled c_1(x_2), and c_2 otherwise.
Clearly, this strategy minimizes the probability of choosing the wrong hypothesis. This implies p_B(m) ≤ p_A(m).
Define the following two events over runs of B on samples of size m: Let N denote the number
of examples in the sample showing the point x_2. BAD_1 is the event that at least ⌈N/2⌉ of them are
corrupted, and BAD_2 is the event that N ≤ 36η(η + Δ)/Δ². Given BAD_1, B will
answer incorrectly, as there will be at least as many examples showing x_2 with the wrong label as there will
be examples showing x_2 with the correct label.
Examine now the probability that BAD 2 will occur. Note that N is a random variable distributed
by the binomial distribution with parameters m and ffl (and recall that \Delta). We are interested
in:
Since the probability that N is large is higher when m is larger, and m is upper bounded by (17j(1 \Gamma
But this may be bounded, using Hoeffding's inequality to be at least
We therefore have:
On the other hand, if we assume that BAD 2 holds, namely that N - 36j(j+
additionally assume that N - 37(2j+ \Delta) 2 =(j(j+ \Delta)) then, by Claim 2 (with
and using the following inequality
$s
it follows that Pr[BAD 1 To see that the inequality of Equation (3) indeed holds
when BAD 2 holds, note that (3) is implied by:
s
which is, in turn, implied by the two conditions:2
s
s
It can be verified that these two conditions hold if we take N to be in the range we assume. 2 Since B
is an optimal strategy, and hence no
worse than a strategy that ignores some of the sample points, its error can only decrease if more
points are shown for x_2. Therefore, the same results will hold if we remove the lower bound on N.
We thus have that Pr[BAD_1 ∧ BAD_2] is bounded below by a constant exceeding the allowed failure probability δ, a contradiction.
A second type of a lower bound on the number of required examples is based on the VC dimension
of the class to be learned, and is similar to the results (and the proof techniques) of [7] for the standard
PAC model:
2 By our conditions on \Delta there must be at least one integer in the range we assume for N .
Theorem 6: For any concept class C with VC-dimension d ≥ 3, and for any 0 < ε ≤ 1/8 and
δ ≤ 1/12, the sample size required to learn C with accuracy ε = 2η + Δ and confidence δ when using
samples generated by a nasty classification adversary with error rate η is greater than
Ω(d/Δ).
Proof: Let {x_1, ..., x_d} ⊆ X be a set of d points shattered by C. Define a probability
distribution D as follows: D(x_i) = 8Δ/(d − 2) for i = 1, ..., d − 2, D(x_{d−1}) = 2η, and D(x_d) = 1 − 2η − 8Δ.
Assume for contradiction that at most (d − 2)/(32Δ) examples are used by the learning algorithm. We
let the nasty adversary behave as follows: it reverses the label on each example showing x_{d−1} with probability
1/2 (independently of any other sample points), making the labels of x_{d−1} appear as if they are just
random noise (also note that the probability of each example being corrupted by the adversary is
exactly 2η · (1/2) = η). Thus, with probability 1/2 the point x_{d−1} is misclassified by the learner's
hypothesis. The rest of the sample is left unmodified. Denote by BAD_1 the event that at least
half of the points x_1, ..., x_{d−2} are not seen by the learning algorithm. Given BAD_1, we denote by
UP the set of (d − 2)/2 unseen points with lowest indices, and define BAD_2 as the event that the
algorithm's hypothesis misclassifies at least (d − 2)/8 points from UP. Finally, let BAD_3 denote
the event that x_{d−1} is misclassified. It is easy to see that BAD_1 ∧ BAD_2 ∧ BAD_3 implies that the
hypothesis has error at least ε, as it implies that the hypothesis errs on (d − 2)/8 points where
each of these points has weight 8Δ/(d − 2) and on the point x_{d−1}, whose weight is 2η, making the
total error at least Δ + 2η = ε. Therefore, if an algorithm A can learn the class with confidence δ,
it must hold that Pr[BAD_1 ∧ BAD_2 ∧ BAD_3] ≤ δ. As noted before, x_{d−1} appears to be labeled by
random noise, and hence Pr[BAD_3] = 1/2. Moreover, BAD_3
is independent of BAD_1 and BAD_2, thus
Pr[BAD_1 ∧ BAD_2 ∧ BAD_3] = Pr[BAD_1 ∧ BAD_2] / 2.
As for the other events, since at most (d − 2)/(32Δ) examples are seen, the expected number of
points from {x_1, ..., x_{d−2}} that the learning algorithm sees is at most (d − 2)/4. From the Markov
inequality it follows that, with probability at least 1/2, no more than (d − 2)/2 such points are seen. Hence,
Pr[BAD_1] ≥ 1/2. Every unseen point will be misclassified by the learning algorithm with probability
at least half (since for each such point, the adversary may set the target to label the point with
the label that has lower probability to be given by the algorithm). Thus Pr[BAD_2 | BAD_1] is the
probability that a fair coin flipped (d − 2)/2 times shows heads at least (d − 2)/8 times. Using [10,
Fact 3.3] this probability can be shown to be at least 1/3. We thus have:
Pr[BAD_1 ∧ BAD_2 ∧ BAD_3] ≥ (1/2) · (1/2) · (1/3) = 1/12, which contradicts the requirement that this probability be smaller than δ.
This completes the proof.
Since learning with Nasty Sample Noise is not easier than learning with Nasty Classification
Noise, the results of Theorems 5 and 6 also hold for learning from a Nasty Sample Noise oracle.
Information Theoretic Upper Bound
In this section we provide a positive result that complements the negative result of Section 3. This
result shows that, given a sufficiently large sample, any hypothesis that performs sufficiently well on
the sample (even when this sample is subject to nasty noise) satisfies the PAC learning condition.
Formally, we analyze the following generic algorithm for learning any class C of VC-dimension d,
whose inputs are a certainty parameter δ > 0, the nasty error rate parameter η < 1/2, and the required
accuracy ε = 2η + Δ.
Algorithm NastyConsistent:
1. Request a sample S = {(x_i, b_i)}_{i=1}^m of size m = c (d + log(1/δ)) / Δ² (for an appropriate constant c).
2. Output any h ∈ C such that |{ i : h(x_i) ≠ b_i }| ≤ (η + Δ/4) m
(if no such h exists, choose any h ∈ C arbitrarily).
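As an illustration of step 2, the following sketch specializes NastyConsistent to the class of threshold functions on the real line (an assumption made here only for concreteness; the algorithm itself is generic) and searches for a hypothesis whose empirical disagreement is below the (η + Δ/4)m cutoff.

```python
def nasty_consistent_thresholds(sample, eta, delta_margin):
    """NastyConsistent specialized to thresholds h_t(x) = 1 iff x >= t on the line.

    sample: list of (x, label) pairs, possibly corrupted by a nasty adversary.
    Returns a threshold whose number of sample errors is at most (eta + delta_margin/4)*m,
    if one exists; otherwise an arbitrary threshold.
    """
    m = len(sample)
    cutoff = (eta + delta_margin / 4.0) * m
    xs = sorted(x for x, _ in sample)
    # Candidate thresholds: below all points, at each point, or above all points.
    candidates = [xs[0] - 1.0] + xs + [xs[-1] + 1.0]
    best_t, best_err = candidates[0], m + 1
    for t in candidates:
        err = sum((x >= t) != bool(y) for x, y in sample)
        if err < best_err:
            best_t, best_err = t, err
    # Any hypothesis respecting the cutoff is acceptable; fall back arbitrarily otherwise.
    return best_t if best_err <= cutoff else candidates[0]
```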
Theorem 7: Let C be any class of VC-dimension d. Then, for an appropriate choice of the constant c,
algorithm NastyConsistent is a PAC learning algorithm under nasty sample noise of rate η.
In our proof of this theorem (as well as in the analysis of the algorithm in the next section), we
use, for convenience, a slightly weaker definition of PAC learnability than the one used in Definition
1. We require the algorithm to output, with probability at least 1 − δ, a hypothesis h for which
Pr_D[h(x) ≠ c_t(x)] ≤ ε
(rather than a strict inequality). However, if we use the same algorithm but give it a slightly smaller
accuracy parameter (e.g., some ε' < ε), we will get an algorithm that learns using the original
criterion of Definition 1.
Proof: First, we argue that with "high probability" the number of sample points that are modified
by the adversary is at most m(j \Delta=4). As the random variable E is distributed according to the
binomial distribution with expectation jm, we may use Hoeffding's inequality [14] to get:
Pr
(by the choice of c), this event happens with probability of at most ffi=2.
Now, we note that the target function c t , errs on at most E points of the sample shown to
the learning algorithm (as it is completely accurate on the non-modified sample S g ). Thus, with
probability at least NastyConsistent will be able to choose a function h 2 C that
errs on no more that (j + \Delta=4)m points of the sample shown to it. However, in the worst case, these
errors of the function h occur in points that were not modified by the adversary. In addition, h may
be erroneous for all the points that the adversary did modify. Therefore, all we are guaranteed in
this case, is that the hypothesis h errs on no more that 2E points of the original sample S g . By
Theorem 2, there exists a constant c such that, by taking S_g to be of size
at least c (d + log(1/δ)) / Δ²,
the resulting sample S_g is a Δ/2-sample for the class of symmetric differences
between functions from C, with probability at least 1 − δ/2. By the union bound we therefore have that, with probability at least
1 − δ, E ≤ (η + Δ/4)m (so that h errs on at most 2E ≤ (2η + Δ/2)m points of S_g), and that S_g is a Δ/2-sample
for the class of symmetric differences, and so:
Pr_D[h(x) ≠ c_t(x)] ≤ 2η + Δ/2 + Δ/2 = 2η + Δ = ε,
as required.
5 Composition Theorem for Learning with Nasty Noise
Following [3] and [8], we define the notion of a "composition class": Let C be a class of boolean
functions over X. Define the class C* to be the set of all boolean functions F(x) that can
be represented as F(x) = f(g_1(x), ..., g_k(x)), where f is any boolean function, and g_i ∈ C for i = 1, ..., k.
We define the size of F = f(g_1, ..., g_k) to be k. Given a vector of hypotheses h_1, ..., h_t, we define,
following [8], the set W(h_1, ..., h_t) to be the set of sub-domains W_a = {x : (h_1(x), ..., h_t(x)) = a} for
all possible vectors a ∈ {0,1}^t.
We now show a variation of the algorithm presented in [8] that can learn the class C ? with a
nasty sample adversary, assuming that the class C is PAC-learnable from a class H of constant VC
dimension d. Our algorithm builds on the fact that a consistency algorithm CON for (C; H) can be
constructed, given an algorithm that PAC learns C from H [8]. This algorithm can learn the concept
class C ? with any confidence parameter ffi and with accuracy ffl that is arbitrarily close to the lower
bound of 2j, proved in the previous section. Its sample complexity and computational complexity
are both polynomial in k, 1/δ and 1/Δ, where Δ = ε − 2η.
Our algorithm is based on the following idea: Request a large sample from the oracle. Randomly
pick a smaller sub-sample from the sample retrieved. By randomly picking this sub-sample, the
algorithm neutralizes some of the power the adversary has, since the adversary cannot know which
examples are the ones that will be most "informative" for us. Then use the consistency algorithm
for (C; H) to find one representative from H for any possible behavior on the smaller sub-sample.
These hypotheses from H now define a division of the instance space into "cells", where each cell
is characterized by a specific behavior of all the hypotheses picked. The final hypotheses is simply
based on taking a majority vote among the complete sample inside each such cell.
To demonstrate the algorithm, let us consider (informally) the specific, relatively simple, case
where the class to be learned is the class of k intervals on the straight line (see Figure 1). The
algorithm, given a sample as input, proceeds as follows:
1. The algorithm uses a relatively small, random sub-sample to divide the line into sub-intervals.
Each two adjacent points in the sub-sample define such a sub-interval.
2. For each such sub-interval the algorithm calculates a majority vote on the complete sample.
The result is our hypothesis.
The number of points (which in this specific case is the number of sub-intervals) that the algorithm
chooses in the first step depends on k. Intuitively, we want the total weight of the sub-intervals
containing the target's end-points to be relatively small (this is what is called the ``bad part'' in the
formal analysis that follows). Naturally, there will be 2k such "bad" sub-intervals, so the larger k
is, the larger the sub-sample needed.
(Figure 1 shows the target concept, the random sub-sample with its induced sub-intervals, the "bad" sub-intervals around the target's end-points, and the algorithm's hypothesis.)
Figure 1: Example of NastyLearn for intervals.
Except for these "bad" sub-intervals, all other sub-intervals on
which the algorithm errs have to have at least half of their points modified by the adversary. Thus
the total error will be roughly 2j, plus the weight of the "bad" sub-intervals.
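A minimal Python sketch of this intervals special case follows; the sub-sample size is left as a parameter, since the value dictated by the formal analysis (a function of k, Δ and δ) is not spelled out in this informal description.

```python
import random
from bisect import bisect_right

def nasty_learn_intervals(sample, sub_sample_size, rng=random):
    """Learn a union of intervals on the line from a (possibly nasty) labeled sample.

    sample: list of (x, label) pairs.  Returns a predictor f(x) in {0, 1}.
    """
    # Step 1: a random sub-sample defines the sub-interval boundaries.
    boundaries = sorted(x for x, _ in rng.sample(sample, min(sub_sample_size, len(sample))))

    # Step 2: majority vote of the *complete* sample inside each sub-interval.
    votes = [[0, 0] for _ in range(len(boundaries) + 1)]   # [count of 0-labels, count of 1-labels]
    for x, y in sample:
        votes[bisect_right(boundaries, x)][int(y)] += 1
    majority = [int(ones >= zeros and ones > 0) for zeros, ones in votes]

    def predict(x):
        return majority[bisect_right(boundaries, x)]
    return predict
```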
Now, we proceed to a formal description of the learning algorithm. Given the constant d, the size
k of the target function, the bound on the error rate j, the parameters ffi and \Delta, and two additional
parameters M;N (to be specified below), the algorithm proceeds as follows:
Algorithm NastyLearn:
1. Request a sample S of size N .
2. Choose uniformly at random a sub-sample R ' S of size M .
3. Use the consistency algorithm for (C, H) to compute SCON(R), and let h_1, ..., h_t be the hypotheses from H appearing in it.
4. Output the hypothesis H(h_1, ..., h_t), computed as follows: For any W_a ∈ W(h_1, ..., h_t), if S ∩ W_a
is not empty, set H to be the majority of the labels in S ∩ W_a. If S ∩ W_a is empty, set H to be 0 on
any x ∈ W_a.
Theorem 8: Let N and M be sample sizes polynomial in k, d, 1/Δ and log(1/δ), chosen with
suitable constants as dictated by the analysis below. Then, Algorithm NastyLearn learns the class C* with accuracy 2η + Δ and
confidence δ in time polynomial in k, 1/Δ, and 1/δ.
As in Theorem 7, this theorem refers to the modified PAC criterion that requires the algorithm
to output, with probability at least 1 − δ, a function h for which
Pr_D[h(x) ≠ F(x)] ≤ 2η + Δ.
The same technique we mentioned for Algorithm NastyConsistent may be used to modify this algorithm
to be a PAC learning algorithm in the sense of Definition 1.
Before commencing with the actual proof, we present a technical lemma:
Lemma 2: Assuming N is set as in the statement of Theorem 8, with probability at least 1 − δ/4 the
number of points in which errors are introduced, E, is at most (η + Δ/12)N.
Proof of Lemma 2: Note that, by the definition of the model, E is distributed according to a
binomial distribution with parameters j and N . Thus, E behaves as the number of successes in
independent Bernoulli experiments, and the Hoeffding inequality [14] may be used to bound its
value:
Pr[ E > (η + Δ/12) N ] ≤ e^{−2(Δ/12)² N} = e^{−Δ² N / 72}.
Therefore, if we take N > (72/Δ²) ln(4/δ), we have, with probability at least
1 − δ/4, that E is at most
(η + Δ/12)N. Note that the value we have chosen for N in the statement of Theorem 8 is clearly
large enough.
We are now ready to present the proof of Theorem 8:
Proof: To analyze the error made by the hypothesis that the algorithm generates, let us denote
the adversary's strategy as follows:
1. Generate a sample of the requested size N according to the distribution D, and label it by the
target concept F . Denote this sample by S g .
2. Choose a subset S_out ⊆ S_g of size E, where E = E(S_g) is a random variable (as defined in
Section 2.2).
3. Choose (maliciously) some other set of points S in ' X \Theta f0; 1g of size E.
4. Hand to the learning algorithm the sample S = (S_g \ S_out) ∪ S_in.
Assume the target function F is of the form F = f(g_1, ..., g_k). For each i, denote by h_{j_i}
the hypothesis that the algorithm has chosen in step 3 that exhibits the
same behavior g_i has over the points of R (from the definition of
SCON we are guaranteed that such
a hypothesis exists). By definition, there are no points from R in the symmetric difference h_{j_i} Δ g_i.
As the VC-dimension of both the class C of all g i 's and the class H of all h i 's is d, the class of
all their possible symmetric differences also has VC-dimension O(d) (see Section 2.3). By applying
Theorem 1, when viewing R as a sample taken from S according to the uniform distribution, and
by choosing M to be as in the statement of the theorem, R will be an α-net (with respect to the
uniform distribution over S) for the class of symmetric differences, for the value of α dictated by M, with probability
at least 1 − δ/4. Note that there may still be points in S which are in h_{j_i} Δ g_i. Hence,
using (4), we get that |S ∩ (h_{j_i} Δ g_i)| ≤ α |S| holds,
with probability at least 1 − δ/4, simultaneously for all i.
For every sub-domain B ∈ W(h_1, ..., h_t), define N_B = |S_g ∩ B| and
N^in_B = |S_in ∩ B|, together with the quantities N^α_B, N^{out,g}_B and N^{out,b}_B described next.
In words, N_B and N^in_B
simply stand for the size of the restriction to the sub-domain B of the original (noise-free) sample
S_g and of the noisy examples S_in introduced by the adversary, respectively. As for the rest
of the definitions, they are based on the distinction between the "good" part of B, where the g_i's
and the h_{j_i}'s behave the same, and the "bad" part, which is present due to the fact that the g_i's and
the h_{j_i}'s exhibit the same behavior only on the smaller sub-sample R, rather than on the complete
sample S. We use N^α_B to denote the number of sample points in the bad part of B, and N^{out,g}_B
and N^{out,b}_B to denote the number of sample points that were removed by the adversary from the good
and bad parts of B, respectively.
Since our learning algorithm decides on the classification in each sub-domain by a majority vote,
the hypothesis will err on (the good part of) a sub-domain B if the number of
examples left untouched in B is less than the number of examples in B that were modified by the
adversary, plus those that were misclassified by the h_{j_i}'s (with respect to the g_i's). This may be
formulated as the condition N^in_B + 2N^α_B + N^{out,g}_B ≥ N_B.
Therefore, the total error the algorithm may experience is at most the total weight (under D) of the bad
parts h_{j_i} Δ g_i, plus the total weight of the sub-domains B satisfying this condition.
We now calculate a bound for each of the two terms above separately. To bound the second term,
note that by Theorem 2 our choice of N guarantees S_g to be an α'-sample, for a suitable α' proportional to Δ,
for our domain of sub-domains
with probability at least 1 − δ/4. Note that from the definition of W(h_1, ..., h_t) and from the Sauer
Lemma [19] we have that |W(h_1, ..., h_t)| is polynomial in t, with an exponent depending only on the (constant) VC-dimension involved.
Our choice of N thus guarantees, with probability at least 1 − δ/4, that the total weight of the sub-domains
B with N^in_B + 2N^α_B + N^{out,g}_B ≥ N_B is at most a suitable fraction of Δ.
From the above choice of N, it follows that S_g is also a sufficiently fine Δ-proportional sample for the class of
symmetric differences of the form h_{j_i} Δ g_i. Thus, with probability at least 1 − δ/4, the first term is
bounded by a suitable fraction of Δ as well.
The total error made by the hypothesis (assuming that none of the four bad events happen) is
therefore bounded by
Pr_D[ H(x) ≠ F(x) ] ≤ 2η + Δ,
as required. This bound holds with probability at least 1 − δ.
6 Conclusion
We have presented the model of PAC learning with nasty noise, generalizing on previous models. We
have proved a negative information-theoretic result, showing that there is no learning algorithm that
can learn any non-trivial class with accuracy better than 2j, paired with a positive result showing this
bound to be tight. We complemented these results with lower bounds on the sample size required for
learning with accuracy 2η + Δ. We have also shown that for a wide variety of "interesting" concept
classes, an efficient learning algorithm in this model exists. Our negative result can be generalized
for the case where the learning algorithm uses randomized hypotheses, or coin rules (as defined in
[10]); in such a case we get an information-theoretic lower bound of η for the achievable accuracy,
compared to a lower bound of η/(2(1 − η)) proved in [10] for learning with Malicious Noise of rate η.
While the partition into two separate variants: the NSN and the NCN models seem intuitive and
well-motivated, it remains an open problem to come up with any results that actually separate the
two models. Both the negative and positive results we presented in this work apply equally to both
the NSN and the NCN models.
Finally, note that the definition of the nasty noise model requires the learning algorithm to know
in advance the sample size m (or an upper bound on it). The model however can be extended so as
to deal with scenarios where no such bound is known to the learning algorithm. There are several
scenarios of this kind. For example, the sample complexity may depend on certain parameters (such
as the "size" of the target function) which are not known to the algorithm 3 . The adversary, who knows
the learning algorithm and knows the target function (and in general can know all the parameters
hidden from the learning algorithm) can thus "plan ahead" and draw in advance a sample S g of size
which is sufficiently large to satisfy, with high probability, all the requests the learning algorithm will
make. It then modifies S g as defined above and reorders the resulted sample S randomly 4 . Now, the
learning algorithm simply asks for one example at a time (as in the PAC model) and the adversary
supplies the next example in its (randomly-ordered) set S. If the sample is exhausted (which may
happen in those cases where we have only an expected sample-complexity guarantee), we say that
the learning algorithm has failed; however, when using a large enough sample (with respect to 1=ffi),
this will happen with sufficiently small probability.
--R
"General Bounds on Statistical Query Learning and PAC Learning with Noise via Hypothesis Boosting"
"Learning from Noisy Examples"
"A Composition Theorem for Learning Algorithms with Applications to Geometric Concept Classes"
"Combinatorial Variability of Vapnik-Chervonenkis Classes with Applications to Sample Compression Schemes"
"Learning with Unreliable Boundary Queries"
"Weakly Learning DNF and Characterizing Statistical Query Learning Using Fourier Analysis"
"Learnability and the Vapnik-Chervonenkis dimension"
"A New Composition Theorem for Learning Algorithms"
"Noise-Tolerant Distribution-Free Learning of General Geometric Concepts"
"Sample-efficient strategies for learning in the presence of noise"
"Learning in Hybrid Noise Environments Using Statistical Queries"
"PAC Learning with Constant-Partition Classification Noise and Applications to Decision Tree Induction"
"On Learning from Noisy and Incomplete Examples"
"Probability Inequalities for Sums of Bounded Random Variables"
"Efficient Noise-Tolerant Learning from Statistical Queries"
"Learning in the Presence of Malicious
"Toward Efficient Agnostic Learning"
"On the Density of Families of sets"
"The Design and Analysis of Efficient Learning Algorithms"
"Sharper Bounds for Gaussian and Empirical Processes"
"A Theory of the Learnable"
"Learning Disjunctions of Conjunctions"
"On the Uniform Convergence of Relative Frequencies of Events to Their Probabilities"
--TR
A theory of the learnable
Learnability and the Vapnik-Chervonenkis dimension
The design and analysis of efficient learning algorithms
Learning in the presence of malicious errors
Efficient noise-tolerant learning from statistical queries
Weakly learning DNF and characterizing statistical query learning using Fourier analysis
Toward Efficient Agnostic Learning
Learning with unreliable boundary queries
On learning from noisy and incomplete examples
Noise-tolerant distribution-free learning of general geometric concepts
A composition theorem for learning algorithms with applications to geometric concept classes
A new composition theorem for learning algorithms
Combinatorial variability of Vapnik-Chervonenkis classes with applications to sample compression schemes
Sample-efficient strategies for learning in the presence of noise
Learning From Noisy Examples
--CTR
Marco Barreno , Blaine Nelson , Russell Sears , Anthony D. Joseph , J. D. Tygar, Can machine learning be secure?, Proceedings of the 2006 ACM Symposium on Information, computer and communications security, March 21-24, 2006, Taipei, Taiwan | nasty noise;PAC learning;learning with noise |
610812 | Foundation of a computable solid modelling. | Solid modelling and computational geometry are based on classical topology and geometry in which the basic predicates and operations, such as membership, subset inclusion, union and intersection, are not continuous and therefore not computable. But a sound computational framework for solids and geometry can only be built in a framework with computable predicates and operations. In practice, correctness of algorithms in computational geometry is usually proved using the unrealistic Real RAM machine model of computation, which allows comparison of real numbers, with the undesirable result that correct algorithms, when implemented, turn into unreliable programs. Here, we use a domain-theoretic approach to recursive analysis to develop the basis of an effective and realistic framework for solid modelling. This framework is equipped with a well defined and realistic notion of computability which reflects the observable properties of real solids. The basic predicates and operations on solids are computable in this model which admits regular and non-regular sets and supports a design methodology for actual robust algorithms. Moreover, the model is able to capture the uncertainties of input data in actual CAD situations. | Introduction
Correctness of algorithms in computational geometry are usually proved using the Real RAM
machine [22] model of computation. Since this model is not realistic, correct algorithms, when
implemented, turn into unreliable programs. In CAGD modeling operators, the effect of rounding
errors on consistency and robustness of actual implementations is an open question, which is
handled in industrial software by various unreliable and expensive "up to epsilon" heuristics that
remain very unsatisfactory.
The authors claim that a robust algorithm is one whose correctness is proved with the assumption
of a realistic machine model [17, 18]. A branch of computer science, called recursive
analysis, defines precisely what it means, in the context of the realistic Turing machine model of
computation, to compute objects belonging to non-countable sets such as the real numbers.
In this paper, we use a domain-theoretic approach to recursive analysis to develop the basis
of an effective framework for solid modeling. The set-theoretic aspects of solid modeling is revis-
ited, leading to a theoretically motivated model that shows some interesting similarities with the
Requicha Solid Model [23, 24]. Within this model, some unavoidable limitations of solid modeling
computations are proved and a sound framework to design specifications for feasible modeling operators
is provided. Some consequences in computation with the boundary representation paradigm
are sketched that can incorporate existing methods [13, 28, 16, 14, 15] into a general, mathematically
well-founded theory. Moreover, the model is able to capture the uncertainties of input
data [8, 19] in actual CAD situations.
We need the following requirements for the mathematical model:
(1) the notion of computability of solids has to be well defined,
(2) the model has to reflect the observable properties of real solids,
(3) the model has to be closed under the Boolean operations,
(4) non-regular sets 1 have to be captured by the model as well,
(5) it has to support a design methodology for actual robust algorithms.
In Section 2, we outline some elements of recursive analysis and domain theory used in subsequent
sections. Section 3 presents the solid domain, a mathematical model for computable rigid solids.
In Section 4, we give an illustration, on a simple case, how one can design a robust algorithm in
the light of our domain-theoretic approach.
Recursive analysis and domain theory
In this section, we briefly outline some elements of recursive analysis [26, 7, 5, 21] and domain
theory [1, 29, 2] that we need in this paper. We first deal with N, the set of all non-negative
integers. A function f : N ! N, is recursive if it is computable by a general purpose computer
(e.g. a Turing machine or a C++ program); this means that there is a finite program written in
some general language such that its output is f(n) whenever its input is any n 2 N. A recursively
enumerable subset of N, or an r.e. set, is the image of a recursive function. A recursive set is
an r.e. set whose complement is also r.e. There are r.e. sets which are not recursive, but their
construction is non-trivial. Next, we consider the set Q of rational numbers. Since Q is countable,
it is in one-to-one correspondence with N and we can write
only if Therefore, computability over Q reduces to computability over N.
The theory of computability over the set of real numbers R, which is uncountable, is more
involved. Since the set of (finite) programs written in a general purpose computer is countable,
it follows that the set of computable real numbers, i.e. those which are the output of a finite
program, is also countable. These can be characterized in terms of recursive functions. A real
number r is computable if there exists recursive functions f and g such that
This means that r is the effective limit of a computable
sequence of rational numbers. We say that a real number is lower (respectively, upper) semi-
computable if it is the limit of an increasing (respectively, decreasing) computable sequence of
rational numbers. It then follows that a real number is computable if and only if it is lower and
upper semi-computable. Similarly, a function f : [a; b] ! R is computable if it is the effective
limit (in the sup norm) of a computable sequence of rational polynomials. Intuitively, in a suitable
representation such as the sign binary system, a real number is given by an infinite sequence of
digits and a computable function is one which can compute any finite part of the output sequence
by reading only a finite part of the input sequence. From this, it follows that a computable function
is always continuous with respect to the Euclidean topology of R.
Domain theory was originally introduced independently by Scott [27] as a mathematical theory
of semantics of programming languages and by Ershov [12] for studying partial computable functionals
of finite type. A domain is a structure for modeling a computational process or a data type
with incomplete or uncertain specified information. It is a partially ordered set where the partial
order corresponds to some notion of information. A simple example is the domain ftt; ff; ?g of the
Boolean values tt and ff together with a least element ? below both. One thinks of ? here as the
1 A set is regular if it the closure of its interior.
undefined Boolean value. A domain is also equipped with a notion of completion (as in Cauchy
completeness for metric spaces) and a notion of approximation. There is a so-called Scott topology
on a domain which is T 0 and is such that every open set is upward closed, i.e. whenever a Scott
open set contains an element, it also contains any element above that element. See the appendix
for precise definitions. A class of so-called !-continuous domains has in recent years been successfully
used in modeling computation in a number of areas of analysis [9]. An !-continuous domain
has a countable subset of basis elements such that every element of the domain can be completely
specified by the set of basis elements which approximate it. One can use this countable basis to
provide an effective structure for the domain and obtain the notions of a computable element of an
effectively given domain and of a computable function between two effectively given !-continuous
domains. We give two examples of useful continuous domains in this section which will motivate
the idea of the solid domain introduced in the next section.
The interval domain I[0; 1] n of the unit box [0; 1] n ae R n is the set of all non-empty n-dimensional
sub-rectangles in [0; 1] n ordered by reverse inclusion. A basic Scott open set is given,
for every open subset O of R n , by the collection of all rectangles contained in O. The map
x is an embedding onto the set of maximal elements of I[0; 1] n . Every
maximal element fxg can be obtained as the least upper bound (lub) of an increasing chain of
elements, i.e. a shrinking, nested sequence of sub-rectangles, each containing fxg in its interior
and thereby giving an approximation to fxg or equivalently to x. The set of sub-rectangles with
rational coordinates provides a countable basis. One can similarly define, for example, the interval
domain IR n of R n . For the interval domains I[\Gamma1; 1] and IR , where R is the one point
compactification of R, have been used to develop a feasible framework for exact real arithmetic
using linear fractional transformations [10, 20].
An important feature of domains, in the context of this paper, is that they can be used to obtain
computable approximations to operations which are classically non-computable. For example,
comparison of a real number with 0 is not computable. However, the function neg : I[\Gamma1;
neg([a; b]) =!
is the best computable approximation to this predicate.
The upper space UX of a compact metric space X is the set of all non-empty compact subsets
of X ordered by reverse inclusion. In fact, UX is a generalization of the interval domain and has
similar properties; for example a basic Scott open set is given, for every open subset O ae X , by the
collection of all non-empty compact subsets contained in O. As with the interval domain, the map
x is an embedding onto the set of maximal elements of UX . The upper
space gives rise to a computational model for fractals and for measure and integration theory [9].
The idea of the solid domain in the next section is closely linked with the upper space of [0; 1] n .
3 A domain-theoretic model
In this section, we introduce the solid domain, a mathematical model for representing rigid solids.
We focus here on the set-theoretic aspects of solid modeling as Requicha did in introducing the
r-sets model [23]. Our model is motivated by requirements (1) to (5) given in the introduction.
For any subset A of a topological space, A, A ffi , @A and A c denote respectively the closure, the
interior, the boundary and the complement of A. The regularization of a subset A is defined, by
Requicha [23, 24], as the subset A ffi . We say that a set is regular if it is equal to its regularization.
3.1 The solid domain
The solid domain S[0; 1] n of the unit cube [0; 1] n ae R n is the set of ordered pairs (A; B) of compact
subsets of [0; endowed with the information order:
. The elements of S[0; 1] n are called partial solids.
Proposition 3.1 (S[0; is a continuous domain and
Proposition 3.2 For any (A; B) 2 S[0; 1] n , there exists a subset Y of [0; 1] n such
One can take for example: We say that represents the
subset Y . It follows that the partial order S[0; 1] n is isomorphic with the quotient of the power
set of [0; 1] n under the equivalence relation with the ordering
Given any subset X of [0; 1] n , the classical membership predicate 2: [0; continuous
except on @X . It follows that the best continuous approximation of this predicate is
where the value ? is taken on @X (recall that any open set containing
? contains the whole set ftt; ff; ?g). Then, two subsets are equivalent if and only if they have
the same best continuous approximation of the membership predicate. By analogy with general
set theory for which a set is completely defined by its membership predicate, the solid domain
can be seen as the collection of subsets that can be distinguished by their continuous membership
predicates. The definition of the solid domain is then consistent with requirement (1) since a
computable membership predicate has to be continuous.
Our definition is also consistent with requirement (2) in a closely related way. We consider the
idealization of a machine used to measure mechanical parts. Two parts corresponding to equivalent
subsets cannot be discriminated by such a machine. Moreover, partial solids, and, more generally,
domain-theoretically defined data types (cf. Section allow us to capture partial, or uncertain
input data [8, 19] encountered in realistic CAD situations.
Starting with the continuous membership predicate, the natural definition for the complement
would be to swap the values tt and ff. This means that the complement of (A; B) is (B; A), cf.
requirement (3).
As for requirement (4), the figure below represents a subset X of [0; 1] 2 that is not regular. Its
regularization removes both the external and internal "dangling edge". This set can be captured in
our framework but not in the Requicha model. Here and in subsequent figures, the two components
A and B of the partial solid are depicted separately below each picture for clarity.
Proposition 3.3 The maximal elements of S[0; 1] n are precisely those that represent regular sets.
In other words, maximal elements are of the form (A; B) such that A and B are regular with
Next we consider the Boolean operators. We first note that the regularized union [23, 24]
of two adjacent three dimensional boxes (i.e. product of intervals) is not computable, since, to
decide whether the adjacent faces are in contact or not, one would have to decide the equality of
two real numbers which is not computable [21]. Requirements (1) and (3) entail the existence of
Boolean operators which are computable with respect to a realistic machine model (e.g. the Turing
machine).
A
In order to define Boolean operators on the solid domain, we obtain the truth table of logical
Boolean operators on ftt; ff; ?g. Consider the logical Boolean operator "or", which, applied to the
continuous membership predicates of two partial solids, would define their union.
This is indeed the truth table for parallel or in domain theory; see [2, page 133]. One can likewise
build the truth table for "and". Note the similarities with the (In,On,Out) points classifications
used in some boundary representation based algorithms [25, 3]. From these truth tables follow the
definition of Boolean operators on partial solids:
Beware that, given two partial solids representing adjacent boxes, their union would not represent
the set-theoretic union of the boxes, as illustrated in the figure below.
A 1o/oo A 2
We have defined the continuous membership predicate for points of [0; 1] n . In order to be able
to compute this predicate, we extend it to the interval domain I[0; 1] n by defining 2: I[0;
A
ff
A A
A A
Proposition 3.4 The following maps are continuous:
ffl 2: I[0;
Similarly, one can define the continuous predicate ae: S[0; ?g.
3.2 Computability on the solid domain
In order to endow S[0; 1] n with a computability structure, we introduce two different countable
bases that lead to the same notion of computability, but correspond to different types of algorithms
in use.
A rational hyperplane is a subset of R n of the form: f(x i
(0 are rational numbers such that at least one of them is non-zero. A rational polyhedron
is a regular subset of [0; 1] n whose boundary is included in a finite union of rational hyper-planes.
Notice that a rational polyhedron may not be connected and may also be the empty set. A dyadic
number is a rational number whose denominator is a power of 2. A dyadic voxel set is a finite
union of cubes, each the product of n intervals whose endpoints are dyadic numbers. Notice that
every voxel set is a rational polyhedra.
A partial rational polyhedron (PRP) is an element (A; B) 2 S[0; 1] n such that A and B are
rational polyhedra. In the following, PRP stands for the set of PRP's. A partial dyadic voxel set
(PDVS) is an element (A; B) 2 S[0; 1] n such that A and B are dyadic voxel sets. PDVS stands
for the set of PVDS's.
The set PRP is effectively enumerable, that is, each PRP can be represented by a finite set
of integers (i.e. the rational coordinates of the vertices and the incidence graph) and there exists
a program to compute a one to one correspondence between N and PRP so that we can write
\Deltag.
Proposition 3.5 PRP forms a countable basis for the solid domain S[0; 1] n . Moreover, the solid
domain is effectively given with respect to the enumeration fR 0 \Deltag of this basis.
Therefore: (i) every element of S[0; 1] n is the least upper bound of a sequence of PRP's approximating
it, and (ii) the predicate R k - R j is r.e. in k; j. In fact, this predicate is recursive, that is,
there exists a program able to decide, for any pair of integers k and j, whether or not R k - R j .
From a more practical point of view, this implies that the Boolean operators on rational polyhedra
are computable (see [6] for an efficient implementation), and that a subset is compact if and
only if it is the intersection of a countable set of rational polyhedra.
By the general notion of computability in domains (see the appendix), an element (A; B) 2
computable if the set fkjR k - (A; B)g is r.e. We obtain the same class of computable
partial solids if we replace the PRP basis with the PDVS basis.
Our notion of computability is somewhat weaker that one could expect. Consider a computable
partial solid (A; B) and a computable point x 2 [0; 1] n n A. There exists a program to compute
an increasing sequence converging to (A; B) and a program to compute an
increasing sequence I k of rational intervals of I[0; 1] n converging to x. From these two programs,
one can obtain a program to compute the increasing sequence of rational numbers representing the
square of the minimum distance between A k and I k . It follows that the minimum distance between
A and x is a lower semi-computable real number. However, this distance may not be computable.
We introduce a stronger notion of computability, that will make the above distance computable.
An element (A; B) 2 S[0; 1] n is recursive if the set fkjR k - (A; B)g is recursive. It can be shown [4]
that (A; B) is recursive if and only if there exists a program to compute two nested sequences of
rational polyhedra such that A and B are the effective limits of the sequences with respect to the
Hausdorff metric. In [4], several related notions of computability for compact sets are given. In
this setting, our notion of computable partial solid (A; B) means that A and B are co.r.e. and our
notion of recursive partial solid means that A and B are recursive. We have now a positive and a
negative result.
Proposition 3.6 The Boolean operators over S[0; 1] n are computable.
However, the intersection of two recursive partial solids may not be recursive as illustrated in
the figure below. The two initial recursive partial solids represent regular sets.
The details of construction of will be presented in the full version of the paper. The
crucial property is that the left endpoint of the lower horizontal line segment is the limit of a
computable, increasing, bounded sequence of rational numbers which is lower semi-computable
but not computable.
The intersection of A 1 and A 2 is therefore a horizontal segment whose left-end point is not
computable. Therefore, requirement(3) prevents us to choose recursive partial solids for our model.
A 1-A 2
However, we can choose the following notion which is stronger than computability but is neither
weaker nor stronger than recursiveness.
We say an element (A; B) 2 S[0; 1] n is Lebesgue computable if it is computable and if the
Lebesgue measures of A and B are computable. Note that
is the Lebesgue measure of C ae R n . Therefore, (A; B) is Lebesgue computable if and only if there
exists a program to compute an increasing sequence
and
Proposition 3.7 The Boolean operators over S[0; 1] n are Lebesgue computable. In other words,
there exists a program that, given two increasing sequences of PRP's defining two partial solids
such that their Lebesgue measures are effectively converging, computes an increasing sequence of
PRP's defining their intersection such that their Lebesgue measures is effectively converging.
A Lebesgue computable partial solid (A; B), with can be manufactured with
an error that can be made as small as we want in volume, assuming an idealized manufacturing
device.
The table below compares in general the three notions for computable solids.
Partial solid Distance to a point Boolean operators Lebesgues measure
computable semi-computable computable non-computable
recursive computable non-computable non-computable
Lebesgues computable semi-computable computable computable
At this stage of our work, our model of choice would be the Lebesgue computable partial solids,
since they are stable under Boolean operators.
4 Robustness Issues
We illustrate, on a very rudimentary class of boundary represented solids, how our domain-theoretic
approach matches requirement (5). Usually, robustness issues show up in two (related) way: (i)
A numerical computation is not well-specified in case of discontinuities, as for example in the
intersection of tangential, partially overlapping surfaces. (ii) The values of the logical predicates
evaluated from numerical computations are inconsistent, resulting in an invalid output or the
catastrophic failure of the algorithm.
4.1 The disk domain
We consider d, the set of disks in the Euclidean plane. Each disk a of d is represented by the three
real numbers giving the coordinates of the center and the radius: (x a ; y a ; R a ), with R a - 0. By an
abuse of notation, such an element a denotes both the real triple defining it and the corresponding
disk in the plane; the context always makes it clear which meaning we have in mind.
We now define the domain D of interval disks. It is the set of interval triples
with
K
K and
add the bottom element ? to D and partially
order it with reverse inclusion: K v L 3). The domain D is isomorphic
with its maximal elements can be identified with the elements of d. An element
is said to be rational if x \Gamma
K and R
K are rational numbers. ?From the
general theory of computability in domains (see the appendix), K is computable if it is the least
upper bound of an increasing computable sequence of rational interval disks, that is if there exists
a program to compute such an increasing sequence. This definition is consistent with the solid
domain introduced in Section 3 as we have the where the image f(K) of an
interval disk K 2 D is the partial solid
a
It can be easily shown that f is monotonic, continuous and in fact computable with respect to
the natural effective structure on D induced from I(R 2 \Theta R + ). When restricted to interval disks
contained in [0; 1] 2 , f is in fact an embedding.
4.2 The domain of the relative position of disks
We consider here the combinatorial part of the computation of Boolean operators over disks. For
this purpose we consider the following map from d \Theta d to R 3
and the predicates 3 from d \Theta d to the domain f\Gamma; +; ?g defined, for
0:
The domain topology on f\Gamma; +; ?g ensures that these predicates are continuous. Because of the
inequalities, the range of made of 11 values,
defining the relative position of the two disks. We denote this set of 11 values by F which is a
subset of the domain f\Gamma; +; ?g 3 , whose order relation, induced by the order relation on f\Gamma; +; ?g,
is represented in the figure below.
4.3 Extension to D and actual computation
We define the
Where inf denotes infimum or the greatest lower bound, which exists for every subset since F is
a bounded complete domain (see the appendix). P is the best continuous extension of p. It is
possible to compute the image P (K; L) of any pair (K; L) of rational interval disks, as this reduces
to the evaluation of the sign of a few polynomials over Q (see [6]). Then, from two increasing
sequences of rational interval disks (increasing with respect to v) defining a pair of interval disks,
one can compute an increasing sequence in F defining their relative position.
The actual image is computed after a finite time. However, when this image is not a maximal
element of F, one never knows if the output will be refined by using a more accurate input (i.e.
more terms from the two rational interval disk sequences). This behaviour is consistent with
requirement (2): in the physical world, the statement "two disks are tangent", for example, means
that there are tangent up to a relevant accuracy and a more accurate measuring may reveal that
they actually intersect or are in fact disjoint.
a b
a b
a b
a b
a b
a b
a b a
A# A#
A# A#A#A A#A
#A# A#A A
#A A
a b
A# A#A #A A# A#A# A#
a a
5 Conclusion
The solid domain described here satisfies the requirements of computability, having observable
properties, closure under Boolean operations, admission of non-regular sets and the design of robust
algorithms as stated in the introduction. The classical analysis framework, allowing discontinuous
behaviour and exact real number comparisons, is neither realistic as a model of our interaction with
the physical world (measuring, manufacturing), nor realistic as a basis for the design of algorithms
implemented on realistic machines, which are only able to deal with finite data. The authors believe
that the domain-theoretic approach, even at this initial stage of application to solid modeling and
computational geometry, is a powerful mathematical framework both to model partial or uncertain
data and to guide the design of robust software.
The solid model can be defined for a more general class of topological spaces, in particular for
locally compact Hausdorff spaces such as R n . It can also be represented equivalently in terms of
pairs of open sets or equivalently in terms of continuous functions from the space to the Boolean
domain ftt; ff; ?g. We will deal with these issues in a future paper.
In our future work, we will use the domain-theoretic framework to capture more information
on solids and geometric objects. In particular, we will deal more generally with the boundary representation
and the differential properties of curves and surfaces (that is the C k or G k properties).
We will also focus on actual computations, applying the methodology illustrated in Section 4 to
more complex situations.
Appendix
In this section, we give the formal definitions of a number of notions in domain theory used in the
paper. We think of a partially ordered set (poset) (P; v) as the set of output of some computation
such that the partial order is an order of information: in other words, a v b indicates that a has
less information than b. For example, the set f0; 1g 1 of all finite and infinite sequences of bits 0
and 1 with a v b if the sequence a is an initial segment of the sequence b is a poset and a v b
simply means that b has more bits of information than a. A non-empty subset A ' P is directed
if for any pair of elements there exists c 2 A such that a v c and b v c. A directed set
is therefore a consistent set of output elements of a computation: for every pair of output a and
b, there is some output c with more information than a and b. A directed complete partial order
(dcpo) or a domain is a partial order in which every directed subset D ' P has a least upper
bound (lub) denoted
F
A. It is easily seen that f0; 1g 1 is a dcpo. We say that a dcpo is pointed
if it has a least element which is usually denoted by ? and is called bottom.
For two elements a and b of a dcpo we say a is way-below or approximates b, denoted by a - b,
if for every directed subset A with b v
F
A there exists c 2 A with a v c. The idea is that
a is a finitary approximation to b: whenever the lub of a consistent set of output elements has
more information than b, then already one of the input elements in the consistent set has more
information than a. In f0; 1g 1 , we have a - b iff a v b and a is a finite sequence. The closed
subsets of the Scott topology of a domain are those subsets C which are downward closed (i.e.
closed under taking lub's of directed subsets (i.e. for every directed
subset A ' C we have
F
A 2 C).
A basis of a domain D is a subset B ' D such that for every element x 2 D of the domain
the set B fy 2 Bjy - xg of elements in the basis way-below x is directed with
F
An (!)-continuous domain is a dcpo with a (countable) basis. In other words, every element
of a continuous domain can be expressed as the lub of the directed set of basis elements which
approximate it. A domain is bounded complete if every bounded subset has a lub; in such a domain
every subset has an infimum or greatest lower bound. One can easily check that f0; 1g 1 is an
!-continuous dcpo for which the set of finite sequences form a countable basis. It can be shown
that a function f dcpo's is continuous with respect to the Scott topology if and
only if it is monotone (i.e. a v b ) f(a) v f(b)) and preserves lub's of directed sets i.e. for any
directed A ' D, we have f(
F
F
a2A f(a).
An !-continuous domain D with a least element ? is effectively given wrt an enumeration of
a countable base \Deltag with b if the set f! m;n is r.e., where
is the standard pairing function i.e. the isomorphism (x; y) 7! (x+y)(x+y+1)
This means that there is a master program which generates all pairs of basis elements (b
We say x 2 D is computable if the set fnjb n - xg is r.e. This is equivalent to say that
there is a recursive function g such that (b g(n) ) n-0 is an increasing chain in D with
F
We say that a continuous effectively given !-continuous domains D (with basis
computable if the set f! m;n ? jb m - f(an )g is
r.e. This is equivalent to say that f maps computable elements to computable elements in an
effective way. Every computable function can be shown to be a continuous function [30, Theorem
3.6.16]. It can be shown [11] that these notions of computability for the domain IR of intervals
of R induce the same class of computable real numbers and computable real functions as in the
classical theory [21] described in Section 2.
Acknowledgements
The first author has been supported by EPSRC and would like to thank the hospitality of the
Institute for Studies in Theoretical Physics and Mathematics in Tehran where part of this work
was done.
--R
Domain theory.
Domains and Lambda-Calculi
Toward a topology for computational geometry.
Computability on subsets of Euclidean space I: Closed and compact subsets.
Computing exact geometric predicates using modular arithmetic with single precision.
An Introduction to Recursive Function Theory.
Robustness of numerical methods in geometric computation when problem data is uncertain.
Domains for computation in mathematics
A new representation for exact real numbers.
A domain theoretic approach to computability on the real line.
Computable functionals of finite types.
Epsilon Geometry
Towards Robust Interval Solid Modeling of Curved Objects.
Robust interval algorithm for curve intersections.
Boundary Representation Modelling with local Tolerances.
Repr'esentation b.
Toward a data type for Solid Modeling based on Domain Theory.
Algorithmic tolerances and semantics in data exchange.
Efficient on-line computation of real functions using exact floating point
Computability in Analysis and Physics.
Computational Geometry: an introduction.
Mathematical Foundations of Constructive Solid Geometry
Representation for Rigid Solids
Boolean Operations in Solid Modeling: Boundary Evaluation and Merging Algorithms.
Outline of a mathematical theory of computation.
Using tolerances to guarantee valid polyhedral modeling results.
Mathematical Theory of Domains
--TR
Computational geometry: an introduction
Computability
Epsilon geometry: building robust algorithms from imprecise computations
Using tolerances to guarantee valid polyhedral modeling results
Dynamical systems, measures, and fractals via domain theory
Boundary representation modelling with local tolerances
Effective algebras
Domain theory
Towards robust interval solid modeling of curved objects
Algorithmic tolerances and semantics in data exchange
A domain-theoretic approach to computability on the real line
Foundation of a computable solid modeling
Domains and lambda-calculi
Computability on subsets of Euclidean space I
Computable banach spaces via domain theory
Representations for Rigid Solids: Theory, Methods, and Systems
Type Theory via Exact Categories
On The Measure Of Two-Dimensional Regions With Polynomial-Time Computable Boundaries
--CTR
Martin Ziegler, Effectively open real functions, Journal of Complexity, v.22 n.6, p.827-849, December, 2006
Abbas Edalat , Andr Lieutier, Domain theory and differential calculus (functions of one variable), Mathematical Structures in Computer Science, v.14 n.6, p.771-802, December 2004 | solid modelling;robustness;domain theory;turing computability;model of computation |
610819 | Real number computation through gray code embedding. | We propose an embedding G of the unit open interval to the set {0, 1},1 of infinite sequences of {0, 1} with at most one undefined element. This embedding is based on Gray code and it is a topological embedding with a natural topology on {0, 1},1. We also define a machine called an indeterministic multihead Type 2 machine which input/output sequences in {0, 1},1, and show that the computability notion induced on real functions through the embedding G is equivalent to the one induced by the signed digit representation and Type 2 machines. We also show that basic algorithms can be expressed naturally with respect to this embedding. | Introduction
One of the ways of dening computability of a real function is by representing a real
number x as an innite sequence called a name of x, and dening the computability
of a function by the existence of a machine, called a Type-2 machine, which inputs
and outputs the names one-way from left to right. This notion of computability dates
back to Turing[Tur36], and is the basis of eective analysis [Wei85,Wei00].
This notion of computability depends on the choice of representation we use, and
signed digit representation and equivalent ones such as the Cauchy representation
and the shrinking interval representation are most commonly used; they have the
property that every arbitrarily small rational interval including x can be obtained
from a nite prex of a name of x, and therefore induces computability notion that
a function f is computable if there is a machine which can output arbitrary good
approximation information of f(x) as a rational interval when arbitrary good approximation
information of x as a rational interval is given. The naturality of this
computability notion is also justied by the fact that it coincides with those de-
Preprint submitted to Elsevier Science 15 June 2000
ned through many other approaches such as Grzegorczyk's ([Grz57]), Pour-El and
Richards ([PER89]), and domain theoretic approaches([ES98], [Gia99]).
One of the properties of these representations is that they are not injective [Wei00].
More precisely, uncountably many real numbers have innitely many names with
respect to representations equivalent to the signed digit representation [BH00]. This
kind of redundancy is considered essential in many approaches to exact real arithmetic
[BCRO86,EP97,Gia96,Gia97,Vui90].
Thus, computability of a real function is dened in two steps: rst the computability
of functions over innite sequences is dened using Type-2 machines, and then it is
connected with the computability of real functions by representations. The redundancy
of representations means that we cannot dene the computability of a real
function more directly by considering an embedding of real numbers into the set of
innite sequences on which a Type-2 machine operates. In this paper, we consider
such a direct denition by extending the notion of innite sequences and modifying
the notion of computation on innite sequences.
Our embedding, called the Gray code embedding, is based on the Gray code expan-
sion, which is another binary expansion of real numbers. The target of this embedding
is the set f0; 1g !
?;1 of innite sequences of f0; 1g in which at most one ?, which means
undenedness, is allowed. We dene the embbeding G of the unit open interval I,
and then explain how it can be extended to the whole real line in the nal section.
?;1 has a natural topological structure as a subspace of f0; . We show
that G is a topological embedding from I to the space f0; 1g !
?;1 .
Because of the existence of ?, a machine cannot have sequential access to inputs
and outputs. However, because ? appears only at most once, we can deal with it by
putting two heads on a tape and by allowing indeterministic behavior to a machine.
We call such a machine Indeterministic Multihead Type 2 machine (IM2-machine for
short). Here, indeterministic computation means that there are many computational
paths which will produce valid results [She75,Bra98]. Thus, we dene computation
over
using IM2-machines, and consider the induced computational notion on
I through the embedding G. We show that this computational notion is equivalent
to the one induced by the signed digit representation and Type-2 machines.
We also show how basic algorithms like addition can be expressed with this represen-
tation. One remarkable thing about this representation is that it has three recursive
structures though it is characterized by two recursive equations. This fact is used in
composing basic recursive algorithms.
We introduce Gray code embedding in Section 2 and an IM2 machine in Section 3.
Then, we dene the Gray code computability of real functions in Section 4, and show
that it is equivalent to the computability induced by the signed digit representation
and Type 2 machines in Section 5. In Section 6, we study topological structure.
number Binary code Gray code
9 1001 1101
Fig. 1. Binary code and Gray code of integers
In Section 7 and 8, we consider basic algorithms with respect to this embedding.
We will discuss how this embedding can be extended to R, give some experimental
implementations, and give conclusion in Section 9.
Notation: Let be an alphabet which does not include ?. We write for the set of
nite sequences of , ! for the set of innite sequences of , and !
for the set of innite sequences of in which at most n instances of the undenedness
character ? are allowed to exist. We
from X to Y , and multi-valued function from X to Y , that
is, F is a subset of X Y considered as a partial function from X to the power set
of Y . We call a number of the form m 2 n for integers m and n a dyadic number.
Gray Code Embedding
Gray code is another binary encoding of natural numbers. Figure 1 shows the usual
binary code and the Gray code of integers from 0 to 15. In this way, n-bit Gray code
is composed by putting the n-th bit on and reversing the order of the coding up to (n-
1)-bits, instead of repeating the coding up to (n-1)-bits as we do in the usual binary
code. The importance of this code lies in the fact that only one bit diers between
the encoding of a number and that of its successor. This code is used in many areas
of computer science such as image compression [ASD90] and nding minimal digital
circuits [Dew93].
The conversion between these two encodings is easy. Gray code is obtained from the
usual binary code by taking the bitwise xor of the sequence and its one-bit shift.
Therefore, the function to convert from binary code to Gray code is written using
the notation of a functional language Haskell [HJ92] as follows:
This conv function has type [Int] -> [Int], where [Int] is the Haskell type of
(possibly innite) list of integers. a:b means the list composed of a as the head and
b as the tail, xor is the \exclusive or" dened as
xor (0,
xor (0,
xor (1,
xor (1,
and zip is a function taking two lists (of length l and m) and returning a list of pairs
(of length min(l; m)). This conversion is injective and the inverse is written as
with [] the empty list.
We will extend this coding to real numbers. Since the function conv is applicable to
innite lists, we can obtain the Gray code expansion of a real number x by applying
conv to the binary expansion of x.
The Gray code expansion of real numbers in the unit interval I = (0; 1) is visualized
in
Figure
2. Here, a horizontal line means that the corresponding bit has value 1
on the line and value 0 otherwise. This gure has a ne fractal structure and shows
symmetricity of bits greater than n at every dyadic number m 2 n .
bit3
bit6Fig. 2. Gray code of real numbers
In the usual binary expansion, we have two expansions for dyadic numbers. For
example, can be expressed as 0:110000::: and also as 0:101111:::. This is also the
case for the Gray code expansion. For example, by applying conv to these sequences,
we have the two sequences 0:101000::: and 0:111000::: of However, one can nd
that the two sequences dier only at one bit (in this case, the 2nd). This means that
the information that this number is only by the remaining bits and the
2nd bit does not contribute to this fact. Therefore, it would be natural to introduce
the character ? denoting undenedness and consider the sequence 0:1?1000::: as the
unique representation of Note that the sequence after the bit where they dier
is always 1000:::. Thus, we dene Gray code embedding of I as a modication of
the Gray code expansion in that a dyadic number is represented as s?1000::: with
Denition 1 The Gray code embedding of the unit open interval I is an injective
function G from I to !
?;1 which maps x to an innite sequence a 0 a
as follows: a for an odd number m,
a if the same holds for an even number m, and a
for some integer m. We call G(x) the modied Gray code expansion of x, or simply
Gray code of x.
When or 0, according as x is bigger than, equal to, or less
. The tail function which maps x to G 1 (a 1 a denotes the so-called tent
It is in contrast to the binary expansion in that the tail function of the binary
expansion denotes the function
Note that Gray code expansion coincides with the itinerary by the tent map which
is essential for symbolic dynamical systems [HY84].
3 Indeterministic Multihead Type 2 Machine
Consider calculating a real number x (0 < x < 1) as the limit of approximations and
output the result as the modied Gray code expansion. More precisely, we consider a
calculation which produces shrinking intervals (r successively so
that lim n!1 s
When we know that x < n), we can write 0 as the rst digit.
And when we know that n), we can write 1. However,
when neither will happen and we cannot ever write the rst digit. Even so,
when we know that we can skip the rst and write 1 as the second digit,
and when we know that as the third digit. Thus, when
, we can continue producing the digits skipping the rst one and we can write
the sequence from the second digit. In order to produce the Gray code of
x as the result, we need to ll the rst cell with ?, which is impossible because we
cannot obtain the information in a nite time. To solve this, we dene ? as
the \blank character" of the output tape and consider that the output tape is lled
with ? at the beginning. Thus, when a cell is skipped and is not lled eternally, it is
left as ?.
Suppose that we know we have written the second digit as 1 skipping
the rst one. As the next output, we have two possibilities: to write the third digit
as 0 because we know that or to write the rst digit because we obtain
the information x < Therefore, when we consider a machine with Gray
code output, the output tape is not written one-way from left to right. To present
this behavior in a simple way, we consider two one-way heads H 1 (O) and H 2 (O) on an
output tape O which move automatically after an output. At the beginning, H 1 (O)
and H 2 (O) are located above the rst and the second cell, respectively. After an output
from H 2 (O), H 2 (O) is moved to the next cell, and after an output from H 1 (O), H 1 (O)
is moved to the position of H 2 (O) and H 2 (O) is moved to the next cell. Thus, in order
to ll the output tape as
Here, H(j) (H is H 1 (O) or H 2 (O) and means to output j from H . With this
head movement rule, each cell is lled at most once and a cell is not lled eternally
only when H 1 (O) is located on that cell and output is made solely from H 2 (O).
are on the s-th and t-th cell of an output tape, the i-th cells
(i < s; s < i < t) are already output and no longer accessible. Therefore, H 1 (O) and
are always located at the rst and the second unlled cells and the machine
treats the tape as if it were [O[s];
Next, we consider how to input a modied Gray code expansion of a real number.
We dene our input mechanism so that nite input contains only approximation
information. Therefore, our machine should not recognize that the cell under the
head is ?, because the character ? with its preceding prex species the number
exactly. This requirement is also supported by the way an input tape is lled when
it is produced as an output of another machine; the character ? may be overwritten
by 0 or 1 in the future and it is impossible to recognize that a particular cell is left
eternally as ?. Therefore, our machine needs to have something other than the usual
sequential access.
To solve this, we consider multiple heads and consider that the machine waits for
multiple cells to be lled. Since at most one cell is left unlled, two heads are su-cient
for our purpose. Therefore, we consider two on an input tape
I, which move in the same way as output heads when they input characters. Note
that the character ? cannot be recognized by our machine, unlike the blank character
used by a Turing machine.
Thus, we dene a machine which has two heads on each input/output tape. Though
we have explaind this idea based on the modied Gray code expansion, this machine
can input/output sequences in !
?;1 generally. In order to give the same computational
power as a Turing machine, we consider a state machine controlled by a set of computational
rules, which has some ordinary work tapes in addition to the input/output
tapes.
In order that the machine can continue working even when the cell under H 1 (I) or
for an input tape I is ?, we need, at each time, a rule applicable only reading
from H 1 (I) or H 2 (I). Therefore, the condition part of each rule should not include
input from both H 1 (I) and H 2 (I). This also means that, if both head positions of an
input tape are lled, we may have more than one applicable rules. Since a machine
may execute both rules, both computational paths should produce valid results.
To summarize, we have the following denition.
Denition 2 Let be the input/output alphabet. Let be the work-tape alphabet
which includes a blank character B. An indeterministic multihead Type 2 machine
(IM2-machine in short) with k inputs is composed of the following:
tapes named I 1 ; I one output tape named O. Each tape T
has two heads
(ii) several work tapes with one head,
(iii) a nite set Q of states with one initial state q 0 2 Q,
(iv) computational rules of the following form:
Here, q and q 0 are states in Q, i j are heads of dierent input tapes, o is a head of
the output tape, w
, and w 00
are heads of work tapes, c j (j
c are characters from , d j and d 0
are characters from , and M j (j
are '+' or ' '. Each part of the rule is optional; there may be a rule without o(c),
for example. The meaning of this rule is that if the state is q and the characters
under the heads i are c j and d e , respectively,
then change the state to q 0 , write the characters c and d 0
the heads
, respectively, move the heads w 00
or backward depending on whether M move the heads of
input/output tapes as follows. For each when it is a head
moved to the position of H 2 (T ) and H 2 (T ) is moved
to the next cell, and when it is H 2 (T ), the position of H 1 (T ) is left unchanged
moved to the next cell.
The machine starts with the output tape lled with ?, work tapes lled with B, the
state set to q 0 , the heads of work tapes located on the rst cell, and the heads H 1 (T )
of an input/output tape T are located above the rst and the second cell,
respectively. At each step, the machine chooses one applicable rule and applies it.
When more than one rules is applicable, only one is selected in a nondeterministic
way.
Note 1. We can dene an indeterministic multihead Type 2 machine more generally
in that each input/output tape may have
input/output sequences in !
. We dene the head movements after
an input/output operation as follows. If input/output is made from H l (T ) (l n)
are moved to the position of H j+1 (T ) and H n+1 (T ) is moved
to the next cell. If input/output is made from H n+1 (T ) then H n+1 (T ) is moved to
the next cell. Note that when
?;0 is nothing but ! and a tape has only one
head which moves to the next cell after an input/output.
Note 2. Here, we acted as if the full contents of the input tapes were given at the
beginning. However, an input is usually generated as an output of another machine,
and given incrementally. In this case, the machine behaves like this: it repeats executing
an applicable rule until no rule is applicable, and waits for input tapes to be
lled so that one of the rules become applicable, and repeats this process indenitely.
Note 3. A machine can have dierent input/output types on the tapes. The in-
put/output types we consider are
?;n (n 0) and , where we may write ! for
?;0 . We extend an IM2-machine with a sequence (Y indicating that it
has k input tapes with type Y one output tape of type Y 0 . When
Y i is !
?;n , the corresponding tape has the properties written in Note 1. When Y i
is , the corresponding tape has the alphabet [ fBg and it has one head which
moves to the next cell when it reads/writes a character. In this case, the blank cells
are initialized with B. In addition, when Y 0 is , we consider that the machine has
a halting state at which the machine stops execution.
Gray Code Computability of Real Functions
As we have seen, an IM2-machine has a nondeterministic behavior and thus it has
many possible outputs to the same input. Therefore, we consider that an IM2-machine
computes a multi-valued function. Note that multi-valued functions appear naturally
when we consider computation over real numbers [Bra98].
Denition 3 An IM2-machine M with k inputs realizes a multi-valued function
?;1 if all the computational paths M have with the input tapes lled
with (p outputs, and the set of outputs forms a
subset of F (p We say that F is IM2-computable when it is realized by some
IM2-machine.
This denition can be generalized to a multi-valued function
for the case Y i is or !
Note that our nondeterministic computation is dierent from nondeterminism used,
for example, in a non-deterministic Turing machine; a non-deterministic Turing machine
accepts a word when one of the computational paths accepts the word, whereas
all the computational paths should produce valid results in our machine. To distin-
guish, we use the word indeterminism instead of nondeterminism following [She75]
and [Bra98].
Denition 4 A multi-valued function F : I I is realized by M if G(F ) is
realized by M . We say that F is Gray-code-computable if G - F - G 1 is IM2-
computable.
Denition 5 A partial function f : I k ! I is Gray-code-computable if it is
computable as a multi-valued function.
5 Equivalence to the Computability induced by the Signed Digit Repre-
sentation
Now, we prove that Gray code computability is equivalent to the computability induced
by a Type-2 machine and the (restricted) signed digit representation.
Denition 6 A Type-2 machine is an IM2-machine whose type includes only !
and , and whose computational rule is deterministic.
This denition is equivalent to the one in [Wei00].
Proposition 1 Let Y i be ! or There is an IM2-machine which
there is a deterministic IM2-machine which
computes F .
Proof: The if part is immediate. For the only if part, we need to construct a deterministic
machine from an indeterministic machine for the case that the input/output
tapes have only one head. Suppose that M is an IM2-machine which realizes F .
Since the set of rules of M is nite, we give a numbering to them. We can determine
whether or not each rule is applicable because the input tapes do not have the character
?. Therefore, we can modify M to construct a deterministic machine M 0 which
chooses the rst applicable rule with respect to the numbering. The result of M 0 to
uniquely determined and is in F (x).
Definition 7. A representation of a set X is a surjective partial function from Σ^ω to X.
If δ is a representation of X and δ(p) = x, we call p a δ-name of x.
Definition 8. Let δ : ⊆ Σ^ω → I and δ′ : ⊆ Σ^ω → I be representations. We say that δ is reducible to δ′ (δ ≤ δ′) when there is a computable function f such that δ(p) = δ′(f(p)) for every p ∈ dom(δ). We say that δ and δ′ are equivalent (δ ≡ δ′) when δ ≤ δ′ and δ′ ≤ δ.
Definition 9. 1) The signed digit representation ρ_sd of I uses the alphabet {1, 0, 1̄}, with 1̄ denoting −1. It is a partial function ρ_sd : ⊆ {1, 0, 1̄}^ω → I defined on the sequences a_1 a_2 a_3 ... with a_1 = 1 such that a_j ≠ 1 for some j ≥ 2 and a_l ≠ 1̄ for some l ≥ 2, and it maps a_1 a_2 a_3 ... to Σ_{i≥1} a_i 2^{−i}.
2) The restricted signed digit representation ρ_sdr of I is the restriction of ρ_sd to the smaller domain of those sequences such that, beyond every position, there are j and l with a_j ≠ 1 and a_l ≠ 1̄; the condition is imposed on the sequence without the first character a_1 (= 1).
Under ρ_sd, 3/8 has infinitely many names. The domain of ρ_sdr means that we do not use a name whose tail eventually consists only of 1's or only of 1̄'s; therefore 3/8 has only two ρ_sdr-names.
Proposition 2. ρ_sdr ≡ ρ_sd.
Proof: It is an easy exercise to give an algorithm that converts a ρ_sd-name to a ρ_sdr-name.
Definition 10. Let δ : ⊆ Σ^ω → I be a representation of I. A multi-valued function F : I → I is (δ, δ)-computable if there is a Type-2 machine M of type (Σ^ω; Σ^ω) such that if δ(p) ∈ dom(F), then M with input p produces an infinite sequence q such that δ(q) ∈ F(δ(p)).
A partial function is (δ, δ)-computable if it is computable as a multi-valued function. This definition can easily be extended to a function with several arguments.
Equivalent representations induce the same computability notion on I. As we explained in the introduction, the equivalence class to which the signed digit representation belongs induces a suitable notion of computability on real numbers.
Proposition 3. Let M be an IM2-machine which realizes a multi-valued function F, and let N_1, ..., N_k be IM2-machines which realize multi-valued functions G_1, ..., G_k. Suppose that Im(⟨G_1, ..., G_k⟩) ⊆ dom(F). Then, there is an IM2-machine M ∘ ⟨N_1, ..., N_k⟩ which realizes the multi-valued function F ∘ ⟨G_1, ..., G_k⟩. Here, the composition of multi-valued functions F and G is defined to be (F ∘ G)(x) = {y | y ∈ F(z) for some z ∈ G(x)}.
Proof: First, we consider the case k = 1. We write N for N_1. We use the input tapes of N as those of M ∘ N and the output tape of M as that of M ∘ N. We use a work tape T with the alphabet Σ ∪ {B} which connects the parts representing N and M, and work tapes to simulate the head movements of the input tape of M and the output tape of N. It is easy to change the rules of M and N so that M reads from T and N writes on T. We also need to modify the rules so that the machine first looks for an applicable rule coming from M and, if there is no such rule, then looks for a rule coming from N. This is possible because the former rules do not access the input tapes, and therefore the machine can determine whether a particular rule is applicable or not.
When k > 1, we need to copy the input tapes onto work tapes so that they can be shared by the parts representing N_1, ..., N_k. We define that it executes rules coming from N_i until it outputs a character, and then switches to the next part.
As we will show in Section 7, we have the following.
Lemma 4. There is an IM2-machine of type ({1, 0, 1̄}^ω; Σ^ω_{⊥,1}) which converts a ρ_sdr-name of x to G(x) for all x ∈ I.
Lemma 5. There is an IM2-machine of type ({0, 1}^ω_{⊥,1}; {1, 0, 1̄}^ω) which converts G(x) to a ρ_sdr-name of x for all x ∈ I.
Now, we prove the equivalences.
Theorem 6. A multi-valued function F : I^k → I is Gray-code-computable iff it is ((ρ_sdr)^k, ρ_sdr)-computable.
Proof: Suppose that M is an IM2-machine which Gray code computes F. By composing it with the IM2-machines in Lemma 4 and Lemma 5, we can form, by Proposition 3, an IM2-machine of type (({1, 0, 1̄}^ω)^k; {1, 0, 1̄}^ω) which outputs a ρ_sdr-name of a member of F(x_1, ..., x_k) when ρ_sdr-names of the x_i are given. Therefore, we have a desired Type-2 machine by Proposition 1.
On the other hand, suppose that there is a Type-2 machine which ((ρ_sdr)^k, ρ_sdr)-computes F. Since a Type-2 machine is a special case of an IM2-machine, again, by composing the IM2-machines in Lemma 4 and Lemma 5, we can form an IM2-machine which Gray code computes F.
6 Topological Properties
Let Σ = {0, 1}. In this section, we show that G from I to Σ^ω_{⊥,1} is a homeomorphism onto its image, and therefore is a topological embedding.
Since the character ⊥ may be overwritten by 0 or 1, it is not appropriate to consider the Cantor topology on Σ^ω_{⊥,1}. Instead, we define the order structure ⊥ < 0 and ⊥ < 1 on our alphabet and consider the Scott topology on {0, 1, ⊥}. We consider its product topology on {0, 1, ⊥}^ω and its subspace topology on Σ^ω_{⊥,1}. Let ↑p denote the set {x | p ≤ x}. Then, the set of all ↑(d⊥^ω) for finite sequences d over {0, 1, ⊥} is a base of {0, 1, ⊥}^ω. From this, we have a base {↑(d⊥^ω) ∩ Σ^ω_{⊥,1} | d ∈ P} of Σ^ω_{⊥,1}, where P is the set of finite sequences over {0, 1, ⊥} containing at most one ⊥.
Note that P corresponds to the states of output tapes of IM2-machines after a finite time of execution, and ↑(d⊥^ω) ∩ Σ^ω_{⊥,1} is the set of possible outputs of an IM2-machine after it outputs d ∈ P. Thus, if q ∈ O for an open set O ⊆ Σ^ω_{⊥,1} and for an output q of an IM2-machine, then this fact is available after a finite time of execution of the machine. In this sense, the observation that open sets are finitely observable properties in [Smy92] holds for our IM2-machines. We can prove the following fundamental theorem in just the same way as we do for Type-2 computability and the Cantor topology on {0, 1}^ω.
Theorem 7. An IM2-computable function f : Σ^ω_{⊥,1} → Σ^ω_{⊥,1} is continuous.
Now, Im(G) is a subset of Σ^ω_{⊥,1} consisting of sequences in {0, 1}^ω together with sequences containing one ⊥. We also consider the subspace topology on Im(G), which has the base {↑(d⊥^ω) ∩ Im(G) | d ∈ P}. We consider the inverse image of this base by G. When d ranges over {0, 1}* and over the elements of P with one ⊥, these inverse images range over open intervals with dyadic endpoints of the forms (m/2^i, (m+1)/2^i) and ((m−1)/2^i, (m+1)/2^i), respectively, for m and i integers. Since these open intervals form a base of the unit open interval I, I and Im(G) become homeomorphic through the function G. Thus, we have the following:
Theorem 8. The Gray code embedding G is a topological embedding of I into Σ^ω_{⊥,1}.
As a direct consequence, we have the following:
Corollary 9. A Gray-code-computable function f : I^k → I is continuous.
As an application of our representation, we give a simple proof of Theorem 4.2.6 of [Wei00], which says that there is no effective enumeration of computable real numbers. Here, we define (x_i) (i ∈ ω) to be a computable sequence if there is an IM2-machine of type (Σ*; Σ^ω_{⊥,1}) which outputs G(x_i) when a binary name of i is given.
Theorem 10. If (x_i) (i ∈ ω) is a computable sequence, then a computable number x with x ≠ x_i (i ∈ ω) exists.
Proof: Let s_i = G(x_i) and let M be an IM2-machine which computes s_i from the binary name of i. By Proposition 1, we can assume that M is deterministic. This means that, by selecting one machine, the order in which the output tape is filled is fixed. Since s_i contains at most one ⊥, either s_i[2i] or s_i[2i + 1] is written in a finite time. When s_i[2i] is written first, we put t[2i] = not(s_i[2i]); when s_i[2i + 1] is written first, we put t[2i + 1] = not(s_i[2i + 1]). Here, not is defined as not(0) = 1 and not(1) = 0. Then, the resulting sequence t is computable and is in Im(G), but is not equal to s_i for any i ∈ ω. Therefore, G^{-1}(t) is not equal to x_i because of the injectivity of the representation.
7 Conversion with the signed digit representation
As an example of an IM2-machine, we consider conversions between the Gray code and the restricted signed digit representation. Recall that a ρ_sdr-name of x ∈ I is given as a sequence 1 : xs with xs an infinite sequence over {0, 1, 1̄}. In this section, we consider xs as the ρ_sdr-name of x.
Since the intervals represented by finite prefixes of both representations coincide, the conversions become simple automaton-like algorithms which do not use work tapes.
Example 1. Conversion from the signed digit representation to Gray code. It has the type ({1, 0, 1̄}^ω; Σ^ω_{⊥,1}). We simply write the head of the input tape as I. It has four states (i, j) (i, j ∈ {0, 1}), with (0, 0) the initial state, and 12 rules.
In order to express this more simply, we use the notation of the functional language Haskell as follows:

  stog0 ((-1):xs, i, j) = i       : stog0 (xs, j, 0)
  stog0 (  1 :xs, i, j) = 1 - i   : stog0 (xs, 1 - j, 0)
  stog0 (  0 :xs, i, j) = c : (1 - j) : ds   where c : ds = stog0 (xs, i, 1)
Here, where produces bindings of c and ds to the head and the tail of stog0(xs, i, 1), respectively. It is clear that the behavior of an IM2-machine can be expressed using this notation with the state and the contents of the work tapes before and after the head positions passed as additional arguments. In the program stog0, the states are used to invert the output: the result of stog0(xs, 1, 0) is that of stog0(xs, 0, 0) with the first character inverted, and the result of stog0(xs, 0, 1) is that of stog0(xs, 0, 0) with the second character inverted.

Fig. 3. The behavior of the stog IM2-machine when it reads 0.

Therefore, we can simplify
the above program as follows:

  stog (  1 :xs) = 1 : nh (stog xs)
  stog ((-1):xs) = 0 : stog xs
  stog (  0 :xs) = c : 1 : nh ds   where c : ds = stog xs

Here, nh is the function to invert the first element of an infinite list. That is,

  not 0 = 1
  not 1 = 0
  nh (s:ds) = not s : ds
The behavior of stog with input 0:xs is given in Fig. 3. Here, a small circle on an output head means to invert the output from that head before filling the tape.
The program stog is a correct Haskell program and works on a Haskell system. However, if we evaluate stog([0,0..]), there will be no output, because it tries to calculate the first digit, which is ⊥. Of course, tail(stog([0,0..])) produces the answer [1,0,0,0,...].
Next, we consider the inverse conversion, which is an example of Gray code input.
Example 2. Conversion from Gray code to the signed digit representation. Now, we only show a Haskell program. It has the type ({0, 1}^ω_{⊥,1}; {1, 0, 1̄}^ω).
In this case, indeterminism occurs and yields many different valid results: the results are actually signed digit representations of the same number. This is also a correct Haskell program. However, it fails to calculate, for example, gtos(stog([0,0..])), because the program gtos, from the first two rules, tries to pattern match the head of the argument and starts a non-terminating calculation. Therefore, it fails to use the third rule. This is a limitation of the use of an existing functional language. We will discuss how to implement an IM2-machine as a program in Section 9.
These programs are based on the recursive structure of the Gray code, and it is not as difficult to write such a program as one might imagine. One can see from Figure 2 the following three recursive equations:

  G(x) = 0 : G(2x)                                  (0 < x < 1/2)
  G(x) = 1 : nh(G(2x − 1))  (= 1 : G(2(1 − x)))     (1/2 < x < 1)         (1)
  G(x) = c : 1 : nh(ds),  where c : ds = G(2x − 1/2)   (1/4 < x < 3/4)

Here, nh inverts the first element of a sequence, as before. The first equation corresponds to the fact that on the interval with the first bit 0, i.e. the left half of Figure 2, the remaining bits form a 1/2 reduction of Figure 2. The second equation corresponds to the fact that on the interval with the first bit 1, i.e. the right half of Figure 2, the remaining bits with the first bit inverted form a 1/2 reduction of Figure 2, and if we use the equation with the parenthesis, we can also state that the remaining bits form the reversal of Figure 2. These two equations characterize Figure 2. One interesting fact about this representation is that we also have the third equation. It says that on the interval with the second bit 1, i.e. the middle half of Figure 2, the remaining bits with the second bit inverted form a 1/2 reduction of Figure 2.
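For instance, since the ρ_sdr-name [0,0,..] (taken as a tail) denotes 1/2, and the discussion above gives G(1/2) = ⊥ 1 0 0 0 ..., the equations determine the Gray code of 3/8 as follows:

  G(3/8) = 0 : G(3/4)               (first equation, since 3/8 < 1/2)
         = 0 : 1 : G(2(1 − 3/4))    (second equation, parenthesized form)
         = 0 : 1 : G(1/2)
         = 0 1 ⊥ 1 0 0 0 ...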
From Equations (1), we have the following recursive scheme: g_1 is a function to calculate f(x) from f(2x) when 0 < x < 1/2, g_2 is a function to calculate f(x) from f(2x − 1) when 1/2 < x < 1, and g_3 is a function to calculate f(x) from f(2x − 1/2) when 1/4 < x < 3/4; gtos is derived immediately from this scheme. On the other hand, Equations (1) can be rewritten so that the Gray code of x is produced from the Gray code of the reduced argument; stog uses this scheme to calculate the Gray code output. These recursive schemes are used to derive the algorithm for addition in the next section.
8 Some simple algorithms in Gray code
We write some algorithms with respect to Gray code.
Example 3. Multiplication and division by 2. They are simple shifting operations (for multiplication by 2, suppose that the input is 0 < x < 1/2).
Example 4. The complement x ↦ 1 − x. It is a simple operation to invert the first digit, i.e., the nh function in Example 1. Note that with the usual binary representation and the signed digit representation, we need to invert all the digits to calculate 1 − x, and thus this operation needs to be defined recursively. We can also see that the complement operation x ↦ k/2^n − x with respect to a dyadic number k/2^{n+1} can be implemented as inverting one digit.
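Continuing the illustration of Section 7: since 1 − 3/8 = 5/8, we get G(5/8) = nh(G(3/8)) = 1 1 ⊥ 1 0 0 0 ..., i.e. only the first digit of G(3/8) changes.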
Example 5. Shifting x ↦ x + 1/2. Addition of a dyadic number is nothing but two consecutive complement operations with respect to dyadic numbers. In the case of adding 1/2, the first axis is 1/4 and the second axis is 1/2. Therefore, the function AddOneOfTwo obtained by composing the two corresponding digit inversions operates as x ↦ x + 1/2 (for 0 < x < 1/2).
Example 6. Addition.
We consider addition x + y with 0 < x, y < 1. Since the result is in (0, 2), we consider the average function pl(x, y) = (x + y)/2 instead:
pl (0:as)
pl (1:as)
pl (0:as)
pl (1:as)
pl (a:1:as)
pl (a:1:0:as)
pl (a:1:0:as)
pl (a:1:0:as)
pl (a:1:0:as)
pl (0:0:as)
pl (1:0:as)
pl (0:a:1:as)
pl (1:a:1:as)
To calculate the sum with respect to the signed digit representation, we need to look
ahead two characters. It is also the case with the Gray code representation. Since it
does not have redundancy, we can reduce the number of rules from 25 to 13 compared
with the program written in the same way with the signed digit representation.
9 Extension to the Whole Real Line, Implementation, and Conclusion
We have defined an embedding G of I into {0, 1}^ω_{⊥,1} based on Gray code, and introduced an indeterministic multihead Type-2 machine as a machine which can input/output sequences in {0, 1}^ω_{⊥,1}. Since G is a topological embedding of I into {0, 1}^ω_{⊥,1}, our IM2-machines are operating on a topological space which includes I as a subspace. We hope that this computational model will propose a new perspective on real number computation.
In this paper, we only treated the unit open interval I = (0, 1). We discuss here how this embedding can be extended to the whole real line R. First, by using the first digit as the sign bit: 1 if positive, 0 if negative, and ⊥ if the number is zero, we can extend it to the interval (−1, 1). We can also extend it to (−2^k, 2^k) by assuming that there is a decimal point after the k-th digit. However, there seems to be no direct extension to all of the real numbers without losing injectivity and without losing the simplicity of the algorithms in Sections 7 and 8.
One possibility is to use some computable embedding f of R into (−1, 1), such as the function f(x) = arctan(x)/(π/2). It is known that this function is computable, and therefore, we have IM2-machines which convert between the signed digit representation of x ∈ R and the Gray code of f(x) in (−1, 1). Therefore, we can define our new representation as G′(x) = G(f(x)) (x ∈ R). It is clear that this representation embeds R into Σ^ω_{⊥,1}, and all the properties we have shown in Sections 4 to 6 hold if we replace I with R and G with G′. In particular, the computability notion on R induced by G′ and IM2-machines is equivalent to the one induced by the signed digit representation and Type-2 machines. However, we will lose the symmetry of the Gray code expansion and the simplicity of the algorithms in Sections 7 and 8.
Another possibility is to introduce the character "." indicating the decimal point into the sequence. In order to allow an expression starting with ⊥ (i.e. integers), we need to consider an expression starting with 0, because it should be allowed to fill the ⊥ with 0 or 1 afterwards. Thus, we lose the injectivity of the expansion, because a number may also have a name with a preceding zero. We have the same kind of difficulty if we adopt a floating-point-like expression: a pair of a number indicating the decimal point and a Gray code on (−1, 1).
Although this expansion becomes redundant, the redundancy introduced here by preceding zeros is limited in that we only need at most one zero at the beginning of each representation, and thus each number has at most two names. As is shown in [BH00], we need infinitely many names for infinitely many real numbers if we use representations equivalent to the signed binary representation. Therefore, the redundancy we need for this extension is essentially smaller than that of the signed binary representation.
Finally, we show some experimental implementations we currently have. As we have noted, though we can express the behavior of an IM2-machine using the syntax of the functional language Haskell, the program comes to have different semantics under the usual lazy evaluation strategy. We have implemented this Gray code input/output mechanism using logic programming languages. We have written gtos, stog, and the addition function pl of Section 8 using KL1 [UC90], a concurrent logic programming language based on Guarded Horn Clauses. We have also implemented them using the coroutine facility of SICStus Prolog. We are also interested in extending lazy functional languages so that the programs in Sections 7 and 8 become executable. The details about these implementations are given in [Tsu00].
Acknowledgements
The author thanks Andreas Knobel for many interesting and illuminating discus-
sions. He also thanks Mariko Yasugi, Hiroyasu Kamo, and Izumi Takeuchi for many
discussions.
--R
A data structure based on gray code encoding for graphics and image processing.
Exact real arithmetic: A case study in higher order programming.
Topological properties of real number representations.
Recursive and Computable Operations over Topological Structures.
The New Turing Omnibus.
A new representation for exact real numbers.
Real number computability and domain theory.
A golden ratio notation for the real numbers.
An abstract data type for real numbers.
On the de
Haskell report.
The takagi function and its generalization.
Computability in Analysis and Physics.
Computation over abstract structures: serial and parallel procedures.
Implementation of indeterministic multihead type 2 machines with ghc for real number computations.
On computable real numbers
Design of the kernel language for the parallel inference machine.
Exact real computer arithmetic with continued fractions.
Type 2 recursion theory.
An introduction to computable analysis.
--TR
Exact Real Computer Arithmetic with Continued Fractions
Design of the kernel language for the parallel inference machine
Topology
Real number computability and domain theory
A domain-theoretic approach to computability on the real line
An abstract data type for real numbers
Exact real arithmetic: a case study in higher order programming
Computable analysis
Topological properties of real number representations
A golden ratio notation for the real numbers
--CTR
Hideki Tsuiki, Compact metric spaces as minimal-limit sets in domains of bottomed sequences, Mathematical Structures in Computer Science, v.14 n.6, p.853-878, December 2004 | multihead;gray code;indeterminism;IM2-machines;real number computation |
611399 | New and faster filters for multiple approximate string matching. | We present three new algorithms for on-line multiple string matching allowing errors. These are extensions of previous algorithms that search for a single pattern. The average running time achieved is in all cases linear in the text size for moderate error level, pattern length, and number of patterns. They adapt (with higher costs) to the other cases. However, the algorithms differ in speed and thresholds of usefulness. We theoretically analyze when each algorithm should be used, and show their performance experimentally. The only previous solution for this problem allows only one error. Our algorithms are the first to allow more errors, and are faster than previous work for a moderate number of patterns (e.g. less than 50-100 on English text, depending on the pattern length). | Introduction
Approximate string matching is one of the main problems in classical string algorithms, with
applications to text searching, computational biology, pattern recognition, etc. Given a text T_{1..n} of length n and a pattern P_{1..m} of length m (both sequences over an alphabet Σ of size σ), and a maximal number of errors allowed, k < m, we want to find all text positions where the pattern matches the text with up to k errors. Errors can be substituting, deleting or inserting a character. We use the term "error level" to refer to α = k/m.
In this paper we are interested in the on-line problem (i.e. the text is not known in advance),
where the classical solution for a single pattern is based on dynamic programming and has a running
time of O(mn) [26].
In recent years several algorithms have improved the classical one [22]. Some improve the worst
or average case by using the properties of the dynamic programming matrix [30, 11, 16, 31, 9].
Others filter the text to quickly eliminate uninteresting parts [29, 28, 10, 14, 24], some of them
being "sublinear" on average for moderate ff (i.e. they do not inspect all the text characters).
Yet other approaches use bit-parallelism [3] in a computer word of w bits to reduce the number of
operations [33, 35, 34, 6, 19].
The problem of approximately searching a set of r patterns (i.e. the occurrences of anyone of
them) has been considered only recently. This problem has many applications, for instance
This work has been supported in part by FONDECYT grant 1990627.
- Spelling: many incorrect words can be searched in the dictionary at a time, in order to find their most likely variants. Moreover, we may even search the dictionary of correct words in the "text" of misspelled words, hopefully at much less cost.
- Information retrieval: when synonym or thesaurus expansion is done on a keyword and the text is error-prone, we may want to search all the variants allowing errors.
- Batched queries: if a system receives a number of queries to process, it may improve efficiency by searching all of them in a single pass.
- Single-pattern queries: some algorithms for a single pattern allowing errors (e.g. pattern partitioning [6]) reduce the problem to the search of many subpatterns allowing less errors, and they benefit from multipattern search algorithms.
A trivial solution to the multipattern search problem is to perform r searches. As far as we
know, the only previous attempt to improve the trivial solution is due to Muth & Manber [17], who
use hashing to search many patterns with one error, being efficient even for one thousand patterns.
In this work, we present three new algorithms that are extensions of previous ones to the
case of multiple search. In Section 2 we explain some basic concepts necessary to understand
the algorithms. Then we present the three new techniques. In Section 3 we present "automaton
which extends a bit-parallel simulation of a nondeterministic finite automaton
In Section 4 we present "exact partitioning", that extends a filter based on exact
searching of pattern pieces [7, 6, 24]. In Section 5 we present "counting", based on counting
pattern letters in a text window [14]. In Section 6 we analyze our algorithms and in Section 7 we
compare them experimentally. Finally, in Section 8 we give our conclusions. Some detailed analyses
are left for Appendices A and B.
Although [17] allows searching for many patterns, it is limited to only one error. Ours are the
first algorithms for multipattern matching allowing more than one error. Moreover, even for one
error, we improve [17] when the number of patterns is not very large (say, less than 50-100 on
English text, depending on the pattern length). Our multipattern extensions improve over their
sequential counterparts (i.e. one separate search per pattern using the base algorithm) when the
error level is not very high (about ff 0:4 on English text). The filter based on exact searching
is the fastest for small error levels, while the bit-parallel simulation of the NFA adapts better to
more errors on relatively short patterns.
Previous partial and preliminary versions of this work appeared in [5, 20, 21].
2 Basic Concepts
We review in this section some basic concepts that are used in all the algorithms that follow. In
the paper S i denotes the i-th character of string S (being S 1 the first character), and S i::j stands
for the substring S_i S_{i+1} ... S_j. In particular, S_{i..j} = ε, the empty string, if j < i.
2.1 Filtering Techiques
All the multipattern search algorithms that we consider in this work are based in the concept of
filtering, and therefore it is useful to define it here.
Filtering is based on the fact that it is normally easier to tell that a text position does not
match than to ensure that it matches. Therefore, a filter is a fast algorithm that checks for a
simple necessary (though not sufficient) condition for an approximate match to occur. The text
areas that do not satisfy the necessary condition can be safely discarded, and a more expensive
algorithm has to be run on the text areas that passed the filter.
Since the filters can be much faster than approximate searching algorithms, filtering algorithms
can be very competitive (in fact, they dominate on a large range of parameters). The performance
of filtering algorithms, however, is very sensitive to the error level ff. Most filters work very well
on low error levels and very bad with more errors. This is related with the amount of text that the
filter is able to discard. When evaluating filtering algorithms, it is important not only to consider
time efficiency but also their tolerance to errors.
A term normally used when referring to filters is "sublinearity". It is said that a filter is sublinear
when it does not inspect all the characters of the text (like the Boyer-Moore [8] algorithms for exact
searching, which can be at best O(n=m)).
Throughout this work we make use of the two following lemmas to derive filtering conditions.
Lemma 1: If S matches P with at most k errors, and P = p_1 p_2 ... p_j (a concatenation of sub-patterns), then some substring of S matches at least one of the p_i's, with at most ⌊k/j⌋ errors.
Proof: Otherwise, the best match of each p_i inside S has at least ⌊k/j⌋ + 1 errors. An occurrence of P involves the occurrence of each of the p_i's, and the total number of errors in the occurrence is at least the sum of the errors of the pieces. But here, just summing up the errors of all the pieces we have more than k errors, and therefore a complete match is not possible. Notice that this does not even consider that the matches of the p_i must be in the proper order, be disjoint, and that some deletions in S may be needed to connect them.
In general, one can filter the search for a pattern of length m with k errors by the search of j subpatterns of length m/j with ⌊k/j⌋ errors. Only the text areas surrounding occurrences of pieces must be checked for complete matches.
An important particular case of Lemma 1 arises when one considers j = k + 1, since in this case some pattern piece appears unaltered (zero errors).
Lemma 2: [32] If there are i ≤ j such that ed(T_{i..j}, P) ≤ k, then T_{j−m+1..j} includes at least m − k characters of P.
Proof: Suppose the opposite. If j − i + 1 ≤ m, then we observe that there are less than m − k characters of P in T_{i..j}. Hence, more than k characters must be deleted from P to match the text. If j − i + 1 > m, we observe that there are more than k characters in T_{i..j} that are not in P, and hence we must insert more than k characters in P to match the text. A contradiction in both cases.
Note that in case of repeated characters in the pattern, they must be counted as different
occurrences. For example, if we search "aaaa" with one error in the text, the last four letters of
each occurrence must include at least three a's.
simplification of that in [32]) says essentially that we can design a filter for approximate
searching based on finding enough characters of the pattern in a text window (without
regarding their ordering). For instance, the pattern "survey" cannot appear with one error in the
text window "surger" because there are not five letters of the pattern in the text. However, the
filter cannot discard the possibility that the pattern appears in the text window "yevrus".
2.2 Bit-Parallelism
Bit-parallelism is a technique of common use in string matching [3]. It was first proposed in [2, 4].
The technique consists in taking advantage of the intrinsic parallelism of the bit operations inside
a computer word. By using cleverly this fact, the number of operations that an algorithm performs
can be cut down by a factor of at most w, where w is the number of bits in the computer word.
Since in current architectures w is 32 or 64, the speedup is very significant in practice (and
improves with technological progress). In order to relate the behavior of bit-parallel algorithms
to other works, it is normally assumed that dictated by the RAM model of
computation. We prefer, however, to keep w as an independent value.
Some notation we use for bit-parallel algorithms is in order. We denote as b_ℓ ... b_1 the bits of a mask of length ℓ, which is stored somewhere inside the computer word. We use C-like syntax for operations on the bits of computer words, e.g. "|" is the bitwise-or and "<<" moves the bits to the left and enters zeros from the right, e.g. b_m b_{m−1} ... b_2 b_1 << r = b_{m−r} ... b_2 b_1 0^r. We can also perform arithmetic operations on the bits, such as addition and subtraction, which operate the bits as if they formed a number. For instance, b_ℓ ... b_x 1 0 ... 0 − 1 = b_ℓ ... b_x 0 1 ... 1.
We explain now the first bit-parallel algorithm, since it is the basis of much of which follows
in this work. The algorithm searches a pattern in a text (without errors) by parallelizing the
operation of a non-deterministic finite automaton that looks for the pattern. Figure 1 illustrates
this automaton.
Figure 1: Nondeterministic automaton that searches "aloha" exactly.
This automaton has m+ 1 states, and can be simulated in its non-deterministic form in O(mn)
time. The Shift-Or algorithm achieves O(mn=w) worst-case time (i.e. optimal speedup). Notice
that if we convert the non-deterministic automaton to a deterministic one to have O(n) search
time, we get an improved version of the KMP algorithm [15]. However, KMP is twice as slow for
The algorithm first builds a table B[ ] which for each character c stores a bit mask b_m ... b_1. The mask B[c] has the bit b_i set to zero if and only if P_i = c. The state of the search is kept in a machine word D = d_m ... d_1, where d_i is zero whenever P_{1..i} matches the end of the text read up to now (i.e. the state numbered i in Figure 1 is active). Therefore, a match is reported whenever d_m is zero.
D is set to all ones originally, and for each new text character T_j, D is updated using the formula

    D'  ←  (D << 1)  |  B[T_j]

The formula is correct because the i-th bit is zero if and only if the (i−1)-th bit was zero for the previous text character and the new text character matches the pattern at position i. In other words, T_{j−i+1..j} = P_{1..i} if and only if T_{j−i+1..j−1} = P_{1..i−1} and T_j = P_i. It is possible to relate this formula to the movement that occurs in the non-deterministic automaton for each new text character: each state gets the value of the previous state, but this happens only if the text character matches the corresponding arrow.
For patterns longer than the computer word (i.e. m > w), the algorithm uses ⌈m/we⌉ computer words for the simulation (not all of them are active all the time). The algorithm is O(mn/w) worst case time, and the preprocessing is O(m + σ⌈m/w⌉). On average, the algorithm is O(n) even when m > w, since only the first O(1) states of the automaton have active states on average (and hence only the first O(1) computer words need to be updated on average).
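As a concrete illustration, the following C sketch (ours, not taken from the original papers; it assumes m ≤ 64 so that the automaton fits in one 64-bit word) implements exactly the update just described:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Report all end positions where pat occurs exactly in txt (Shift-Or). */
    void shift_or_search(const char *pat, const char *txt)
    {
        uint64_t B[256], D = ~0ULL;
        size_t m = strlen(pat), n = strlen(txt);

        for (int c = 0; c < 256; c++) B[c] = ~0ULL;      /* 1 = mismatch */
        for (size_t i = 0; i < m; i++)                   /* 0 = match    */
            B[(unsigned char)pat[i]] &= ~(1ULL << i);

        for (size_t j = 0; j < n; j++) {
            D = (D << 1) | B[(unsigned char)txt[j]];     /* the update formula   */
            if ((D & (1ULL << (m - 1))) == 0)            /* bit m is zero: match */
                printf("match ending at %zu\n", j);
        }
    }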
It is easy to extend Shift-Or to handle classes of characters. In this extension, each position
in the pattern matches with a set of characters rather than with a single character. The classical
string matching algorithms are not so easily extended. In Shift-Or, it is enough to set the i-th bit
of B[c] for every c 2 P i (P i is a set now). For instance, to search for "survey" in case-insensitive
form, we just set the first bit of B["s"] and of B["S"] to "match" (zero), and the same with the rest.
Shift-Or can also search for multiple patterns (where the complexity is O(mn=w) if we consider
that m is the total length of all the patterns) by arranging many masks B and D in the same
machine word. Shift-Or was later enhanced [34] to support a larger set of extended patterns and
even regular expressions. Recently, in [25], Shift-Or was combined with a sublinear string matching
algorithm, obtaining the same flexibility and an efficiency competitive against the best classical
algorithms.
Many on-line text algorithms can be seen as implementations of clever automata (classically, in
their deterministic form). Bit-parallelism has since its invention became a general way to simulate
simple non-deterministic automata instead of converting them to deterministic. It has the advantage
of being much simpler, in many cases faster (since it makes better usage of the registers of the
computer word), and easier to extend to handle complex patterns than its classical counterparts.
Its main disadvantage is the limitations it imposes with regard to the size of the computer word.
In many cases its adaptations to cope with longer patterns are not so efficient.
2.3 Bit-parallelism for Approximate Pattern Matching
We present now an application of bit-parallelism to approximate pattern matching, which is especially
relevant for the present work.
Consider the NFA for searching "patt" with at most k = 2 errors shown in Figure 2. Every
row denotes the number of errors seen. The first one 0, the second one 1, and so on. Every column
represents matching the pattern up to a given position. At each iteration, a new text character is
considered and the automaton changes its states. Horizontal arrows represent matching a character
(they can only be followed if the corresponding match occurs). All the others represent errors, as
they move to the next row. Vertical arrows represent inserting a character in the pattern (since they
advance in the text and not in the pattern), solid diagonal arrows represent replacing a character
(since they advance in the text and the pattern), and dashed diagonal arrows represent deleting a
character of the pattern (since, as ε-transitions, they advance in the pattern but not in the text).
The loop at the initial state allows considering any character as a potential starting point of a
match. The automaton accepts a character (as the end of a match) whenever a rightmost state
is active. Initially, the active states at row i (i 2 0::k) are those at the columns from 0 to i, to
represent the deletion of the first i characters of the pattern P 1::m .
Figure 2: An NFA for approximate string matching. We show the active states after reading the text "pait".
An interesting application of bit-parallelism is to simulate this automaton in its nondeterministic
form. A first approach [34] obtained O(kdm=wen) time, by packing each automaton row in a
machine word and extending the Shift-Or algorithm to account for the vertical and diagonal arrows.
Note that even if all the states fit in a single machine word, the k have to be sequentially
updated because of the ffl-transitions. The same happens in the classical dynamic programming
algorithm [26], which can be regarded as a column-wise simulation of this NFA.
In this paper we are interested in a more recent simulation technique [6], where we show that by
packing diagonals of the automaton instead of rows or columns all the new values can be computed
in one step if they fit in a computer word. We give a brief description of the idea.
Because of the ε-transitions, once a state in a diagonal is active, all the subsequent states in that diagonal become active too, so we can define the minimal active row of each diagonal, D_i (diagonals are numbered by the column they start at, e.g. D_1 and D_2 are enclosed in dotted lines in Figure 2). The new values D'_i (i ∈ 1..m−k) after we read a new text character c can be computed by

    D'_i  =  min( D_i + 1,  D_{i+1} + 1,  g(D_{i−1}, c) ),
    where  g(D_{i−1}, c)  =  min( {k + 1} ∪ { j | j ≥ D_{i−1}  and  P_{i+j} = c } )

and where it always holds D_0 = 0; we report a match whenever D_{m−k} ≤ k. The formula for D'_i accounts for replacements, insertions and matches, respectively. Deletions are accounted for by keeping the minimum active row. All the interesting matches are caught by considering only the diagonals D_1 ... D_{m−k}.
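Before packing the diagonals in bits, the recurrence can be simulated directly with one integer per diagonal. The C sketch below is ours (names, array bound and the conservative treatment of the boundary diagonal are our simplifications) and is meant only to illustrate the recurrence:

    #include <stdio.h>
    #include <string.h>

    /* Direct (not bit-parallel) simulation of the D_i recurrence.  Reports end
       positions of occurrences of pat with at most k errors; assumes m-k < 128. */
    void nfa_diagonal_search(const char *pat, const char *txt, int k)
    {
        int m = (int)strlen(pat), n = (int)strlen(txt);
        int D[128], Dnew[128];

        D[0] = 0;                                   /* diagonal 0 is always active at row 0 */
        for (int i = 1; i <= m - k; i++) D[i] = k + 1;   /* other diagonals start inactive  */

        for (int j = 0; j < n; j++) {
            char c = txt[j];
            for (int i = 1; i <= m - k; i++) {
                int g = k + 1;                      /* matches: first row >= D[i-1] with P[i+row] == c */
                for (int e = D[i - 1]; e <= k; e++)
                    if (pat[i + e - 1] == c) { g = e; break; }
                int ins = (i == m - k) ? k + 1 : D[i + 1] + 1;  /* insertions (boundary simplified) */
                int rep = D[i] + 1;                 /* replacements */
                int best = rep < ins ? rep : ins;
                if (g < best) best = g;
                Dnew[i] = best > k + 1 ? k + 1 : best;
            }
            memcpy(&D[1], &Dnew[1], (size_t)(m - k) * sizeof(int));
            if (D[m - k] <= k) printf("match ending at %d\n", j);
        }
    }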
We use bit-parallelism to represent the D_i's in unary. Each one is held in k + 1 bits (plus an overflow bit) and stored sequentially inside a bit mask D. Interestingly, the effect is the same if we read the diagonals bottom-up and exchange 0 ↔ 1, with each bit representing a state of the NFA. The update formula can be seen either as an arithmetic implementation of the previous formula in unary or as a logical simulation of the flow of bits across the arrows of the NFA.
As in Shift-Or, a table of (m bits long) masks b[ ] is built representing match or mismatch against the pattern. A table B[c] is built by mapping the bits of b[ ] to their appropriate positions inside D. Figure 3 shows how the states are represented inside the masks D and B.
Figure 3: Bit-parallel representation of the NFA of Figure 2.
This representation requires k + 2 bits per diagonal, so the total number of bits is (m − k)(k + 2). If this number of bits does not exceed the computer word size w, the update can be done in O(1)
operations. The resulting algorithm is linear and very fast in practice.
For our purposes, it is important to realize that the only connection between the pattern and
the algorithm is given by the b[ ] table, and that the pattern can use classes of characters just as in
the Shift-Or algorithm. We use this property next to search for multiple patterns.
3 Superimposed Automata
In this section we describe an approach based on the bit-parallel simulation of the NFA just described
Suppose we have to search r patterns . We are interested in the occurrences of any one
of them, with at most k errors. We can extend the previous bit-parallelism approach by building
the automaton for each one, and then "superimpose" all the automata.
Assume that all patterns have the same length (otherwise, truncate them to the shortest one).
Hence, all the automata have the same structure, differing only in the labels of the horizontal
arrows.
The superimposition is defined as follows: we build the b[ ] table for each pattern, and then take
the bitwise-and of all the tables (recall that 0 means match and 1 means mismatch). The resulting
table matches at position i with the i-th character of any of the patterns. We then build the
automaton as before using this table.
The resulting automaton accepts a text position if it ends an occurrence of a much more relaxed pattern with classes of characters, namely {P^1_1, ..., P^r_1} {P^1_2, ..., P^r_2} ... {P^1_m, ..., P^r_m}. For example, if the search is for "patt" and "wait", as shown in Figure 4, the string "pait" is accepted with zero errors.
Figure 4: An NFA to filter the search for "patt" and "wait".
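As a small illustration (our own sketch; the function name and the 0-means-match convention follow the description above), superimposing the tables of r equal-length patterns amounts to a bitwise AND:

    #include <stdint.h>

    /* Superimpose the match tables of r patterns of length m (m <= 64).
       Bit i of B[c] ends up 0 iff some pattern has character c at position i+1,
       so the superimposed automaton matches the class {P1_i, ..., Pr_i}. */
    void superimpose(uint64_t B[256], char **pat, int r, int m)
    {
        for (int c = 0; c < 256; c++) B[c] = ~0ULL;
        for (int p = 0; p < r; p++) {
            uint64_t Bp[256];
            for (int c = 0; c < 256; c++) Bp[c] = ~0ULL;
            for (int i = 0; i < m; i++)
                Bp[(unsigned char)pat[p][i]] &= ~(1ULL << i);
            for (int c = 0; c < 256; c++) B[c] &= Bp[c];   /* 0 (match) survives the AND */
        }
    }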
For a moderate number of patterns, the filter is strict enough at the same cost of a single search.
Each occurrence reported by the automaton has to be verified for all the involved patterns (we use
the single-pattern automaton for this step). That is, we have to retraverse the last m + k characters to determine if there is actually an occurrence of some of the patterns.
If the number of patterns is too large, the filter will be too relaxed and will trigger too many
verifications. In that case, we partition the set of patterns into groups of r 0 patterns each, build
the automaton of each group and perform dr=r 0 e independent searches. The cost of this search
is O(r=r 0 n), where r 0 is small enough to make the cost of verifications negligible. This r 0 always
exists, since for r we have a single pattern per automaton and no verification is needed.
When grouping, we use the heuristic of sorting the patterns and packing neighbors in the same
group, trying to have the same first characters.
3.1 Hierarchical Verification
The simplest verification alternative (which we call "plain") is that, once a superimposed automaton
reports a match, we try the individual patterns one by one in the candidate area. However, a smarter
verification technique (which we call hierarchical) is possible.
Assume first that r is a power of two. Then, when the automaton reports a match, run two
new automata over the candidate area: one which superimposes the first half of the patterns and
another with the second half. Repeat the process recursively with each of the two automata that
finds again a match. At the end, the automata will represent single patterns and if they find a
match we know that their patterns have been really found (see Figure 5). Of course the automata
for the required subsets of patterns are all preprocessed. Since they correspond to the internal
nodes of a binary tree of r leaves, they are so the space and preprocessing cost does
not change. If r is not a power of two then one of the halves may have one more pattern than the
other.2424
Figure
5: The hierarchical verification method for 4 patterns. Each node of the tree represents a
check (the root represents in fact the global filter). If a node passes the check, its two children are
tested. If a leaf passes the check, its pattern has been found.
The advantage of hierarchical verification is that it can remove a number of candidates from
consideration in a single test. Moreover, it can even find that no pattern has really matched before
actually checking any specific pattern (i.e. it may happen that none of the two halves match
in a spurious match of the whole group). The worst-case overhead over plain verification is just a constant factor, that is, twice as many tests over the candidate area (2r − 1 instead of r). On
average, as we show later analytically and experimentally, hierarchical verification is by far superior
to plain verification.
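The following recursive C sketch is ours and only illustrates the halving scheme; group_matches (running the superimposed automaton of patterns lo..hi over the candidate area) and report are placeholder callbacks, not functions defined in the papers:

    /* Hierarchical verification over patterns lo..hi (inclusive). */
    void hierarchical_verify(int lo, int hi, const char *area,
                             int (*group_matches)(int, int, const char *),
                             void (*report)(int, const char *))
    {
        if (!group_matches(lo, hi, area)) return;    /* whole group fails: prune     */
        if (lo == hi) { report(lo, area); return; }  /* single pattern: a real match */
        int mid = (lo + hi) / 2;                     /* split the group in two halves */
        hierarchical_verify(lo, mid, area, group_matches, report);
        hierarchical_verify(mid + 1, hi, area, group_matches, report);
    }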
3.2 Automaton Partitioning
Up to now we have considered short patterns, whose NFA fit into a computer word. If this is not
the case (i.e. (m we partition the problem. In this subsection and the next we
adapt the two partitioning techniques described in [6].
The simplest technique to cope with a large automaton is to use a number of machine words
for the simulation. The idea is as follows: once the (large) automata have been superimposed,
we partition the superimposed automaton into a matrix of subautomata, each one fitting in a
computer word. Those subautomata behave slightly differently than the simple one, since they
must propagate bits to their neighbors. Figure 6 illustrates.
Once the automaton is partitioned, we run it over the text updating its subautomata. Each
step takes time proportional to the number of cells to update, i.e. O(k(m \Gamma k)=w). Observe,
however, that it is not necessary to update all the subautomata, since those on the right may not
have any active state. Following [31], we keep track of up to where we need to update the matrix
of subautomata, working only on the "active" cells.
Figure 6: A large NFA partitioned into a matrix of I × J computer words, satisfying (ℓ_r + 1)ℓ_c ≤ w.
3.3 Pattern Partitioning
This technique is based on Lemma 1 of Section 2.1. We can reduce the size of the problem if
we divide the pattern in j parts, provided we search all the sub-patterns with bk=jc errors. Each
match of a sub-pattern must be verified to determine if it is in fact a complete match.
To perform the partition, we pick the smallest j such that the problem fits in a single computer word (i.e. (⌈m/j⌉ − ⌊k/j⌋)(⌊k/j⌋ + 2) ≤ w). The limit of this method is reached for j = k + 1, since in that case we search with zero errors. The algorithm for this case is qualitatively different and is described in Section 4.
We divide each pattern in j subpatterns as evenly as possible. Once we partition all the r
patterns, we are left with j \Theta r subpatterns to be searched with bk=jc errors. We simply group
them as if they were independent patterns to search with the general method. The only difference
is that, after determining that a subpattern has appeared, we have to verify its complete pattern.
Another kind of hierarchical verification, which we call "hierarchical piece verification", is applied
in this case too. As shown in [23, 24], the single-pattern algorithm can verify hierarchically
whether the complete pattern matches given that a piece matches (see Figure 7). That is, instead
of checking the complete pattern we check the concatenation of two pieces containing the one
that matched, and if it matches then we check the concatenation of four pieces, and so on. This
works because Lemma 1 applies at each level of the tree of Figure 7. The method is orthogonal to
our hierarchical verification idea because hierarchical piece verification works bottom-up instead of
top-down and operates on pieces of the pattern rather than on sets of patterns.
As we are using our hierarchical verification on the sets of pattern pieces to determine which
piece matched given that a superimposition of them matched, we are coupling two different hierarchical
verification techniques in this case: we first use our new mechanism to determine which piece
matched from the superimposed group and then use hierarchical piece verification to determine the
occurrence of the complete pattern the piece belongs to. Figure 8 illustrates the whole process.
Figure 7: The hierarchical piece verification method for a pattern split in 4 parts. The boxes (leaves) are the elements which are actually searched, and the root represents the whole pattern. At least one pattern at each level must match in any occurrence of the complete pattern. If the bold box is found, all the bold lines may be verified.
Figure 8: The whole process of pattern partitioning with hierarchical verifications.
Partitioning into Exact Searching
This technique (called "exact partitioning" for short) is based on a single-pattern filter which
reduces the problem of approximate searching to a problem of multipattern exact searching. The
algorithm first appeared in [34], and was later improved in [7, 6, 24]. We first present the single-
pattern version and then our extension to multiple patterns.
4.1 A Filter Based on Exact Searching
A particular case of Lemma 1 shows that if a pattern matches a text position with k errors, and we
split the pattern in k+1 pieces, then at least one of the pieces must be present with no errors in each
occurrence (this is a folklore property which has been used several times [34, 18, 12]). Searching
with zero errors leads to a completely different technique.
Since there are efficient algorithms to search for a set of patterns exactly, we partition the pattern in k + 1 pieces (of similar length), and apply a multipattern exact search for the pieces.
Each occurrence of a piece is verified to check if it is surrounded by a complete match. If there are
not too many verifications, this algorithm is extremely fast.
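A minimal sketch (ours) of the even split of P into k + 1 pieces used by this filter:

    /* Split a pattern of length m into k+1 pieces of similar length; piece i is
       pat[start[i] .. start[i+1]-1].  Piece lengths differ by at most one.  */
    void split_pattern(int m, int k, int start[/* k+2 */])
    {
        for (int i = 0; i <= k + 1; i++)
            start[i] = (i * m) / (k + 1);    /* balanced piece boundaries */
    }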
From the many algorithms for multipattern search, an extension of Sunday's algorithm [27] gave
us the best results. We build a trie with the sub-patterns. From each text position we search the
text characters into the trie, until a leaf is found (match) or there is no path to follow (mismatch).
The jump to the next text position is precomputed as the minimum of the jumps allowed in each
sub-pattern by the Sunday algorithm.
As in [24], we use the same technique for hierarchical piece verification of a single pattern
presented in Section 3.3.
4.2 Searching Multiple Patterns
Observe that we can easily add more patterns to this scheme. Suppose we have to search for r patterns P^1, ..., P^r. We cut each one into k + 1 pieces and search in parallel for all the r(k + 1) pieces. When a piece is found in the text, we use a classical algorithm to verify its pattern in the
candidate area.
Note an important difference with superimposed automata. In this multipattern search we know
which piece has matched. This is not the case in superimposed automata, where not only we do
not know which piece matched, but it is even possible that no piece has really matched. The work
to determine which is the matching piece (carried out by hierarchical verification in superimposed
automata) is not necessary here. Moreover, we only detect real matches, so there are no more
matches in the union of patterns than the sum of the individual matches.
Therefore, there is no point in separating the search for the r(k in groups. The
only reason to superimpose less patterns is that the shifts of the Sunday algorithm are reduced as
the number of patterns grow, but as we show in the experiments, this never justifies in practice
splitting one search into two.
5 A Counting Filter
We present now a filter based on counting letters in common between the pattern and a text window.
This filter was first presented in [14] (a simple variant of [13]), but we use a slightly different version
here. Our variant uses a fixed-size instead of variable-size text window (a possibility already noted
in [32]), which makes it better suited for parallelization. We first explain the single-pattern filter
and then extend it to handle many patterns using bit-parallelism.
5.1 A Simple Counter
This filter is based in Lemma 2 of Section 2.1. It passes over the text examining an m-letters
long window. It keeps track of how many characters of P are present in the current text window
(accounting for multiplicities too). If, at a given text position j, m − k or more characters of P are in the window T_{j−m+1..j}, the window area is verified with a classical algorithm.
We implement the filtering algorithm as follows. We keep a counter count of pattern characters
appearing in the text window. We also keep a table A[ ] where, initially, the number of times that
each character c appears in P is kept in A[c]. Throughout the algorithm, each entry A[c] indicates
how many occurrences of c can still be taken as belonging to P . For example, if 'h' appears once
in P , we count only one of the 'h's of the text window as belonging to P . When A[c] is negative,
it means that c must exit the text window \GammaA[c] times before we take it again as belonging to P .
For example, if we run the pattern "aloha" over the text "aaaaaaaa", it will hold
and the value of the counter will be 2. This is independent on k.
To advance the window, we must include the new character T j+1 and exclude the last character,
To include the new character, we subtract one from A[T j+1 ]. If it was greater than zero
before being decremented, it is because the new character T j+1 is in P , so we increment count. To
exclude the old character T_{j−m+1}, we add one to A[T_{j−m+1}]. If it is greater than zero after being incremented, it is because T_{j−m+1} was considered to be in P, so we decrement count. Whenever count reaches m − k, we verify the preceding area.
As can be seen, the algorithm is not only linear (excluding verifications), but the number of
operations per character is very small.
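The following C sketch (ours; it reports every qualifying window rather than only the moment the counter first reaches the threshold) follows the include/exclude steps just described:

    #include <stdio.h>
    #include <string.h>

    /* Single-pattern counting filter: reports window end positions j such that
       T[j-m+1..j] contains at least m-k characters of P (candidates to verify). */
    void counting_filter(const char *pat, const char *txt, int k)
    {
        int A[256] = {0}, count = 0;
        int m = (int)strlen(pat), n = (int)strlen(txt);

        for (int i = 0; i < m; i++) A[(unsigned char)pat[i]]++;

        for (int j = 0; j < n; j++) {
            unsigned char in = (unsigned char)txt[j];
            if (A[in]-- > 0) count++;                  /* include the new character */
            if (j >= m) {
                unsigned char out = (unsigned char)txt[j - m];
                if (++A[out] > 0) count--;             /* exclude the old character */
            }
            if (j >= m - 1 && count >= m - k)
                printf("candidate window ending at %d\n", j);
        }
    }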
5.2 Keeping Many Counters in Parallel
To search r patterns in the same text, we use bit-parallelism to keep all the counters in a single
machine word. We must do that for the A[ ] table and for count.
The values of the entries of A[ ] lie in the range [−m..m], so we need exactly ℓ = ⌈log₂(2m + 1)⌉ bits to store them. This is also enough for count, since it is in the range [0..m]. Hence, we can pack ⌊w/(ℓ + 1)⌋ patterns of length m in a single search (recall that w is the number of bits in the computer word). If the patterns have different lengths, we can either truncate them to the shortest length or use a window size of the longest length. If we have more patterns, we must divide the set in subsets of maximal size and search each subset separately. We
focus our attention on a single subset now.
The algorithm simulates the simple one as follows. We have a table MA[ ] that packs all the
tables. Each entry of MA[ ] is divided in bit areas of length ℓ + 1. In the area of the machine word corresponding to each pattern, we store its normal A[ ] value, set to 1 the most significant bit of the area, and subtract 1 (i.e. we store A[c] + 2^ℓ − 1). When, in the algorithm, we have to add or subtract 1 to all A[ ]'s, we can easily do it in parallel without causing overflow from an area to the next. Moreover, the corresponding A[ ] value is not positive if and only if the most significant bit of the area is zero.
We have also a parallel counter Mcount, where the areas are aligned with MA[ ]. It is initialized by setting to 1 the most significant bit of each area and then subtracting m − k at each one, i.e. we store 2^ℓ − (m − k). Later, we can add or subtract 1 in parallel without causing overflow. Moreover,
the window must be verified for a pattern whenever the most significant bit of its area reaches 1.
The condition can be checked in parallel, but when some of the most significant bits reach 1, we
need to sequentially check which one it was.
Finally, observe that the counters that we want to selectively increment or decrement correspond
exactly to the MA[ ] areas that have a 1 in their most significant bit (i.e. those whose A[ ] value
is positive). This allows an obvious bit mask-shift-add mechanism to perform this operation in
parallel on all the counters. Figure 9 illustrates.
Figure 9: The bit-parallel counters. The example corresponds to the pattern "aloha" searched with 1 error and the text window "hello".
6 Analysis
We are interested in the complexity of the presented algorithms, as well as in the restrictions that
ff and r must satisfy for each mechanism to be efficient in filtering most of the unrelevant part of
the text.
To this effect, we define two concepts. First, we say that a multipattern search algorithm is
optimal if it searches r patterns in the same time it takes to search one pattern. If we call C_{n,r} the cost to search r patterns in a text of size n, then an algorithm is optimal if C_{n,r} = Θ(C_{n,1}). Second, we say that a multipattern search algorithm is useful if it searches r patterns in less than the time it takes to search them one by one with the corresponding sequential algorithm, i.e. C_{n,r} < r C_{n,1}.
As we work with filters, we are interested in the average case analysis, since in the worst case none
is useful.
We compare in Table 1 the complexities and limits of applicability of all the algorithms. Muth
& Manber are included for completeness. The analysis leading to these results is presented later in
this section.
Algorithm Complexity Optimality Usefulness
Simple Superimp. r
oe
Automaton Part. ffm 2 r
oe
Pattern Part. mr
oe
w(1\Gammaff)
Part. Exact Search
ffoe 1=ff
log oe (rm)+\Theta(log oe log oe (rm))
log oe m+\Theta(log oe log oe m)
Counting r log m
Muth & Manber mn
Table
1: Complexity, optimality and limit of applicability for the different algorithms.
We present in Figure 10 a schematical representation of the areas where each algorithm is the
best in terms of complexity. We show later how the experiments match those figures.
Exact partitioning is the fastest choice in most reasonable scenarios, for the error levels where
it can be applied. First, it is faster than counting for m/ log m < ασ^{1/α}/w, which does not hold asymptotically but holds in practice for reasonable values of m. Second, it is faster than superimposing automata in most practical cases.
- The only algorithm which can be faster than exact partitioning is that of Muth & Manber [17], namely for r > ασ^{1/α}. However, it is limited to k = 1.
- For increasing m, counting is asymptotically the fastest algorithm since its cost grows as
O(log m) instead of O(m) thanks to its optimal use of the bits of the computer word. However,
its applicability is reduced as m grows, being useless at the point where it wins over exact
partitioning.
- When the error level is too high for exact partitioning, superimposing automata is the only remaining alternative. Automaton partitioning is better for moderate m, while pattern partitioning is asymptotically better. Both algorithms have the same limit of usefulness, and for higher error levels no filter can improve over a sequential search.
Figure 10: The areas where each algorithm is better, in terms of α, m and r. In the left plot (varying m), we have assumed a moderate r (i.e. less than 50).
6.1 Superimposed Automata
Suppose that we search r patterns. As explained before, we can partition the set in groups of r′ patterns each, and search each group separately (with its r′ automata superimposed). The size of the groups should be as large as possible, but small enough for the verifications to be not significant. We analyze which is the optimal value for r′ and which is the complexity of the search.
In [6] we prove that the probability of a given text position matching a random pattern with error level α is O(γ^m), where γ = 1/(σ^{1−α} α^{2α} (1−α)^{2(1−α)}). It is also proved that γ < 1 whenever α < 1 − e/√σ, and experimentally shown that this holds very precisely in practice if we replace e by 1.09. In fact, a very abrupt phenomenon occurs, since the matching probability is very low for α < 1 − e/√σ and very high otherwise.
In this formula, 1/σ stands for the probability of a character crossing a horizontal edge of the automaton (i.e. the probability of two characters being equal). To extend this result, we note that we have r′ characters on each edge now, so the above mentioned probability is 1 − (1 − 1/σ)^{r′}, which is smaller than r′/σ. We use this upper bound as a pessimistic approximation (which stands for the case of all the r′ characters being different, and is tight for r′ ≪ σ).
As the single-pattern algorithm is O(n) time, the multipattern algorithm is optimal on average whenever the total cost of verifications is O(1) per character. Since each verification costs O(m) (because we use a linear-time algorithm on an area of length O(m)), we need the total number of verifications performed to be O(1/m) per character, on average. If we used the plain verification scheme, this would mean that the probability that a superimposed automaton matches a text position should be O(1/(mr)), as we have to perform r verifications.
If hierarchical verification is not used we have that, as r increases, matching becomes more
probable (because it is easier to cross a horizontal edge of the automaton) and it costs more
(because we have to check the r patterns one by one). This results in two different limits on the
maximum allowable r, one for each of the two facts just stated. The limit due to the increased cost
of each verification is more stringent than that of increased matching probability.
The resulting analysis without hierarchical verification is very complex and is omitted here
because hierarchical verification yields considerably better results and a simpler analysis. As we
show in Appendix A, the average cost to verify a match of the superimposed automaton is O(m)
when hierarchical verification is used, instead of the O(rm) cost of plain verification. That is, the
cost does not grow as the number of patterns increases.
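The hierarchical verification analyzed in Appendix A can be sketched as the following recursion, which re-checks halves of the group until a single pattern remains, so that on average only O(m) work is spent per match of the superimposed automaton (group_matches() and report() are assumed callbacks, not the paper's interface):

    #include <stddef.h>

    typedef int  (*group_match_fn)(const char *text, size_t pos, int lo, int hi);
    typedef void (*report_fn)(int pattern_id, size_t pos);

    /* Verify a match of the superimposed automaton of patterns [lo, hi)
       at text position pos.  group_matches() runs the superimposed search
       of the given subgroup over the O(m)-long area around pos. */
    void hierarchical_verify(const char *text, size_t pos, int lo, int hi,
                             group_match_fn group_matches, report_fn report)
    {
        if (hi - lo == 1) {                       /* a single pattern remains */
            if (group_matches(text, pos, lo, hi))
                report(lo, pos);
            return;
        }
        int mid = lo + (hi - lo) / 2;             /* split the group in two   */
        if (group_matches(text, pos, lo, mid))    /* recheck the left half    */
            hierarchical_verify(text, pos, lo, mid, group_matches, report);
        if (group_matches(text, pos, mid, hi))    /* recheck the right half   */
            hierarchical_verify(text, pos, mid, hi, group_matches, report);
    }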
Hence, the only limit that prevents us from superimposing all the r patterns is that the matching probability becomes higher. That is, if α > 1 − e√(r′/σ), then the matching probability is too high and we will spend too much time verifying almost all text positions. On the other hand, we can superimpose as much as we like before that limit is reached. This tells us that the best r′ (which we call r*) is the maximum one not reaching the limit, i.e.

r* = σ ((1 − α)/e)²    (1)
Since we partition into sets small enough to make the verifications insignificant, the cost is simply O((r/r*) n) = O(r n e²/(σ(1 − α)²)). This means that the algorithm is optimal for r = O(σ(1 − α)²) (taking the error level as a constant), or alternatively for α ≤ 1 − e√(r/σ). On the other hand, for α ≥ 1 − e/√σ, the cost is O(rn), not better than the trivial solution (i.e. r* = 1, hence no superimposition occurs and the algorithm is not useful). Figure 11 illustrates.
Automaton Partitioning: the analysis for this case is similar to the simple one, except that each step of the large automaton takes time proportional to the total number of subautomata.
Figure 11: Behavior of superimposed automata. On the left, the cost increases linearly with r, with slope depending on α. On the right, the cost of a parallel search (t_p) approaches that of r single searches (r × t_s) as the error level grows.
In fact, this is a worst case since on average not all cells are active, but we use the worst case because we superimpose all the patterns we can until the worst case of the search is almost reached. Therefore, the cost formula is the simple one, O((r/r*) n), multiplied by the number of subautomata. This is optimal for r = O(σ(1 − α)²) (taking the error level as a constant), or alternatively for α ≤ 1 − e√(r/σ). It is useful for α < 1 − e/√σ.
Pattern Partitioning: we have now jr patterns to search with ⌊k/j⌋ errors. The error level is the same for the subproblems (recall that the subpatterns are of length m/j). To determine which piece matched from the superimposed group, we pay O(m) independently of the number of pieces superimposed (thanks to the hierarchical verification). Hence the limit for our grouping is given by Eq. (1). In both the superimposed and the single-pattern algorithm, we also pay to verify whether the match of a piece is part of a complete match. As we show in [23], this cost is negligible for α < 1 − e/√σ, which is less strict than the limit given by Eq. (1).
As we have jr pieces to search, we need an analytical expression for j. Since j is just large enough for the subpatterns to fit in a computer word, j = ⌈m d(w, α)⌉, where d(w, α) can be shown to be O(1/√w) by maximizing it in terms of α (see [23]). Therefore, the complexity is O((jr/r*) n) = O(j r n e²/(σ(1 − α)²)). On the other hand, the search cost of the single-pattern algorithm is O(jrn). With respect to the simple algorithm for short patterns, both costs have been multiplied by j, and therefore the limits for optimality and usefulness are the same.
If we compare the complexities of pattern versus automaton partitioning, we have that pattern partitioning is better for k > √w. This means that for constant α and increasing m, pattern partitioning is asymptotically better.
6.2 Partitioning into Exact Searching
In [6] we analyze this algorithm as follows. Except for verifications, the search time can be made O(n) in the worst case by using an Aho-Corasick machine [1], and O(αn) in the best case if we use a multipattern Boyer-Moore algorithm. This is because we search pieces of length m/(k+1) ≈ 1/α. We are interested in analyzing the cost of verifications. Since we cut the pattern into k + 1 pieces, they are of length ⌊m/(k+1)⌋ or ⌈m/(k+1)⌉. The probability of each piece matching is at most 1/σ^{⌊m/(k+1)⌋}. Hence, the probability of any piece matching is at most (k + 1)/σ^{⌊m/(k+1)⌋}.
We can easily extend that analysis to the case of multiple search, since we have now r(k + 1) pieces of the same length. Hence, the probability of verifying is r(k + 1)/σ^{⌊m/(k+1)⌋}. We check the matches using a classical algorithm such as dynamic programming. Note that in this case we know which pattern to verify, since we know which piece matched. As we show in [23], the total verification cost if the pieces are of length ℓ is O(ℓ²) (in our case, ℓ = ⌊m/(k+1)⌋ ≈ 1/α). Hence, the search cost is

O(n (1 + r m/(α σ^{1/α})))

where the "1" must be changed to "α" if we consider the best case.
We consider optimality and usefulness now. An optimal algorithm should pay O(n) total search time, which holds for α ≤ 1/(log_σ(rm) + Θ(log_σ log_σ(rm))).
The algorithm is always useful, since it searches at the same cost independently of the number of patterns, and the number of verifications triggered is exactly the same as if we searched each pattern separately. However, if α > 1/(log_σ m + Θ(log_σ log_σ m)), then both algorithms (single and multipattern) work as much as dynamic programming and hence the multipattern search is not useful. The other case in which the algorithm could fail to be useful is when the shifts of a Boyer-Moore search are shortened by having many patterns, up to the point where it is better to perform separate searches. This never happens in practice.
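As an illustration of the piece generation used by partitioning into exact search (a sketch under our own naming; the actual implementation feeds the pieces to an Aho-Corasick machine or a multipattern Boyer-Moore search):

    /* Each pattern is cut into k+1 contiguous pieces of length about
       m/(k+1); every piece of every pattern is then searched exactly in a
       single multipattern pass, and a piece hit triggers an approximate
       verification of its owning pattern only. */
    struct piece {
        const char *start;   /* beginning of the piece inside the pattern */
        int         len;     /* piece length                              */
        int         owner;   /* index of the pattern this piece came from */
    };

    /* Cut pattern 'pat' (length m) into k+1 pieces, appending them to out[].
       Returns the number of pieces produced. */
    int cut_into_pieces(const char *pat, int m, int k, int owner,
                        struct piece *out)
    {
        int i, from = 0;
        for (i = 0; i < k + 1; i++) {
            int to = (m * (i + 1)) / (k + 1);    /* balanced boundaries */
            out[i].start = pat + from;
            out[i].len   = to - from;
            out[i].owner = owner;
            from = to;
        }
        return k + 1;
    }

A hit of out[i] then triggers a verification of pattern out[i].owner only, over an area of length O(m) around the match.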
6.3 Counting
If the number of verifications is negligible, each pass of the algorithm is O(n). In the case of multiple patterns, only O(w/log m) patterns can be packed in a single search, so the cost to search r patterns is O(rn log(m)/w).
The difficult part of the analysis is the maximum error level α that the filtration scheme can tolerate while keeping the number of verifications low. We assume that we use dynamic programming to verify potential matches. Let f denote the probability of verifying a text position. If f = O(log(m)/(wm²)) the algorithm keeps linear (i.e. optimal) on average. The algorithm is always useful since the number of verifications triggered with the multipattern search is the same as for the single-pattern version. However, if f = Ω(1/m), both algorithms work in O(rmn) time, as for dynamic programming, and hence the filter is not useful.
We derive in Appendix B a pessimistic bound for the limit of optimality and usefulness, which shows that as m grows, we can tolerate smaller error levels. This limit holds for any condition of the type f = O(1/m^c), independently of the constant c. In our case, we need c = 2 and c = 1, respectively, for optimality and usefulness.
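The single-pattern counting filter on which this analysis is based can be sketched as follows (the multipattern version packs r such counters, of O(log m) bits each, into machine words; this plain-C version is only an illustration and verify() is an assumed callback):

    #include <string.h>

    /* A window of length m slides over the text and we count how many
       window characters still "belong" to the pattern; if at least m-k
       of them do, the area is verified with a classical algorithm. */
    void counting_filter(const char *text, long n, const char *pat,
                         int m, int k, void (*verify)(long window_end))
    {
        int A[256];     /* pattern-minus-window character balance            */
        int count = 0;  /* window characters currently counted as "matched"  */
        long i;

        memset(A, 0, sizeof A);
        for (i = 0; i < m; i++)
            A[(unsigned char)pat[i]]++;

        for (i = 0; i < n; i++) {
            unsigned char cin = (unsigned char)text[i];
            if (A[cin] > 0) count++;              /* character enters window */
            A[cin]--;
            if (i >= m) {
                unsigned char cout = (unsigned char)text[i - m];
                A[cout]++;                        /* character leaves window */
                if (A[cout] > 0) count--;
            }
            if (i >= m - 1 && count >= m - k)
                verify(i);                        /* candidate window at i   */
        }
    }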
7 Experimental Results
We experimentally study our algorithms and compare them against previous work. We tested with megabytes of lower-case English text; the patterns were randomly selected from the same text. We used a Sun UltraSparc-1 running Solaris 2.5.1, with 64 megabytes of RAM. Each data point was obtained by averaging the Unix user time over 10 trials. We present all the times in tenths of seconds per megabyte.
We do not present results on random text to avoid an excessively lengthy exposition. In general, all the filters improve as the alphabet size σ grows. Lower-case English text behaves approximately as random text whose effective alphabet size is the inverse of the probability that two random letters are equal.
Figure 12 compares the plain and hierarchical verification methods against a sequential application of the r searches, for the case of superimposed automata when the automaton fits in a computer word. We show the cases of increasing r and of increasing k. It is clear that hierarchical verification outperforms plain verification in all cases. Moreover, the analysis for hierarchical verification is confirmed, since the maximum r up to which the cost of the parallel algorithm does not grow linearly is very close to r*. On the other hand, the algorithm with simple verification degrades sooner, since its verification cost grows with r.
The mentioned maximum r value is the point where the parallelism ratio is maximized. That
is, if we have to search for 2r patterns, it is better to split them in two groups of size r and search
each group sequentially. To stress this point, Figure 12 (right) shows the quotient between the
parallel and the sequential algorithms, where the optimum is clear for superimposed automata. On
the other hand, the parallelism ratio of exact partitioning keeps improving as r grows, as predicted
by the analysis (there is an optimum for larger m, related to the Sunday shifts, but it still does not justify splitting a search in two).
When we compare our algorithms against the others, we consider only hierarchical verification
and use this r value to obtain the optimal grouping for the superimposed automata algorithms.
The exact partitioning, on the other hand, performs all the searches in a single pass. In counting,
it is clear that the speedup is optimal and we pack as many patterns as we can in a single search.
Notice that the plots which depend on r show the point where r should be selected. Those
which depend on k (for fixed r), on the other hand, just show how the parallelization works as the
error level increases, which cannot be controlled by the algorithm.
We now compare our algorithms among themselves and against others. We begin with short patterns, whose NFAs fit in a computer word. Figure 13 shows the results for increasing r and for increasing k. For low and moderate error levels, exact partitioning is the fastest algorithm. In particular, it is faster than previous work [17] when the number of patterns is below 50 (for English text). When the error level increases, superimposed automata become the best choice; this agrees with the analysis.
Figure 12: Comparison of sequential and multipattern algorithms for m = 9. The rows correspond to the different values of k. The left plots show search time and the right plots show the ratio between the parallel (t_p) and the sequential (r × t_s) time. (Curves: Sequential NFA; Superimposed automata with plain verification; Superimposed automata with hierarchical verification; Exact Partitioning.)
Figure 13: Comparison among algorithms for m = 9. The top plots show increasing r for fixed k; the bottom plots show increasing k for fixed r. (Curves: Exact Partitioning, Superimposed Automata, Counting.)
We now consider longer patterns (m = 30). Figure 14 shows the results for increasing r and for increasing k. As before, exact partitioning is the best where it can be applied, and improves
over previous work [17] for r up to 90-100. For these longer patterns the superimposed automata
technique also degrades, and only rarely is it able to improve over exact partitioning. In most cases
it only begins to be the best when it (and all the others) are no longer useful.
Figure 15 summarizes some of our experimental results, becoming a practical version of the theoretical Figure 10. The main differences are that exact partitioning is better in practice than what its complexity suggests, and that there is no clear winner between pattern and automaton partitioning.
Figure 14: Comparison among algorithms for m = 30. The top plots show increasing r (for k = 4); the bottom plots show increasing k (for fixed r). Pattern partitioning is not run where it would resort to exact partitioning. (Curves: Exact Partitioning, Pattern Partitioning, Automaton Partitioning, Counting.)
Figure 15: The areas where each algorithm is better, in practice, on English text. In the right plot we assumed m = 9. Compare with Figure 10. (Regions shown: Partitioning into Exact Search, Superimposed Automata, and Muth-Manber.)
8 Conclusions
We have presented a number of different filtering algorithms for multipattern approximate searching. These are the only algorithms that allow an arbitrary number of errors. On the other hand,
the only previous work allows just one error and we have outperformed it when the number of
patterns to search is below 50-100 on English text, depending on the pattern length.
We have explained, analyzed and experimentally tested our algorithms. We have also presented
a map of the best algorithms for each case. Many of the ideas we propose here can be used to
adapt other single-pattern approximate searching algorithms to the case of multipattern searching.
For instance, the idea of superimposing automata can be adapted to most bit-parallel algorithms,
such as [19]. Another fruitful idea is that of exact partitioning, where a multipattern exact search
is easily adapted to search the pieces of many patterns. There are many other filtering algorithms
of the same type, e.g. [28]. On the other hand, other exact multipattern search algorithms may be
better suited to other search parameters (e.g. working better on many patterns).
A number of practical optimizations to our algorithms are possible, for instance:
• If the patterns have different lengths, we truncate them to the shortest one when superimposing automata. We can cleverly select the substrings to use, since having the same character at the same position in two patterns improves the filtering mechanism.
• We used simple heuristics to group subpatterns in superimposed automata. These can be improved to maximize common letters too. A more general technique could group patterns which are similar in terms of the number of errors needed to convert one into the other (i.e. a clustering technique).
• We are free to partition each pattern into k + 1 pieces as we like in exact partitioning. This is used in [24] to minimize the expected number of verifications when the letters of the alphabet do not have the same probability of occurrence (e.g. in English text). An O(m³) dynamic programming algorithm is presented there to select the best partition, and this could be applied to multipattern search.
Acknowledgements
We thank Robert Muth and Udi Manber for their implementation of [17]. We also thank the
anonymous referees for their detailed comments that improved this work.
References
Efficient string matching: an aid to bibliographic search.
Efficient Text Searching.
Text retrieval: Theory and practice.
A new approach to text searching.
Multiple approximate string matching.
Faster approximate string matching.
Fast and practical approximate pattern matching.
A fast string searching algorithm.
Theoretical and empirical comparisons of approximate string matching algorithms.
Sublinear approximate string matching and biological applications.
An improved algorithm for approximate string matching.
Simple and efficient string matching with k mismatches.
A comparison of approximate string matching algo- rithms
Fast parallel and serial approximate string matching.
Approximate multiple string search.
A sublinear algorithm for approximate keyword searching.
A fast bit-vector algorithm for approximate pattern matching based on dynamic progamming
Multiple approximate string matching by counting.
Approximate Text Searching.
A guided tour to approximate string matching.
Improving an algorithm for approximate pattern matching.
Very fast and simple approximate string matching.
The theory and computation of evolutionary distances: pattern recognition.
A very fast substring search algorithm.
On using q-gram locations in approximate string matching
Approximate Boyer-Moore string matching
Algorithms for approximate string matching.
Finding approximate patterns in strings.
Approximate string matching with q-grams and maximal matches
Approximate string matching using within-word parallelism
Fast text searching allowing errors.
Federica Mandreoli , Riccardo Martoglia , Paolo Tiberio, A syntactic approach for searching similarities within sentences, Proceedings of the eleventh international conference on Information and knowledge management, November 04-09, 2002, McLean, Virginia, USA | multipattern search;search allowing errors;string matching |
611411 | Web-conscious storage management for web proxies. | Many proxy servers are limited by their file I/O needs. Even when a proxy is configured with sufficient I/O hardware, the file system software often fails to provide the available bandwidth to the proxy processes. Although specialized file systems may offer a significant improvement and overcome these limitations, we believe that user-level disk management on top of industry-standard file systems can offer similar performance advantages. In this paper, we study the overheads associated with file I/O in web proxies, we investigate their underlying causes, and we propose Web-Conscious Storage Management, a set of techniques that exploit the unique reference characteristics of web-page accesses in order to allow web proxies to overcome file I/O limitations. Using realistic trace-driven simulations, we show that these techniques can improve the proxy's secondary storage I/O throughput by a factor of 15 over traditional open-source proxies, enabling a single disk to serve over 400 (URL-get) operations per second. We demonstrate our approach by implementing Foxy, a web proxy which incorporates our techniques. Experimental evaluation suggests that Foxy outperforms traditional proxies, such as SQUID, by more than a factor of four in throughput, without sacrificing response latency. | Introduction
World Wide Web proxies are being increasingly used to provide Internet access to users behind a firewall and
to reduce wide-area network traffic by caching frequently used URLs. Given that web traffic still increases
exponentially, web proxies are one of the major mechanisms to reduce the overall traffic at the core of the Internet,
protect network servers from traffic surges, and improve the end user experience. Today's typical web proxies
usually run on UNIX-like operating systems and their associated file systems. While UNIX-like file systems
are widely available and highly reliable, they often result in poor performance for web proxies. For instance,
Rousskov and Soloviev [35] observed that disk delays in web proxies contribute about 30% toward total hit
response time. Mogul [28] observed that the disk I/O overhead of caching turns out to be much higher than the
latency improvement from cache hits at the web proxy at Digital Palo Alto firewall. Thus, to save the disk I/O
overhead the proxy is typically run in non-caching mode.
These observations should not be surprising, because UNIX-like file-systems are optimized for general-purpose
workloads [27, 30, 21], while web proxies exhibit a distinctly different workload. For example, while
read operations outnumber write operations in traditional UNIX-like file systems [3], web accesses induce a write-
dominated workload [25]. Moreover, while several common files are frequently updated, most URLs are rarely
(if ever at all) updated. In short, traditional UNIX-like file access patterns are different from web access patterns,
and therefore, file systems optimized for UNIX-like workloads do not necessarily perform well for web-induced
workloads.
In this article we study the overheads associated with disk I/O for web proxies, and propose Web-Conscious
Storage Management (WebCoSM), a set of techniques designed specifically for high performance. We show
that the single most important source of overhead is associated with storing each URL in a separate file, and we
propose methods to aggregate several URLs per file. Once we reduce file management overhead, we show that
the next largest source of overhead is associated with disk head movements due to file write requests in widely
scattered disk sectors. To improve write throughput, we propose a file space allocation algorithm inspired from
log-structured file systems [34]. Once write operations proceed at maximum speed, URL read operations emerge
Figure 1: Typical Web Proxy Action Sequence.
as the next largest source of overhead. To reduce the disk read overhead we identify and exploit the locality that
exists in URL access patterns, and propose algorithms that cluster several read operations together, and reorganize
the layout of the URLs on the file so that URLs accessed together are stored in nearby file locations.
We demonstrate the applicability of our approach by implementing Foxy, a web proxy that incorporates our
WebCoSM techniques that successfully remove the disk bottlenecks from the proxy's critical path. To evaluate the
performance of our approach we use a combination of trace-driven simulation and experimental evaluation. Using
simulations driven by real traces we show that the file I/O throughput can be improved by a factor of 18, enabling a
single disk proxy to serve around 500 (URL-get) operations per second. Finally, we show that our implementation
outperforms traditional proxies such as SQUID by more than a factor of 7 under heavy load.
We begin by investigating the workload characteristics of web proxies, and their implications to the file system
performance. Then, in section 2 we motivate and describe our WebCoSM techniques, and in section 3 we present
comprehensive performance results that show the superior performance potential of these techniques. We verify
the validity of the WebCoSM approach in section 4 by outlining the implementation of a proof-of-concept proxy
server and by evaluating its performance through experiments. We then compare the findings of this paper to
related research results in section 5, discuss issues regarding WebCoSM in section 6 and summarize our findings
in section 7.
2 Web-Conscious Storage Management Techniques
Traditional web proxies frequently require a significant number of operations to serve each URL request. Consider
a web proxy that serves a stream of requests originating from its clients. For each request it receives, the proxy
has to look-up its cache for the particular URL. If the proxy does not have a local copy of the URL (cache miss),
it requests the URL from the actual server, saves a copy of the data in its local storage, and delivers the data to the
requesting client. If, on the other hand, the proxy has a local copy of the URL (cache hit), it reads the contents
from its local storage, and delivers them to the client. This action sequence is illustrated in Figure 1.
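The action sequence of Figure 1 can be summarized by the following sketch (the helper names are assumptions made for illustration, not Foxy's or SQUID's actual API):

    typedef struct client client_t;     /* opaque handles: assumptions for   */
    typedef struct object object_t;     /* the sketch, not a real proxy type */

    object_t *cache_lookup(const char *url);
    object_t *fetch_from_server(const char *url);
    void      cache_store(const char *url, object_t *obj);
    void      cache_read(object_t *obj);
    void      send_to_client(client_t *client, object_t *obj);

    void serve_request(client_t *client, const char *url)
    {
        object_t *obj = cache_lookup(url);    /* look up the local cache          */
        if (obj == NULL) {                    /* miss: fetch over the Internet    */
            obj = fetch_from_server(url);
            cache_store(url, obj);            /* save a local copy (a file write) */
        } else {
            cache_read(obj);                  /* hit: read from local storage     */
        }
        send_to_client(client, obj);          /* deliver the data to the client   */
    }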
To simplify storage management, traditional proxies [42], [8] store each URL on a separate file, which induces
several file system operations in order to serve each request. Indeed, at least three file operations are needed to
serve each URL miss: (i) an old file (usually) needs to be deleted in order to make space for the new URL, (ii)
a new file needs to be created in order to store the contents of this URL, and (iii) the new file needs to be filled
with the URL's contents. Similarly, URL hits are costly operations because, even though they do not change
the file's data or meta-data, they invoke one file read operation, which is traditionally expensive due to disk head
movements. Thus, for each URL request operation, and regardless of hit or miss, the file system is faced with an
intensive stream of file operations.
File operations are usually time consuming and can easily become a bottleneck for a busy web proxy. For
instance, it takes more than 20 milliseconds to create even an empty file, as can be seen in Figure 2 1 . To make
matters worse, it takes another 10 milliseconds to delete this empty file. This file creation and deletion cost
grows even larger for large files (10 Kbytes). Given that for each URL-miss the proxy should create and
delete a file, the proxy will only be able to serve one URL miss every 60 milliseconds, or equivalently, less than
To quantify performance limitations of file system operations, we experimented with the HBENCH-OS file system benchmark [6].
HBENCH-OS evaluates file creation/deletion cost by creating 64 files of a given size in the same directory and then deleting them in reverse-
of-creation order. We run HBENCH-OS on an UltraSparc-1 running Solaris 5.6, for file sizes of 0 to 10 Kbytes. Our results, plotted in Figure
2, suggest that the time required to create and delete a file is very high.
Figure 2: File Management Overhead. The figure plots the cost of file creation and file deletion operations as measured by HBENCH-OS (latfs). The benchmark creates 64 files and then deletes them in the order of creation. The same process is repeated for files of varying sizes. We see that the time to create a file is more than 20 msec, while the time to delete a file is between 10 and 20 msec. (Axes: file size in Kbytes vs. time per operation in msec; curves: File Create, File Delete.)
17 URL misses per second, which provide clients with no more than 100-200 Kbytes of data. Such a throughput
is two orders of magnitude smaller than most modern magnetic disks provide. This throughput is even smaller
than most Internet connections. Consequently, the file system of a web-proxy will not be able to keep up with the
proxy's Internet requests. This disparity between the file system's performance and the proxy's needs is due to
the mismatch between the storage requirements needed by the web proxy and the storage guarantees provided by
the file system. We address this semantic mismatch in two ways: meta-data overhead reduction, and data-locality
exploitation.
2.1 Meta-Data Overhead Reduction
Most of the meta-data overhead that cripples web proxy performance can be traced to the storage of each URL
in a separate file. To eliminate this performance bottleneck we propose a novel URL-grouping method (called
BUDDY), in which we store all the URLs into a small number of files. To simplify space management within each
of these files, we use the URL size as the grouping criterion. That is, URLs that need one block of disk space (512
bytes) are all stored in the same file, URLs that need two blocks of disk space (1024 bytes) are all stored in another
file, and so forth. In this way, each file is composed of fixed-sized slots, each large enough to contain a URL. Each
new URL is stored in the first available slot of the appropriate file. The detailed behavior of BUDDY is as follows:
• Initially, BUDDY creates one file to store all URLs that are smaller than one block, another file to store all URLs that are larger than one block but smaller than two, and so on. URLs larger than a predefined threshold are stored in separate files, one URL per file.
• On a URL-write request for a given size, BUDDY finds the first free slot in the appropriate file and stores the contents of the new URL there. If the size of the contents of the URL is above the threshold (128 Kbytes in most of our experiments), BUDDY creates a new file to store this URL only.
• When a URL needs to be replaced, BUDDY marks the corresponding slot in the appropriate file as free. This slot will be reused at a later time to store another URL.
• On a URL-read request, BUDDY finds the slot in the appropriate file and reads the content of the requested URL.
The main advantage of BUDDY is that it practically eliminates the overhead of file creation/deletion operations
by storing potentially thousands of URLs per file. The URLs that occupy a whole file of their own, are large
enough and represent a tiny percentage of the total number of URLs, so that their file creation/deletion overhead
is not noticeable overall.
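A minimal sketch of the slot management just described is given below; the constants and helper names are our assumptions, not Foxy's actual code:

    #define BLOCK     512                  /* slot granularity in bytes           */
    #define THRESHOLD (128 * 1024)         /* larger URLs get a file of their own */
    #define NCLASSES  (THRESHOLD / BLOCK)  /* one fixed-slot file per size class  */
    #define MAXFREE   4096                 /* bounded free list, for the sketch   */

    struct size_class {
        long free_slots[MAXFREE];   /* slot indices freed by URL replacement      */
        long nfree;
        long nslots;                /* slots handed out so far (file grows here)  */
    };

    static struct size_class classes[NCLASSES];

    /* URLs that need c blocks (1 <= c <= NCLASSES) share the file of class c-1. */
    static int class_of(long size) { return (int)((size + BLOCK - 1) / BLOCK) - 1; }

    /* On a URL-write: reuse the first free slot, or append a new one. */
    long buddy_alloc_slot(long size)
    {
        struct size_class *c = &classes[class_of(size)];
        if (c->nfree > 0)
            return c->free_slots[--c->nfree];
        return c->nslots++;
    }

    /* On a URL replacement: just mark the slot as reusable. */
    void buddy_free_slot(long size, long slot)
    {
        struct size_class *c = &classes[class_of(size)];
        if (c->nfree < MAXFREE)
            c->free_slots[c->nfree++] = slot;
    }

Because URL-write and URL-delete reduce to updating the free list of an existing file, no file creation or deletion is needed in the common case.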
2.2 Data-Locality Exploitation
Although BUDDY reduces the file management overhead, it makes no special effort to layout data intelligently
on disk in a way that improves write or read performance. However, our experience suggests that a significant
amount of locality exists in the URL reference streams; identifying and exploiting this locality can result in large
performance improvements.
2.2.1 Optimizing Write throughput
Both traditional proxies and our proposed BUDDY technique write new URL data in several different files scattered
all over the disk, possibly requiring a head movement operation for each URL write. The number of these head
movements can be significantly reduced if we write the data to the disk in a log-structured fashion. Thus, instead
of writing new data in some free space on the disk, we continually append data to it until we reach the end of
the disk, in which case we continue from the beginning. This approach has been widely used in log-structured
file systems [5, 17, 29, 38]. However, unlike previous implementations of log-structured file systems, we use a
user-level log-structured file management, which achieves the effectiveness of log-structured file systems on top
of commercial operating systems.
Towards this end, we developed a file-space management algorithm (called STREAM) that, much like log-structured file systems, streams write operations to the disk: the web proxy stores all URLs in a single file organized in slots 512 bytes long. Each URL occupies an integer number of (usually) contiguous slots. URL-
read operations read the appropriate portions of the file that correspond to the given URL. URL-write operations
continue appending data to the file until the end of the file, in which case, new URL-write operations continue
from the beginning of the file writing on free slots. URL-delete operations mark the space currently occupied the
URL as free, so they can later be reused by future URL-write operations.
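The following sketch illustrates STREAM's allocation policy (assumed names and a bitmap representation; the real system operates on top of a Solaris UFS file): the write pointer only advances, wrapping around at the end of the file, so consecutive URL-write operations land on contiguous slots.

    #define SLOT        512
    #define TOTAL_SLOTS (4L * 1024 * 1024)   /* 2-Gbyte cache file / 512-byte slots */

    static unsigned char used[TOTAL_SLOTS];  /* 1 if the slot holds live data */
    static long write_ptr = 0;               /* next slot to try              */

    /* Find nslots contiguous free slots at or after the write pointer,
       wrapping around to the beginning of the file at the end. */
    long stream_alloc(long nslots)
    {
        long scanned = 0;
        while (scanned < TOTAL_SLOTS) {
            long start = write_ptr, run = 0;
            while (run < nslots && start + run < TOTAL_SLOTS && !used[start + run])
                run++;
            if (run == nslots) {                 /* found a contiguous hole     */
                for (long i = 0; i < nslots; i++) used[start + i] = 1;
                write_ptr = start + nslots;
                return start;                    /* caller writes at start*SLOT */
            }
            scanned += run + 1;
            write_ptr = start + run + 1;
            if (write_ptr >= TOTAL_SLOTS) write_ptr = 0;    /* wrap around */
        }
        return -1;   /* no room: the replacement policy must free space first */
    }

    void stream_free(long start, long nslots)   /* URL-delete marks slots free */
    {
        for (long i = 0; i < nslots; i++) used[start + i] = 0;
    }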
2.2.2 Improving Read Requests
While STREAM does improve the performance of write operations, URL-read operations still suffer from disk seek
and rotational overhead, because the disk head must move from the point it was writing data to disk to the point
from where it should read the data. To make matters worse, once the read operation is completed, the head must
move back to the point it was before the read operation and continue writing its data onto the disk. For this reason,
each read operation within a stream of writes, induces two head movements: the first to move the head to the
reading position, and the second to restore the head in the previous writing position, resulting in a ping-pong effect.
To reduce this overhead, we have developed a LAZY-READS technique which extends STREAM so that it batches
read operations. When a URL-read operation is issued, it is not serviced immediately, but instead, it is sent into
an intermediate buffer. When the buffer fills up with read requests (or when a timeout period expires), the pending
read requests are forwarded to the file system, sorted according to the position (in the file) of the data they want to
read. Using LAZY-READS, a batch of N URL-read requests can be served with at most N + 1 head movements, instead of 2N.
Figure
3 illustrates the movements of the heads before and after LAZY-READS. Although LAZY-READS appear to
increase the latency of URL-read operations, a sub-second timeout period guarantees unnoticeable latency increase
for read operations.
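A minimal sketch of the batching logic is shown below (structures and names are assumptions); read requests are queued and flushed to the file system in file-offset order once a batch has accumulated or a timeout fires:

    #include <stdlib.h>

    #define BATCH 10

    struct pending_read {
        long  offset;                          /* position of the URL in the cache file */
        long  length;
        void (*deliver)(long off, long len);   /* completion callback (assumed)         */
    };

    static struct pending_read queue[BATCH];
    static int npending = 0;

    static int by_offset(const void *a, const void *b)
    {
        long d = ((const struct pending_read *)a)->offset
               - ((const struct pending_read *)b)->offset;
        return (d > 0) - (d < 0);
    }

    static void flush_reads(void)
    {
        qsort(queue, npending, sizeof queue[0], by_offset);
        for (int i = 0; i < npending; i++)     /* one sweep of the disk head */
            queue[i].deliver(queue[i].offset, queue[i].length);
        npending = 0;
    }

    /* Called on every URL-read; flush_reads() is also called from a
       sub-second timeout handler so that no request waits unboundedly. */
    void lazy_read(long offset, long length, void (*deliver)(long, long))
    {
        queue[npending].offset  = offset;
        queue[npending].length  = length;
        queue[npending].deliver = deliver;
        if (++npending == BATCH)
            flush_reads();
    }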
2.2.3 Preserving the Locality of the URL stream
The URL requests that arrive at a web proxy exhibit a significant amount of spatial locality. For example, consider
the case of an HTML page that contains several embedded images. Each time a client requests the HTML page,
it will probably request all the embedded images as well. That is, the page and its embedded images are usually
accessed together as if they were a single object. An indication of this relationship is that all these requests
are sent to the proxy server sequentially within a short time interval. Unfortunately, these requests do not arrive
Figure 3: Disk Head Distance. Using the STREAM technique, the disk receives a stream of write requests to contiguous blocks, interrupted only by read requests, which cause the ping-pong effect. In Part (a), without LAZY-READS, three read requests are responsible for six long head movements. In Part (b), with LAZY-READS, all three read requests are serviced together, with only four disk head movements. (Axes: time vs. head position.)
sequentially to the proxy server; instead, they arrive interleaved with requests from several other clients. Therefore,
web objects requested contiguously by a single client, may be serviced and stored in the proxy's disk sub-system
interleaved with web objects requested from totally unrelated clients. This interleaved disk layout may result in
significant performance degradation because future accesses to each one of the related web objects may require a
separate disk head movement. To make matters worse, this interleaving may result in serious disk fragmentation:
when the related objects are evicted from the disk, they will leave behind a set of small, non-contiguous portions
To recover the lost locality, we augmented the STREAM and LAZY-READS techniques with an extra level of
buffers (called locality buffers) between the proxy server and the file system. Each locality buffer is associated with
a web server, and accumulates objects that originate from that web server. Instead of immediately writing each
web object in the next available disk block, the proxy server places the object into the locality buffer associated
with its origin server. If no such locality buffer can be found, the proxy empties one of the locality buffers by
flushing it to the disk, and creates a new association between the newly freed locality buffer and the object's web
server. When a buffer fills-up, or is evicted, its contents are sent to the disk and are probably written in contiguous
disk locations. Figure 4 outlines the operation of a proxy server augmented with locality buffers. The figure shows
three clients, each requesting a different stream of web objects (A1-A3, B1-B2, and C1-C2). The requests arrive
interleaved at the proxy server, which will forward them over the Internet to the appropriate web servers. Without
locality buffers, the responses will be serviced and stored to the disk in an interleaved fashion. The introduction of
the locality buffers groups the requested web objects according to their origin web server and stores the groups to
the disk as contiguously as possible, reducing fragmentation and interleaving. Future read operations will benefit
from the reduced interleaving through the use of prefetching techniques that will read multiple related URLs with
a single disk I/O. Even future write operations will benefit from the reduced fragmentation, since they will be able
to write more data on the disk in a contiguous fashion.
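The locality-buffer front end can be sketched as follows (buffer sizes, the eviction policy and the stream_write() helper are assumptions made for the illustration):

    #include <string.h>

    #define NBUFFERS 128
    #define BUFSIZE  (64 * 1024)

    struct locality_buffer {
        char server[256];     /* web server this buffer is associated with */
        char data[BUFSIZE];
        int  used;
    };

    static struct locality_buffer lb[NBUFFERS];

    void stream_write(const char *data, int len);   /* assumed: appends to the log */

    static void flush(struct locality_buffer *b)
    {
        if (b->used > 0)
            stream_write(b->data, b->used);         /* one contiguous disk write */
        b->used = 0;
    }

    void locality_write(const char *server, const char *obj, int len)
    {
        static int next_victim = 0;
        int i;

        if (len > BUFSIZE) {             /* oversized object: bypass the buffers */
            stream_write(obj, len);
            return;
        }
        for (i = 0; i < NBUFFERS; i++)   /* find the buffer of this web server   */
            if (strcmp(lb[i].server, server) == 0)
                break;
        if (i == NBUFFERS) {             /* none: evict a buffer and reassign it */
            i = next_victim;
            next_victim = (next_victim + 1) % NBUFFERS;
            flush(&lb[i]);
            strncpy(lb[i].server, server, sizeof lb[i].server - 1);
            lb[i].server[sizeof lb[i].server - 1] = '\0';
        }
        if (lb[i].used + len > BUFSIZE)  /* buffer full: flush it first */
            flush(&lb[i]);
        memcpy(lb[i].data + lb[i].used, obj, len);
        lb[i].used += len;
    }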
3 Simulation-based Evaluation
We evaluate the disk I/O performance of web proxies using a combination of simulation and experimental
evaluation. In the simulation study, we model a web proxy with a storage system organized as a two level cache:
a main-memory and a disk cache. Using traces obtained from busy web proxies, we drive the two-level cache
simulator, which in turn generates the necessary secondary storage requests. These requests are translated into file
system operations and are sent to a Solaris UFS file system and from it, to an actual magnetic disk. Therefore, in
our methodology we use simulation to identify cache read, write, and delete operations, which are then sent to a
real storage system in order to accurately measure their execution time. Thus, our methodology combines the ease
Figure 4: Streaming into Locality Buffers. The sequences of client requests (A1-A3, B1-B2, C1-C2) arrive interleaved at the proxy server. The proxy server groups the requested objects in the stream into the available locality buffers and writes the rearranged stream to the disk. For the sake of simplicity we have omitted the forwarding of the client requests over the Internet to the appropriate web servers, and the web servers' responses.
Figure 5: Evaluation Methodology. Traces from DEC's web proxy are fed into a 512-Mbyte main memory LRU cache simulator. URLs that miss the main memory cache are fed into a 2-Gbyte disk LRU cache simulator. URLs that miss this second-level cache are assumed to be fetched from the Internet. These misses generate URL-write requests, because once the proxy fetches the URL from the Internet it saves it on the disk. Second-level URL hits generate URL-read requests, since they read the contents of the URL from the disk. To make space for the newly arrived URLs, the LRU replacement policy deletes non-recently accessed URLs, resulting in URL-delete requests. All URL-write, URL-read, and URL-delete requests are fed into a file space simulator which maps URLs into files (or blocks within files) and sends the appropriate calls (lseek, read, mmap/write) to the file system.
Request Type          Number of Requests    Percentage
URL-read                          42,085            4%
URL-write                        678,040           64%
Main Memory Hits                 336,081           32%
Total                          1,058,206          100%

Table 1: Trace Statistics.
of trace-driven simulation with the accuracy of experimental evaluation.
3.1
Our traces come from a busy web proxy located at Digital Equipment Corporation
(ftp://ftp.digital.com/pub/DEC/traces/proxy/webtraces.html). We feed these traces to a
512 Mbyte-large first-level main memory cache simulator that uses the LRU replacement policy 2 . URL requests
that hit in the main memory cache do not need to access the second-level (disk) cache. The remaining URL
requests are fed into a second-level cache simulator, whose purpose is to generate a trace of URL-level disk
requests: URL-read, URL-write, and URL-delete. URL-read requests are generated as a result of a second-level
cache hit. Misses in the second-level cache are assumed to contact the appropriate web server over the Internet, and
save the server's response in the disk generating a URL-write request. Finally, URL-delete requests are generated
when the secondary storage runs out of space, and an LRU replacement algorithm is invoked to delete unused
URLs.
The generated trace of URL-read, URL-delete, and URL-write requests is sent to a file-space management
simulator which forwards them to a Solaris UFS file system that reads, deletes, and writes the contents of URLs
as requested. The file-space management simulator implements several secondary storage management policies,
ranging from a simple "one URL per file" (SQUID-like), to STREAM, LAZY-READS, and LOCALITY-BUFFERS.
The Solaris UFS system runs on top of an ULTRA-1 workstation running Solaris 5.6, equipped with a Seagate
ST15150WC 4-Gbyte disk with 9 millisecond average latency, 7200 rotations per minute, on which we measured
a maximum write throughput of 4.7 Mbytes per second. Figure 5 summarizes our methodology.
In all our experiments, we feed the simulator pipeline with a trace of one million URL read, write, and
delete requests that are generated by 1,058,206 URL-get requests. Table 1 summarizes the trace statistics. The
performance metric we use is the total completion time needed to serve all one million requests. This time is
inversely proportional to the system's throughput (operations per second) and thus is a direct measure of it. If, for
example, the completion time reported is 2,000 seconds, then the throughput of the system is 1,058,206/2,000, i.e. roughly 530 URL-get requests per second.
Another commonly used performance metric of web proxies, especially important for the end user, is the
service latency of each URL request. However, latency (by itself) can be a misleading performance metric for our
work because significant relative differences in latency can be unnoticeable to the end user, if they amount to a
small fraction of a second. For example, a proxy that achieves 30 millisecond average request latency may appear twice as good as a proxy that achieves 60 millisecond average request latency, but the end user will not perceive
any difference. We advocate that, as long as the latency remains within unperceivable time ranges, the proxy's
throughput is a more accurate measure of the system's performance.
3.2 Evaluation
We start our experiments by investigating the performance cost of previous approaches that store one URL per file
and comparing them with our proposed BUDDY that stores several URLs per file, grouped according to their size.
We consider three such approaches, SINGLE-DIRECTORY, SQUID, and MULTIPLE-DIRS:
• SINGLE-DIRECTORY, as the name implies, uses a single directory to store all the URL files.
• SQUID (used by the SQUID proxy server) uses a two-level directory structure. The first level contains 16 directories (named 0-F), each of which contains 256 sub-directories (named 00-FF). Files are stored in the second-level directories in a round-robin manner.
Although more sophisticated policies than LRU have been proposed they do not influence our results noticeably.
Figure 6: File Management Overhead for Web Proxies. The figure plots the overhead of performing 300,000 URL-read/URL-write/URL-delete operations that were generated by 398,034 URL-get requests for a 1-Gbyte disk. It is clear that BUDDY improves the performance considerably compared to all other approaches. (Axes: number of URL requests vs. completion time in hours; curves include SQUID, MULTIPLE-DIRS, SINGLE-DIRECTORY, and BUDDY.)
• MULTIPLE-DIRS creates one directory per server: all URLs that correspond to the same server are stored in the same directory.
Our experimental results confirm that BUDDY improves performance by an order of magnitude compared to
previous approaches. Indeed, as Figure 6 shows, BUDDY takes forty minutes to serve 300,000 URL requests, while
the other approaches require from six to ten times more time to serve the same stream of URL requests. BUDDY is
able to achieve such an impressive performance improvement because it does not create and delete files for URLs
smaller than a predefined threshold. Choosing an appropriate threshold value can be important to the performance
of BUDDY. A small threshold will result in frequent file create and delete operations, while a large threshold will
require a large number of BUDDY files that may increase the complexity of their management.
Figure
7 plots the completion time as a function of the threshold under the BUDDY management policy. We see
that as the threshold increases, the completion time of BUDDY improves quickly, because an increasing number of
URLs are stored in the same file, eliminating a significant number of file create and delete operations. When the
threshold reaches 256 blocks (i.e. 128 Kbytes), we get (almost) the best performance. Further increases do not
improve performance noticeably. URLs larger than 128 Kbytes should be given a file of their own. Such URLs
are rare and large, so that the file creation/deletion overhead is not noticeable.
3.3 Optimizing Write Throughput
Although BUDDY improves performance by an order of magnitude compared to traditional SQUID-like approaches,
it still suffers from significant overhead because it writes data into several different files, requiring (potentially
long) disk seek operations. Indeed, a snapshot of the disk head movements (shown in Figure 8 taken with TazTool
[9]) reveals that the disk head traverses large distances to serve the proxy's write requests. We can easily see that
the head moves frequently within a region that spans both the beginning of the disk (upper portion of the figure)
and the end of the disk (lower portion of the figure). Despite the clustering that seems to appear at the lower
quarter of the disk and could possibly indicate some locality of accesses, the lower portion of the graph that plots
the average and maximum disk head distance, indicates frequent and long head movements.
To eliminate the long head movements incurred by write operations in distant locations, STREAM stores all
URLs in a single file and writes data to the file as contiguously as possible, much like log-structured file systems
Figure 7: Performance of BUDDY as a function of threshold. The figure plots the completion time (in minutes) as a function of BUDDY's threshold parameter (in Kbytes). The results suggest that URLs smaller than 64-128 Kbytes should be "buddied" together. URLs larger than that limit can be given a file of their own (one URL per file) without any noticeable performance penalty.
do. Indeed, a snapshot of the disk head movements (Figure 9) shows that STREAM accesses data on the disk mostly
sequentially. The few scattered accesses (dots) that appear on the snapshot, are not frequent enough to undermine
the sequential nature of the accesses.
Although STREAM obviously achieves minimal disk head movements, this usually comes at the cost of extra
disk space. Actually, to facilitate long sequential write operations, both log-structured file systems and STREAM
never operate with a (nearly) full disk. It is not surprising for log-structured file systems to operate at a disk
utilization factor of 60% or even less [34], because low disk utilization increases the clustering of free space, and
allows more efficient sequential write operations 3 . Fortunately, our experiments (shown in Figure 10) suggest that
STREAM can operate very efficiently even at 70% disk utilization, outperforming BUDDY by more than a factor
of two. As expected, when the disk utilization is high (90%-95%), the performance of BUDDY and STREAM are
comparable. However, when the disk utilization decreases, the performance of STREAM improves rapidly.
When we first evaluated the performance of STREAM, we noticed that even when there was always free disk
space available and even in the absence of read operations, STREAM did not write to disk at maximum throughput.
We traced the problem and found that we were experiencing a small-write performance problem: writing a small
amount of data to the file system, usually resulted in both a disk-read and a disk-write operation. The reason for
this peculiar behavior is the following: if a process writes a small amount of data in a file, the operating system
will read the corresponding page from the disk (if it is not already in the main memory), perform the write in the
main memory page, and then, at a later time, write the entire updated page to the disk.
To reduce these unnecessary read operations incurred by small writes, we developed a packetized version of
STREAM, STREAM-PACKETIZER, that works just like STREAM with the following difference:
URL-write operations are not forwarded directly to the file system - instead they are accumulated into
a page-boundary-aligned one-page-long packetizer buffer, as long as they are stored contiguously to
the previous URL-write request. Once the packetizer fills up, or if the current request is not contiguous
to the previous one, the packetizer is sent to the file system to be written to the disk.
3 Fortunately, recent measurements suggest that most file systems are about half-full on the average [11], and thus, log-structured approaches
for file management may be more attractive than ever, especially at the embarrassingly decreasing cost of disk space [10].
Figure 8: Disk Access Pattern of BUDDY. This snapshot was taken with TazTool, a disk head position plotting tool. (Axes: time vs. disk head position; the lower band plots the average and maximum head-movement distance.)
Figure 9: Disk Access Pattern of STREAM. This snapshot was taken with TazTool, a disk head position plotting tool. (Axes: time vs. disk head position; the lower band plots the average and maximum head-movement distance.)
Figure 10: Performance of BUDDY and STREAM as a function of disk (space) utilization. The figure plots the completion time for serving 1,000,000 URL operations as a function of disk utilization. As expected, the performance of BUDDY is unaffected by the disk utilization, while the performance of STREAM improves as disk utilization decreases. When the disk utilization is around 70%, STREAM outperforms BUDDY by more than a factor of two.
Figure 11: Performance of STREAM and STREAM-PACKETIZER as a function of disk (space) utilization. The figure plots the completion time for serving 1,000,000 URL operations as a function of disk utilization. STREAM-PACKETIZER consistently outperforms STREAM, by as much as 20% for low disk utilizations.
In this way, instead of sending a large number of small sequential write operations to the file system (as STREAM does), STREAM-PACKETIZER sends fewer and larger (page-sized) write operations to the file system. Figure 11 plots the performance of STREAM and STREAM-PACKETIZER as a function of disk utilization. STREAM-PACKETIZER performs consistently better than STREAM, by as much as 20% when disk utilization is low, serving one million requests in less than three thousand seconds and achieving a service rate of close to 350 URL-get operations per second.
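The packetizer logic described earlier in this section can be sketched as follows (PAGE and file_write_at() are assumptions; the real implementation keeps the buffer page-boundary aligned on top of the UFS file):

    #include <string.h>

    #define PAGE 8192

    static char page_buf[PAGE];
    static long page_base = 0;      /* file offset of the first buffered byte */
    static long page_fill = 0;      /* number of bytes currently buffered     */

    void file_write_at(long off, const void *buf, long len);   /* assumed helper */

    static void packetizer_flush(void)
    {
        if (page_fill > 0)
            file_write_at(page_base, page_buf, page_fill);
        page_fill = 0;
    }

    /* Called for every URL-write; 'off' is the slot offset chosen by STREAM.
       Contiguous small writes are merged into page-sized writes, so the file
       system never has to read a page just to update a few bytes of it. */
    void packetized_write(long off, const char *data, long len)
    {
        if (page_fill > 0 && off != page_base + page_fill)
            packetizer_flush();                 /* not contiguous: flush first */
        while (len > 0) {
            long room, n;
            if (page_fill == 0)
                page_base = off;
            room = PAGE - page_fill;
            n = (len < room) ? len : room;
            memcpy(page_buf + page_fill, data, n);
            page_fill += n;
            off += n; data += n; len -= n;
            if (page_fill == PAGE)
                packetizer_flush();             /* a full page goes to the disk */
        }
    }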
3.4 Improving Read Requests
While STREAM improves the performance of URL-write operations, URL-read operations still suffer from seek
and rotational latency overhead. As a first step towards improving the performance of read operations, LAZY-READS reduces this overhead by clustering several read operations and sending them to the disk together. This grouping of read requests not only reduces the disk head ping-pong effect, but also presents the system with better opportunities for disk head scheduling. Figure 12 shows that LAZY-READS consistently improves the performance
over STREAM-PACKETIZER by about 10% 4 . Although a 10% performance improvement may not seem impressive
at first glance, we believe that the importance of LAZY-READS will increase in the near future. In our experimental
environment, read requests represent a small percentage (a little over 6%) of the total disk operations. Therefore,
4 The careful reader will notice, however, that LAZY-READS may increase operation latency. We advocate that such an increase will be unnoticeable by end-users. Our trace measurements show that STREAM-PACKETIZER augmented with LAZY-READS is able to serve 10-20 read requests per second in addition to the write requests. Thus LAZY-READS will delay the average read operation only by a fraction of a second. Given that the average web server latency may be several seconds long [2], LAZY-READS imposes an unnoticeable overhead. To make sure that no user ever waits an unbounded amount of time to read a URL from the disk, even in an unloaded system, LAZY-READS can also be augmented with a timeout period. If the timeout elapses, then all the outstanding read operations are sent to the disk.
Figure 12: Performance of LAZY-READS. The figure plots the completion time for serving 1,000,000 URL operations as a function of 2-Gbyte disk utilization. LAZY-READS gathers read requests ten at a time and issues them to the disk together, reducing the disk head movements between the write stream and the data read. The figure shows that LAZY-READS improves the performance of STREAM-PACKETIZER by 10%. (Curves: STREAM-PACKETIZER, LAZY-READS.)
even significant improvements in the performance of read requests will not necessarily yield significant overall
performance gains. With the increasing size of web caches, the expected hit rates will probably increase, and the
percentage of disk read operations will become comparable to (if not higher than) the percentage of disk write
operations. In this case, optimizing disk read performance through LAZY-READS or other similar techniques will
be increasingly important.
3.5 Preserving the Locality of the URL stream
3.5.1 The Effects of Locality Buffers on Disk Access Patterns
To improve the performance of disk I/O even further, our locality buffer policies (LAZY-READS-LOC and STREAM-LOC) improve the disk layout by grouping requests according to their origin web server before storing them to the
disk. Without locality buffers, the available disk space tends to be fragmented and spread all over the disk. With
locality buffers, the available disk space tends to be clustered in one large empty portion of the disk. Indeed, the
two-dimensional disk block map in Figure 13(b), shows the available free space as a long white stripe. On the
contrary, in the absence of locality buffers, free space tends to be littered with used blocks shown as black dots
Figure
13(a)). Even when we magnify a mostly allocated portion of the disk (Figure 13 (a) and (b) right), small
white flakes begin to appear within the mostly black areas, corresponding to small amounts of free disk space
within large portions of allocated space. We see that locality buffers are able to cluster the white (free) space more
effectively into sizable and square white patches, while in the absence of locality buffers, the free space if clustered
into small and narrow white bands.
Figure
14 confirms that locality buffers result in better clustering of free space, plotting the average size of
Table 2: Performance of traditional and Web-Conscious Storage Management techniques (in URL-get operations per second). (Columns: Algorithm; Performance in URL-get operations per second.)
chunks of contiguous free space as a function of time. After the warm-up period (about 300 thousand requests), locality buffers manage to sustain an average free chunk size of about 145 Kbytes. On the contrary, the absence of locality buffers (STREAM) exhibits a fluctuating behavior with a substantially smaller average free chunk size.
Locality buffers not only cluster the free space more effectively, they also populate the allocated space with
clusters of related documents by gathering URLs originating from each web server into the same locality buffer,
and in (probably) contiguous disk blocks. Thus, future read requests to these related web documents will probably
access nearby disk locations. To quantify the effectiveness of this related object clustering, we measure the
distance (in file blocks) between successive disk read requests. Our measurements suggest that when using locality
buffers, a larger fraction of read requests access nearby disk locations. Actually, as many as 1,885 read requests refer to the disk block immediately following their previous read request, compared to only 611 read requests in the
absence of locality buffers (as can be seen from Figure 15). Furthermore, locality buffers improve the clustering
of disk read requests significantly: as many as 8,400 (17% of total) read requests fall within ten blocks of their
previous read request, compared to only 3,200 (6% of total) read requests that fall within the same range for
STREAM-PACKETIZER. We expect that this improved clustering of read requests that we observed, will eventually
lead to performance improvements, possibly through the use of prefetching.
3.5.2 Performance Evaluation of LOCALITY BUFFERS
Given the improved disk layout of LOCALITY BUFFERS, we expect the performance of LAZY-READS-LOC to be superior to that of LAZY-READS and of STREAM-PACKETIZER. In our experiments we vary the number of
locality buffers from 8 up to 128; each locality buffer is 64-Kbytes large. Figure 16 shows that as few as eight
locality buffers (LAZY-READS-LOC-8) are sufficient to improve performance over LAZY-READS between 5% and
20%, depending on the disk utilization. However, as the number of locality buffers increases, the performance
advantage of LAZY-READS-LOC increases even further. Actually, at 76% disk utilization, LAZY-READS-LOC with
128 locality buffers performs 2.5 times better than both LAZY-READS and STREAM-PACKETIZER.
We summarize our performance results in Table 2 presenting the best achieved performance (measured in
URL-get operations per second) for each of the studied techniques. We see that the Web-Conscious Storage
Management techniques improve performance by more than an order of magnitude, serving close to 500 URL-get
operations per second, on a single-disk system. Actually, in our experimental environment, there is little room
for any further improvement. LAZY-READS-LOC-128 transfers 7.6 Gbytes of data (both to and from secondary
storage) in 2,020 seconds, which corresponds to a sustained throughput of 3.7 Mbytes per second. Given that
the disk used in our experiments can sustain a maximum write throughput of 4.7 Mbytes per second, we see
that our WebCoSM techniques achieve up to 78% of the maximum (and practically unreachable) upper limit in
performance. Therefore, any additional, more sophisticated techniques are not expected to result in significant
performance improvements (at least in our experimental environment).
Figure 13: Disk Fragmentation Map. The Figure plots a two-dimensional disk block allocation map at the end of our simulations for STREAM (a) and LOCALITY BUFFERS (b). We plot allocated blocks with black dots and free blocks with white dots. The beginning of the disk is plotted at the lower left corner, the rest of the disk is plotted following a column-major order, and finally, the end of the disk is plotted at the top right corner.
Figure 14: Average size of free disk blocks. The Figure plots the average size (in Kbytes) of chunks of contiguous free space as a function of time (in thousands of URL requests), with and without locality buffers.
Figure 15: Distribution of distances for Read Requests. The Figure plots the histogram of the block distances between successive read operations for STREAM-PACKETIZER and STREAM-LOC (using 128 Locality Buffers).
Figure 16: Performance of LAZY-READS-LOC. The figure plots the completion time for serving 1,000,000 URL operations as a function of disk space utilization for STREAM-PACKETIZER, LAZY-READS, and LAZY-READS-LOC. LAZY-READS-LOC attempts to put URLs from the same server in nearby disk locations by clustering them in locality buffers before sending them to the disk.
4 Implementation
To validate our Web-Conscious Storage Management approach, we implemented a lightweight user-level web
proxy server called Foxy. Our goal in developing Foxy was to show that WebCoSM management techniques
(i) can be easily implemented, (ii) can provide significant performance improvements, and (iii) require neither
extensive tuning nor user involvement.
Foxy consists of no more than 6,000 lines of C code, and implements the basic functionality of an HTTP web
proxy. For simplicity and rapid prototyping, Foxy implements the HTTP protocol but not the not-so-frequently
used protocols like FTP, ICP, etc. Foxy, by being first and foremost a caching proxy, uses a two level caching
approach. The proxy's main memory holds the most frequently accessed documents, while the rest reside in the
local disk. To provide an effective and transparent main memory cache, Foxy capitalizes on the existing file buffer
cache of the operating system.
The secondary-storage management of Foxy stores all URLs in a single file, according to the STREAM-PACKETIZER policy: URLs are contiguously appended to the disk. When the disk utilization reaches a high
watermark, a cache replacement daemon is invoked to reduce the utilization below a low watermark. The
replacement policy used is LRU-TH [1], which replaces the least recently used documents from the cache. In order
to prevent large documents from filling up the cache, LRU-TH does not cache documents larger than a prespecified
threshold. Foxy was developed on top of Solaris 5.7, and has also been tested on top of Linux 2.2.
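The following sketch illustrates the admission and replacement behavior just described: an LRU list of cached objects, an LRU-TH-style size threshold on admission, and eviction from the LRU tail once a high watermark is crossed, until a low watermark is reached. The thresholds, names, and the in-memory-only bookkeeping are illustrative assumptions, not the Foxy or LRU-TH [1] source.

```c
/* Sketch of LRU-TH-style admission plus watermark-driven eviction
 * (in-memory bookkeeping only; illustrative, not the Foxy implementation). */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define CACHE_CAPACITY (8L * 1024 * 1024)   /* bytes of cache space managed */
#define SIZE_THRESHOLD (256L * 1024)        /* LRU-TH: don't cache larger   */
#define HIGH_WATERMARK 0.90                 /* start eviction above this    */
#define LOW_WATERMARK  0.75                 /* evict down to this           */

struct entry {
    char url[256];
    long size;
    struct entry *prev, *next;              /* doubly linked LRU list       */
};

static struct entry *head, *tail;           /* head = most recently used    */
static long used_bytes;

static void unlink_entry(struct entry *e)
{
    if (e->prev) e->prev->next = e->next; else head = e->next;
    if (e->next) e->next->prev = e->prev; else tail = e->prev;
}

static void push_front(struct entry *e)
{
    e->prev = NULL; e->next = head;
    if (head) head->prev = e; else tail = e;
    head = e;
}

static void evict_to_low_watermark(void)
{
    while (tail && used_bytes > LOW_WATERMARK * CACHE_CAPACITY) {
        struct entry *victim = tail;        /* least recently used          */
        unlink_entry(victim);
        used_bytes -= victim->size;
        printf("evict %s (%ld bytes)\n", victim->url, victim->size);
        free(victim);
    }
}

/* Admission: called after an object has been fetched from the origin server. */
void cache_admit(const char *url, long size)
{
    struct entry *e;

    if (size > SIZE_THRESHOLD)              /* LRU-TH admission test        */
        return;
    e = calloc(1, sizeof *e);
    if (!e) return;
    snprintf(e->url, sizeof e->url, "%s", url);
    e->size = size;
    push_front(e);
    used_bytes += size;
    if (used_bytes > HIGH_WATERMARK * CACHE_CAPACITY)
        evict_to_low_watermark();           /* the "replacement daemon"     */
}

int main(void)
{
    cache_admit("http://www.example.com/a.html", 12000);
    cache_admit("http://www.example.com/big.iso", 700L * 1024 * 1024); /* rejected */
    cache_admit("http://www.example.org/b.gif", 30000);
    printf("cache uses %ld bytes\n", used_bytes);
    return 0;
}
```

A cache hit would additionally move the entry back to the front of the LRU list, which is omitted here.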
4.1 Design of the Foxy Web Proxy
The design and architecture of the Foxy Web Cache follows the sequence of operations the proxy must perform in
order to serve a user request (mentioned here also as "proxy request" or "proxy connection"). Each user request
is decomposed into a sequence of synchronous states that form a finite state machine (FSM) shown in Figure 17.
Based on this FSM diagram, Foxy functions as a pipelined scheduler in an infinite loop. At every step of the loop,
the scheduler selects a ready proxy request, performs the necessary operations, and advances the request into the
next state.
The rectangles in Figure 17 represent the various states that comprise the processing of a request, while
the clouds represent important asynchronous actions taking place between two states. Finally, lines represent
transitions between states and are annotated to indicate the operations that must be completed in order for the transition to occur.
Figure 17: The Foxy Processing Finite State Machine Diagram.
Number of Clients: 100
Number of Servers: 4
Server Response Time: 1 second
Document Hit Rate: 40%
Request Distribution: zipf(0.6)
Document Size Distribution: exp(5 Kbytes)
Cacheable Objects: 100%
Table 3: The WebPolygraph Parameters Used in our Experiments.
Every request begins at "STATE 1", where the HTTP request is read and parsed. Then, Foxy searches
its index for the requested object. The index contains the metadata of the objects stored in Foxy's cache
(e.g. name and size of each object, its storage location, the object's retrieval time, etc.). If the search for the
requested object is successful, Foxy issues a read request to the cache. When the object is read from the cache, a
transition to "STATE 5" is made, where the object is returned to the client. After a successful transfer, the client
TCP/IP connection is closed in "STATE 6", and the processing of the HTTP request is completed. If the search
for the requested object is unsuccessful, Foxy, in "STATE 2", performs a DNS lookup for the IP address of the
remote Web server and then opens a TCP/IP connection to it. When the TCP/IP connection is established, Foxy,
in "STATE 3", sends an HTTP request to the origin web server. The server response is parsed in "STATE 4", and
Foxy receives the object's content over the (possibly slow) Internet connection. Foxy passes the object through to
the requesting client as it receives it from the remote server, until all the object's contents are transferred. Then,
Foxy uses a cache admission policy (LRU-TH) to decide whether this object should be cached, and if so,
the object's content is stored in the (disk) cache using the STREAM-PACKETIZER algorithm, and its corresponding
metadata are stored in the index. Finally, the TCP connection is closed in "STATE 6", and the processing of the
HTTP request is completed.
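A minimal rendering of this processing loop as code might look as follows; the state names mirror the walkthrough above, but the dispatch structure, readiness handling, and stubbed-out transitions are placeholders rather than the actual Foxy scheduler.

```c
/* Sketch of an FSM-driven request scheduler in the style described above
 * (state names follow the text; transitions are illustrative stubs). */
#include <stdio.h>

enum state {
    S1_PARSE_REQUEST,       /* read and parse the HTTP request            */
    S2_CONNECT_SERVER,      /* DNS lookup + TCP connect to origin server  */
    S3_SEND_REQUEST,        /* forward the HTTP request upstream          */
    S4_RECEIVE_OBJECT,      /* parse response, relay object to the client */
    S5_SEND_FROM_CACHE,     /* return a cached object to the client       */
    S6_CLOSE,               /* close the client TCP connection            */
    S_DONE
};

struct request {
    int id;
    enum state st;
    int in_cache;           /* result of the index lookup                 */
};

static enum state step(const struct request *r)
{
    switch (r->st) {
    case S1_PARSE_REQUEST:   return r->in_cache ? S5_SEND_FROM_CACHE
                                                : S2_CONNECT_SERVER;
    case S2_CONNECT_SERVER:  return S3_SEND_REQUEST;
    case S3_SEND_REQUEST:    return S4_RECEIVE_OBJECT;
    case S4_RECEIVE_OBJECT:  return S6_CLOSE;  /* object relayed, maybe cached */
    case S5_SEND_FROM_CACHE: return S6_CLOSE;
    case S6_CLOSE:           return S_DONE;
    default:                 return S_DONE;
    }
}

int main(void)
{
    struct request reqs[2] = {
        { .id = 1, .st = S1_PARSE_REQUEST, .in_cache = 1 },   /* cache hit  */
        { .id = 2, .st = S1_PARSE_REQUEST, .in_cache = 0 },   /* cache miss */
    };
    int pending = 2;

    /* Pipelined scheduler: pick a ready request, advance it one state. */
    while (pending > 0) {
        for (int i = 0; i < 2; i++) {
            if (reqs[i].st == S_DONE) continue;
            enum state next = step(&reqs[i]);
            printf("request %d: state %d -> %d\n", reqs[i].id, reqs[i].st, next);
            reqs[i].st = next;
            if (next == S_DONE) pending--;
        }
    }
    return 0;
}
```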
4.2 Experiments
To measure the performance of Foxy and compare it to that of SQUID, we used the Web Polygraph proxy
performance benchmark (version 2.2.9), a de facto industry standard for benchmarking proxy servers [33]. We
configured Web Polygraph with 100 clients and four web servers each responding with a simulated response time
of one second. Table 3 summarizes the Web Polygraph configuration used in our experiments. We ran Web
Polygraph on a SUN Ultra 4500 machine with four UltraSparc-2 processors at 400 MHz. The machine runs Solaris
5.7, and is equipped with a Seagate ST318203FSUN1-8G, 18-Gbyte disk. The secondary storage size that both
SQUID and Foxy use is 8 Gbytes. We used the 2.2.STABLE4 version of SQUID, and we configured it according to
the settings used in the Second Polygraph Bake-off [36]. To reduce the effects of a possibly slow network, we run
all processes on the same computer.
The results from our experiments are presented in Figures 18, 19, and 20. Figure 18 plots the throughput of
SQUID and Foxy as a function of the client demand (input load), which ranges from 40 to 350 requests per second.
We see that for small input load (less than 80 requests per second), the throughput of both proxies increases linearly.
However, the throughput of SQUID quickly levels-off and decreases at 90 requests per second, while Foxy sustains
the linear increase up to 340 requests per second, giving a factor of four improvement over SQUID.
The deficiencies of SQUID are even more pronounced in Figure 19, which plots the average response time
achieved by the two proxies as a function of the input load. We can easily see that SQUID's latency increases
exponentially for input load higher than 50 requests per second. Thus, although Figure 18 suggests that SQUID can
achieve a throughput of 90 requests per second, this throughput comes at a steep increase of the request latency as
seen by the end user (almost 8 seconds). The maximum throughput that SQUID can sustain without introducing a
noticeable increase in latency is around 50 requests per second. On the contrary, Foxy manages to serve more than
340 requests per second without any noticeable increase in the latency. In fact, Foxy can serve up to 340 requests
per second, with a user latency of about 0.7 seconds. Therefore, for the acceptable (sub-second) latency ranges,
Foxy achieves almost 7 times higher throughput than SQUID.
To make matters worse, SQUID not only increases the perceived end-user latency, but it also increases the
network traffic required. In fact, when the disk sub-system becomes overloaded, SQUID, in an effort to off-load
the disk, may forward URL requests to the origin server, even if it has a local copy in its disk cache. This behavior
Figure 18: Throughput of SQUID and FOXY. The figure plots the observed throughput (in requests/second) as a function of the input load (in requests/second).
Figure 19: Latency of SQUID and FOXY. The figure plots the average response time per request (in seconds) as a function of the input load (in requests/second).
Figure 20: Network Bandwidth Requested by SQUID and FOXY. The figure plots the network bandwidth (in KBytes/sec) as a function of the input load (in requests/second).
may increase network traffic significantly. As Figure 20 shows, when the input load is less than 50 requests per
second, both SQUID and Foxy require the same network bandwidth. When the input load increases beyond 50
requests per second, the network bandwidth required by Foxy increases linearly with the input load as expected.
However, the network bandwidth required by SQUID increases at a higher rate, inducing 45% more network traffic
than Foxy at 110 URL requests per second.
5 Previous Work
Caching is being extensively used on the web. Most web browsers cache documents in main memory or in local
disk. Although this is the most widely used form of web caching, it rarely results in high hit rates, since the
browser caches are typically small and are accessed by a single user only [1]. To further improve the effectiveness
of caching, proxies are being widely used at strategic places of the Internet [8, 42]. On behalf of their users,
caching proxies request URLs from web servers, store them in a local disk, and serve future requests from their
cache whenever possible. Since even large caches may eventually fill up, cache replacement policies have been the
subject of intensive research [1, 7, 23, 26, 32, 37, 43, 44]. Most of the proposed caching mechanisms improve the
user experience, reduce the overall traffic, and protect network servers from traffic surges [15]. To improve hit
rates even further and enhance the overall user experience, some proxies may even employ intelligent prefetching
methods [4, 12, 16, 41, 31, 40].
As the Internet traffic grew larger, it was realized that Internet servers were bottlenecked by their disk subsystem.
Fritchie found that USENET news servers spend a significant amount of time storing news articles in files "one
file per article" [14]. To reduce this overhead he proposes to store several articles per file and to manage each
file as a cyclic buffer. His implementation shows that storing several news articles per file results in significant
performance improvement.
Much like news servers, web proxies also spend a significant percentage of their time performing disk I/O.
Rousskov and Soloviev observed that disk delays contribute as much as 30% towards total hit response time [35].
Mogul suggests that disk I/O overhead of disk caching turns out to be much higher than the latency improvement
from cache hits [28]. Thus, to save the disk I/O overhead the proxy is typically run in its non-caching mode [28].
To reduce disk I/O overhead, Soloviev and Yahin suggest that proxies should have several disks [39] in order
to distribute the load among them, and that each disk should have several partitions in order to localize accesses
to related data. Unfortunately, Almeida and Cao [2] suggest that adding several disks to existing traditional web
proxies usually offers little (if any) performance improvement.
Maltzahn, Richardson and Grunwald [24] measured the performance of web proxies and found that the disk
subsystem is required to perform a large number of requests for each URL accessed and thus it can easily become
the bottleneck. In their subsequent work they propose two methods to reduce disk I/O for web proxies [25]:
- they store URLs of the same origin web server in the same proxy directory (SQUIDL), and
- they use a single file to store all URLs less than 8 Kbytes in size (SQUIDM).
Although our work and [25] share common goals and approaches toward reducing disk meta-data accesses in
web proxies, our work presents a clear contribution towards improving both data and meta-data access overhead:
- We propose and evaluate STREAM and STREAM-PACKETIZER, two file-space management algorithms that
(much like log-structured file systems) optimize write performance by writing data contiguously on the disk.
- We propose and evaluate LAZY-READS and LAZY-READS-LOC, two methods that reduce disk seek overhead
associated with read operations.
The performance results reported both in [25] and this paper, verify that meta-data reduction methods improve
performance significantly. For example, Maltzahn et al. report that on a Digital AlphaStation 250 4/266 with 512
Mbytes of RAM and three magnetic disks, SQUID is able to serve around 50 URL-get requests per second, and
their best performing SQUIDMLA approach is able to serve around 150 requests per second. Similarly, on our Sun
Ultra-1 at 166 MHz with 384 Mbytes RAM and a single magnetic disk, SQUID is able to serve around 27 URL-
get requests per second, while BUDDY, the simplest WebCoSM technique that reduces only meta-data overhead,
achieves around 133 requests per second. Furthermore, the remaining WebCoSM techniques that improve not
only meta-data, but also the data accesses, are able to achieve close to 500 URL-get requests per second.
Most web proxies have been implemented as user-level processes on top of commodity (albeit state-of-the-
art) file-systems. Some other web proxies were built on top of custom-made file systems or operating systems.
NetCache was built on top of WAFL, a file system that improves the performance of write operations [18]. Inktomi's
traffic server uses UNIX raw devices [20]. CacheFlow has developed CacheOS, a special-purpose operating system
for proxies [19]. Similarly, Novell has developed a special-purpose system for storing URLs: the Cache Object
Store [22]. Unfortunately very little information has been published about the details and performance of such
custom-made web proxies, and thus a direct quantitative comparison between our approach and theirs is very
difficult. Although custom-made operating systems and enhanced file-systems can offer significant performance
improvements, we choose to explore the approach of running a web proxy as a user-application on top of a
commodity UNIX-like operating system. We believe that our approach will result in lower software development
and maintenance costs, easier deployment and use, and therefore a quicker and wider acceptance of web proxies.
The main contributions of this article are:
- We study the overheads associated with file I/O in web proxies, investigate their underlying causes, and
propose WebCoSM, a set of new techniques that overcome file I/O limitations.
- We identify locality patterns that exist in web accesses, show that web proxies destroy these patterns, and
propose novel mechanisms that restore and exploit them.
- The applicability of our approach is shown through Foxy, a user-level web proxy implementation on top of
a commercial UNIX operating system.
- Comprehensive performance measurements using both simulation and experimental evaluation show that our
approach can easily achieve an order of magnitude performance improvement over traditional approaches.
6 Discussion
6.1 Reliability Issues
Although SQUID-like policies that store one URL per file do not perform well, they appear to be more robust in case
of a system crash than our WebCoSM approach, because they capitalize on the expensive reliability provided by
the file system. For example, after a system crash, SQUID can scan all the directories and files in its disk cache, and
recreate a complete index of the cached objects. On the contrary, WebCoSM methods store meta-data associated
with each URL in main memory for potentially long periods of time, increasing the cache's vulnerability to system
crashes. For example, STREAM stores all URLs in a single file, which contains no indication of the beginning
block of each URL. This information is stored in main memory, and can be lost in a system crash. Fortunately, the
seemingly lost reliability does not pose any real threat to the system's operation:
- WebCoSM methods can periodically (i.e. every few minutes) write their metadata information to safe
storage, so that in the case of a crash they will only lose the work of the last few minutes. Alternatively, they
can store along with each URL its name, size, and disk blocks; in case of a crash, after the system reboots,
the disk can be scanned and the information about which URLs exist on the disk can be recovered (a sketch of such a self-describing record follows this list).
- Even if a few cached documents are lost due to a crash, they can be easily retrieved from the web server
where they permanently reside. Thus, a system crash does not lose information permanently; it just loses
the local copy of some data (i.e. a few minutes' worth), which can be easily retrieved from the web again.
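The per-URL metadata option mentioned in the first bullet could be as simple as prefixing every object in the cache file with a small self-describing header that a recovery pass can scan. The record layout below is a hypothetical sketch, not the WebCoSM on-disk format (and it ignores byte-order portability for brevity).

```c
/* Sketch: self-describing per-URL record so the cache file can be rescanned
 * after a crash (hypothetical layout, not the WebCoSM on-disk format). */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

#define REC_MAGIC 0x55524C31u   /* "URL1" */

struct rec_header {
    uint32_t magic;             /* marks the start of a record            */
    uint32_t name_len;          /* length of the URL that follows         */
    uint32_t body_len;          /* length of the object body that follows */
};

static void write_record(FILE *f, const char *url, const char *body, uint32_t blen)
{
    struct rec_header h = { REC_MAGIC, (uint32_t)strlen(url), blen };
    fwrite(&h, sizeof h, 1, f);
    fwrite(url, 1, h.name_len, f);
    fwrite(body, 1, blen, f);
}

/* After a reboot: scan the file and rebuild the in-memory index. */
static void rebuild_index(FILE *f)
{
    struct rec_header h;
    char url[1024];

    while (fread(&h, sizeof h, 1, f) == 1 && h.magic == REC_MAGIC &&
           h.name_len < sizeof url) {
        long body_at;
        fread(url, 1, h.name_len, f);
        url[h.name_len] = '\0';
        body_at = ftell(f);
        printf("recovered %s (%u bytes at offset %ld)\n", url, h.body_len, body_at);
        fseek(f, h.body_len, SEEK_CUR);     /* skip the object body */
    }
}

int main(void)
{
    FILE *f = fopen("cache.dat", "w+b");
    if (!f) return 1;
    write_record(f, "http://www.example.com/index.html", "<html>hi</html>", 15);
    write_record(f, "http://www.example.org/logo.gif", "GIF89a...", 9);
    rewind(f);
    rebuild_index(f);
    fclose(f);
    return 0;
}
```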
6.2 Lessons Learned
During the course of this research we were called to understand the intricate behavior of a web proxy system.
Some of the most interesting lessons that we learned include:
- User requests may invoke counter-intuitive operating system actions. For example, we observed that small
write requests in STREAM surprisingly invoked disk read operations. In this case, there is no intuitive and
direct correspondence between what the user requests and what the operating system actually does. This
mismatch between the user requests and the operating system actions not only hurts performance, but also
undermines the user's understanding of the system.
- System bottlenecks may appear in places least expected. It is a popular belief that proxies are bottlenecked
by their network subsystem. On the contrary, we found that the secondary storage management system
is also a major (if not more significant) bottleneck because it has not been designed to operate efficiently
with web workloads. For example, traditional file systems used to serve no more than 10 concurrent users,
requesting no more than 50 Kbytes per second each [3]. On the contrary, a busy web proxy (especially a
country-wide proxy), may be required to serve hundreds of concurrent users requesting data totaling several
Mbytes per second [13].
- Optimizing the performance of read operations will be one of the major factors in the design of secondary
storage management systems of web proxies. Write operations can usually be implemented efficiently and
proceed at disk bandwidth. On the contrary, read operations (even asynchronous ones), involve expensive
disk seek and rotational latencies, which are difficult if not impossible to avoid. As disk bandwidth improves
much faster than disk latency, read operations will become an increasing performance bottleneck.
- Locality can manifest itself even when not expected. We found that clients exhibit a significant amount of
spatial locality, requesting sequences of related URLs. Traditional proxies tend to destroy this locality, by
interleaving requests arriving from different clients. Identifying and exploiting the existing locality in the
URL stream are challenging tasks that should be continuously pursued.
7 Summary - Conclusions
In this paper we study the disk I/O overhead of world-wide web proxy servers. Using a combination of trace-driven
simulation and experimental evaluation, we show that busy web proxies are bottlenecked by their secondary
storage management subsystem. To overcome this limitation, we propose WebCoSM, a set of techniques tailored
for the management of web proxy secondary storage. Based on our experience in designing, implementing, and
evaluating WebCoSM, we conclude:
- The single largest source of overhead in traditional web proxies is the file creation and file deletion cost
associated with storing each URL in a separate file. Relaxing this one-to-one mapping between URLs and
files improves performance by an order of magnitude.
- Web clients exhibit a locality of reference in their accesses because they usually access URLs in clusters. By
interleaving requests from several clients, web proxies tend to conceal this locality. Restoring the locality
in the reference stream results in better layout of URLs on the disk, reduces fragmentation, and improves
performance by at least 30%.
- Managing the mapping between URLs and files at user level improves performance over traditional web
proxies by a factor of 20 overall, leaving little room for improvement by specialized kernel-level implementations.
We believe that our results are significant today and they will be even more significant in the future. As
disk bandwidth has been improving at a much higher rate than disk latency for more than two decades now [10], methods
like WebCoSM that reduce disk head movements and stream data to disk will result in increasingly larger
performance improvements. Furthermore, web-conscious storage management methods will not only result in
better performance, but they will also help to expose areas for further research in discovering and exploiting the
locality in the Web.
Acknowledgments
This work was supported in part by the Institute of Computer Science of Foundation for Research and Technology
-Hellas, in part by the University of Crete through project "File Systems for Web servers" (1200). We deeply
appreciate this financial support.
Panos Tsirigotis was a source of inspiration and of many useful comments. Manolis Marazakis and George
Dramitinos gave us useful comments in earlier versions of this paper. Katia Obraczka provided useful comments
in an earlier version of the paper. P. Cao provided one of the simulators used. We thank them all.
--R
Caching Proxies: Limitations and Potentials.
Measuring Proxy Performance with the Wisconsin Proxy Benchmark.
Measurements of a Distributed File System.
Speculative Data Dissemination and Service to Reduce Server Load
Heuristic Cleaning Algorithms for Log-Structured File Systems
Operating System Benchmarking in the Wake of Lmbench: A Case Study of the Performance of NetBSD on the Intel x86 Architecture.
A Hierarchical Internet Object Cache.
tracing revisited.
Serverless Network File Systems.
A Large-Scale Study of File System Contents
Prefetching Hyperlinks.
The Measured Access Characteristics of World-Wide-Web Client Proxy Caches
The Cyclic News Filesystem: Getting INN To Do More With Less.
The Global Internet Project.
The Zebra Striped Network File System.
File System Design for an NFS File Server Appliance.
Cache Flow Inc.
Inktomi Inc.
Workload Requirements for a Very High-Capacity Proxy Cache Design
Replacement Policies for a Proxy Cache
Performance Issues of Enterprise Level Web Proxies.
Reducing the Disk I/O of Web Proxy Server Caches.
Main Memory Caching of Web Documents.
A Fast File System for UNIX.
Speedier Squid: A Case Study of an Internet Server Performance Problem.
Caching in the Sprite Network File System.
A Trace-Driven Analysis of the UNIX 4.2 BSD File System
Using Predictive Prefetching to Improve World Wide Web Latency.
A Simple
Web Polygraph.
The Design and Implementation of a Log-Structured File System
On Performance of Caching Proxies.
The Second IRCache Web Cache Bake-off
A Case for Delay-Conscious Caching of Web Documents
An Implementation of a Log-Structured File System for UNIX
File Placement in a Web Cache Server.
Defining High Speed Protocols
Fast World-Wide Web Browsing Over Low-Bandwidth Links
Squid Internet Object Cache
Removal Policies in Network Caches for World-Wide Web Documents
Proxy Caching that Estimates Page Load Delays.
--CTR
Abdolreza Abhari , Sivarama P. Dandamudi , Shikharesh Majumdar, Web object-based storage management in proxy caches, Future Generation Computer Systems, v.22 n.1, p.16-31, January 2006 | web performance;web caching;web proxies;secondary storage |
611413 | Multicast-based inference of network-internal delay distributions. | Packet delay greatly influences the overall performance of network applications. It is therefore important to identify causes and locations of delay performance degradation within a network. Existing techniques, largely based on end-to-end delay measurements of unicast traffic, are well suited to monitor and characterize the behavior of particular end-to-end paths. Within these approaches, however, it is not clear how to apportion the variable component of end-to-end delay as queueing delay at each link along a path. Moreover, there are issues of scalability for large networks. In this paper, we show how end-to-end measurements of multicast traffic can be used to infer the packet delay distribution and utilization on each link of a logical multicast tree. The idea, recently introduced in [3] and [4], is to exploit the inherent correlation between multicast observations to infer performance of paths between branch points in a tree spanning a multicast source and its receivers. The method does not depend on cooperation from intervening network elements; because of the bandwidth efficiency of multicast traffic, it is suitable for large-scale measurements of both end-to-end and internal network dynamics. We establish desirable statistical properties of the estimator, namely consistency and asymptotic normality. We evaluate the estimator through simulation and observe that it is robust with respect to moderate violations of the underlying model. | Introduction
Background and Motivation. Monitoring the performance of large communications networks
is essential for diagnosing the causes of performance degradation. There are two broad approaches
to monitoring. In the internal approach, direct measurements are made at or between network
elements, e.g. of packet loss or delay. In the external approach, measurements are made across a
network on end-to-end or edge-to-edge paths.
The internal approach has a number of potential limitations. Due to the commercial sensitivity
of performance measurements, and the potential load incurred by the measurement process, it is
expected that measurement access to network elements will be limited to service providers and,
possibly, selected peers and users. The internal approach assumes sufficient coverage, i.e. that
measurements can be performed at all relevant elements on paths of interest. In practice, not all
elements may possess the required functionality, or it may be disabled at heavily utilized elements
in order to reduce CPU load. On the other hand, arranging for complete coverage of larger networks
raises issues of scale, both in the gathering of measurement data and in joining data collected
from a large number of elements in order to form a composite view of end-to-end performance.
This motivates external approaches, network diagnosis through end-to-end measurements, without
necessarily assuming the cooperation of network elements on the path. There has been much
recent experimental work to understand the phenomenology of end-to-end performance (e.g., see
[3, 9, 19, 26, 27, 29]). Several research efforts are working on the developments of measurement
infrastructure projects (Felix [13], IPMA [15], NIMI [18] and Surveyor [35]) with the aim to collect
and analyze end-to-end measurements across a mesh of paths between a number of hosts.
Standard diagnostic tools for IP networks, ping and traceroute, report round-trip loss and delay,
the latter incrementally along the IP path by manipulating the time-to-live (TTL) field of probe
packets. A recent refinement of this approach, pathchar [17], estimates hop-by-hop link capac-
ities, packet delay and loss rates. pathchar is still under evaluation; initial experience indicates
many packets are required for inference leading to either high load of measurement traffic or long
measurement intervals, although adaptive approaches can reduce this [10]. More broadly, measurement
approaches based on TTL expiry require the cooperation of network elements in returning
Internet Control Message Protocol (ICMP) messages. Finally, the success of active measurement
approaches to performance diagnosis may itself cause increased congestion if intensive probing
techniques are widely adopted.
In response to some of these concerns, a multicast-based approach to active measurement has
been proposed recently in [4, 5]. The idea behind the approach is that correlation in performance
seen on intersecting end-to-end paths can be used to draw inferences about the performance characteristics
of the common portion (the intersection) of the paths, without the cooperation of network
elements on the path. Multicast traffic is particularly well suited for this since a given packet only
occurs once on a given link in the (logical) multicast tree. Thus characteristics such as loss and
end-to-end delay of a given multicast packet as seen at different endpoints are highly correlated.
Another advantage of using multicast traffic is scalability. Suppose packets are exchanged on a
mesh of paths between a collection of N measurement hosts stationed in a network. If the packets
are unicast, then the load on the network may grow proportionally to N 2 in some parts of the
network, depending on the topology. For multicast traffic the load grows proportionally only to N .
Contribution The work of [4, 5] showed how multicast end-to-end measurements can be used to infer
per-link loss rates in a logical multicast tree. In this paper we extend this approach to infer
probability distribution of the per link variable delay. Thus we are not concerned with propagation
delay on a link, but rather the distribution of the additional variable delay that is attributable to
either queuing in buffers or other processing in the router. A key part of the method is an analysis
that relates the probabilities of certain events visible from end-to-end measurements (end-to-end
delays) to the events of interest in the interior of the network (per-link delays). Once this relation
is known, we can estimate the delay distribution on each link from the measured distributions of
end-to-end delays of multicast packets.
For a glimpse of how the relations between end-to-end delay and per link delays could be
found, consider a multicast tree spanning a source of multicast probes (identified as the root of the
tree) and a set of receivers (one at each leaf of the tree). We assume the packets are potentially
subject to queuing delay and even loss at each link. Focus on a particular node k in the interior of
the tree. If, for a given packet, the source-to-leaf delay does not exceed a given value on any leaf
descended from k, then clearly the delay from the root to the node k was less than that value. The
stated desired relation between the distributions of per-link and source-to-leaf delays is obtained
by a careful enumeration of the different ways in which end-to-end delay can be split between the
portion of the path above or below the node in question, together with the assumption that per-link
delays are independent between different links and packets. We shall comment later upon the
robustness of our method to violation of this independence assumption.
We model link delay by non-parametric discrete distributions. The choice of non parametric
distributions rather than a parameterized delay model is dictated by the lack of knowledge of
the distribution of link delays in networks. While there is significant prior work on the analysis
and characterization of end-to-end delay behavior (see [2, 24, 27]), to the best of our knowledge
there is no general model for per link delays. The use of a non-parametric model provides the
flexibility to capture broadly different delay distributions, albeit at the cost of increasing the number
of quantities to estimate (i.e. the weights in the discrete distribution). Indeed, we believe that our
inference technique can shed light on the behavior and dynamics of per link delays and so provide
useful results for the analysis and modeling; this we will consider in future work.
The discrete distribution can be a regarded as binned or discretized version of the (possibly
continuous) true delay distribution. Use of a discrete rather than a continuous distribution allows
us to perform the calculations for inference using only algebra. Formally, there is no difficulty in
formulating a continuous version of the inference algorithm. However, it proceeds via inversion
of Laplace transforms, a procedure that is in practice implemented numerically. In the discrete
approach we can explicitly trade-off the detail of the distribution with the cost of calculation; the
cost is inversely proportional to the bin widths of the discrete distribution.
The principal results of the analysis are as follows. Based on the independent delay model,
we derive an algorithm to estimate the per link discrete delay distributions and utilization from the
measured end-to-end delay distributions. We investigate the statistical properties of the estimator,
and show it to be strongly consistent, i.e., it converges to the true distribution as the number of
probes grows to infinity. We show that the estimator is asymptotically normal; this allows us to
compute the rate of convergence of the estimator to its true value, and to construct confidence
intervals for the estimated distribution for a given number of probes. This is important because the
presence of large scale routing fluctuation (e.g. as seen in the Internet; see [26]) sets a timescale
within which measurement must be completed, and hence the accuracy that can be obtained when
sending probes at a given rate.
We evaluated our approach through extensive simulation in two different settings. The first set
used a model simulation in which packet delays obey the independence assumption of the model.
We applied the inference algorithm to the end-to-end delays generated in the simulation and compared
the estimates with the (true) model delay distribution. We verified the convergence to the model distribution,
and also the rate of convergence, as the number of probes increased.
In the second set of experiments we conducted an ns simulation of packets on a multicast tree.
Packet delays and losses were entirely due to queueing and packet discard mechanisms, rather than
model driven. The bulk of the traffic in the simulations was background traffic due to TCP and
UDP traffic sources; we compared the actual and predicted delay distributions for the probe traffic.
Here we found rapid convergence, although with some persistent differences with respect to the
actual distributions.
These differences appear to be caused by violation of the model due to the presence of spatial
dependence (i.e., dependence between delays on different links). In our simulations we find
that when this type of dependence occurs, it is usually between the delays on child and parent
links. However, it can extend to entire paths. As far as we know there are no experimental results
concerning the magnitude of such dependence in real networks. In any case, by explicitly introducing
spatial correlations into the model simulations, we were able to show that small violations
of the independence assumption lead to only small inaccuracies of the estimated distribution. This
continuity property of the deformation in inference due to correlations is also to be expected on
theoretical grounds.
We also verified the presence of temporal dependence, i.e., dependence between the delays
between successive probes on the same link. This is to be expected from the phenomenology of
queueing: when a node is idle, many consecutive probes can experience constant delay; during
congestion, probes can experience the same delay if their interarrival time is smaller than the congestion
timescale. This poses no difficulty as all that is required for consistency of the estimator is
ergodicity of the delay process, a far weaker assumption than independence. However, dependence
can decrease the rate of convergence of the estimators. In our experiments, inferred values closely
tracked the actual ones despite the presence of temporal dependence.
Implementation Requirements Since the data for delay inference comprises one-way packet
delays, the primary requirement is the deployment of measurement hosts with synchronized clocks.
Global Positioning System (GPS) systems afford one way to achieve a synchronization to within
tenths of microseconds; it is currently used or planned in several of the measurement infrastructures
mentioned earlier. More widely deployed is the Network Time Protocol (NTP) [20]. However, this
provides accuracy only on the order of milliseconds at best, a resolution at least as coarse as the
queueing delays in practice. An alternative approach that could supplement delay measurement
from unsynchronized or coarsely synchronized clocks has been developed in [28, 30, 21]. These
authors propose algorithms to detect clock adjustments and rate mismatches and to calibrate the
delay measurements.
Another requirement is knowledge of the multicast topology. There is a multicast-based measurement
tool, mtrace [23], already in use in the Internet. mtrace reports the route from a
multicast source to a receiver, along with other information about that path such as per-hop loss
and rate. Presently it does not support delay measurements. A potential drawback for larger
topologies is that mtrace does not scale to large numbers of receivers as it needs to run once for
each receiver to cover the entire multicast tree. In addition, mtrace relies on multicast routers
responding to explicit measurement queries; a feature that can be administratively disabled. An
alternative approach that is closely related to the work on multicast-based loss inference [4, 5] is to
infer the logical multicast topology directly from measured probe statistics; see [31] and [7]. This
method does not require cooperation from the network.
Structure of the Paper. The remaining sections of the paper are organized as follows. In Section
2 we describe the delay model and in Section 3 we derive the delay estimator. In Section 4 we
describe the algorithm used to compute the estimator from data. In Section 5 we present the model
and network simulations used to evaluate our approach. Section 6 concludes the paper.
2 Model & Framework
2.1 Description of the Logical Multicast Tree
We identify the physical multicast tree as comprising actual network elements (the nodes) and the
communication links that join them. The logical multicast tree comprises the branch points of the
physical tree, and the logical links between them. The logical links comprise one or more physical
links. Thus each node in the logical tree, except for the leaf nodes and possibly the root, must
have 2 or more children. We can construct the logical tree from the physical tree by deleting all
links with one child (except for the root) and adjusting the links accordingly by directly joining its
parent and child.
Let T = (V, L) denote the logical multicast tree, consisting of the set of nodes V, including
the source and receivers, and the set of links L, which are ordered pairs (j, k) of nodes, indicating
a link from j to k. We will denote U = V \ {0}. The set of children of node j is denoted by d(j);
these are the nodes whose parent is j. Nodes are said to be siblings if they have the same
parent. For each node j, other than the root 0, there is a unique node f(j), the parent of j, such
that (f(j), j) ∈ L. Each link can therefore also be identified by its "child" endpoint. We shall
define f^n(k) recursively by f^n(k) = f(f^{n-1}(k)), with f^1 = f. We say that j is a descendant of k
if k = f^n(j) for some n > 0, and write the corresponding partial order in V as j ≺ k.
For each node j we define its level ℓ(j) to be the non-negative integer such that f^{ℓ(j)}(j) = 0.
The root 0 represents the source of the probes and the set of leaf nodes R ⊂ V (i.e., those with no
children) represents the receivers.
2.2 Modeling Delay and Loss of Probe Packets
Probe packets are sent down the tree from the root node 0. Each probe that arrives at node k results
in a copy being sent to every child of k. We associate with each node k a random variable D k taking
values in the extended positive real line R+ ∪ {∞}. By convention D_0 = 0; for k ≠ 0, D_k is the random
delay that would be encountered by a packet attempting to traverse the link (f(k), k) ∈ L. The
value D_k = ∞ indicates that the packet is lost on the link. We assume that the D_k are independent.
The delay experienced on the path from the root 0 to a node k is Y_k, the sum of the delays D_j
over the links j on the path from 0 to k. Note that Y_k = ∞ if D_j = ∞ for some link j on that path,
i.e. if the packet was lost on some link between node 0 and k.
Unless otherwise stated, we will discretize each link delay D_k to a set {0, q, 2q, ..., i_max q, ∞}.
Here q is the bin width, i_max + 1 is the number of finite bins, and the point ∞ is interpreted as "packet
lost" or "encountered delay greater than i_max q". The distribution of D_k is denoted by α_k, where
α_k(i) = P[D_k = iq] is the probability that D_k = iq, and α_k(∞) = P[D_k = ∞]. For each link, we denote by u_k the
link utilization; then, u_k = 1 − α_k(0), the probability that a packet experiences delay or is lost in
traversing link k.
For each k ∈ V, the cumulative delay Y_k takes values in {0, q, 2q, ..., ∞},
i.e., the set supports addition in the ranges of the constituent D_j. We set A_k(i) = P[Y_k = iq] and
A_k(∞) the probability that Y_k = ∞. Because of delay independence, for finite i,
A_k(i) = Σ_{j=0}^{i} α_k(j) A_{f(k)}(i − j); by convention A_0(0) = 1.
We consider only canonical delay trees. A delay tree consists of the pair (T, α); a
delay tree is said to be canonical if α_k(0) > 0, ∀k ∈ U, i.e., if there
is a non-zero probability that a probe experiences no delay in traversing each link.
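For concreteness, the discretized model can be carried in data structures along the following lines; the field names, fixed array bounds, and the example two-receiver tree are illustrative choices rather than anything prescribed by the model.

```c
/* Sketch of the discretized delay model: a logical tree whose links carry
 * a binned delay distribution alpha_k over {0, q, ..., imax*q, infinity}.
 * Names and bounds are illustrative. */
#include <stdio.h>

#define MAX_NODES    32
#define MAX_CHILDREN 8
#define IMAX         10          /* largest finite bin index */

struct node {
    int parent;                          /* f(k); -1 for the root 0        */
    int children[MAX_CHILDREN];
    int nchildren;
    double alpha[IMAX + 2];              /* alpha[i] = P[D_k = i*q];
                                            alpha[IMAX+1] = P[packet lost] */
};

struct delay_tree {
    double q;                            /* bin width                      */
    int nnodes;
    struct node v[MAX_NODES];            /* node 0 is the root/source      */
};

/* Link utilization u_k = 1 - alpha_k(0): probability of delay or loss.    */
double utilization(const struct delay_tree *t, int k)
{
    return 1.0 - t->v[k].alpha[0];
}

int main(void)
{
    struct delay_tree t = { .q = 0.001, .nnodes = 4 };   /* q = 1 ms       */
    /* two-receiver tree: 0 -> 1 -> {2, 3}                                 */
    t.v[0] = (struct node){ .parent = -1, .children = {1}, .nchildren = 1 };
    t.v[1] = (struct node){ .parent = 0,  .children = {2, 3}, .nchildren = 2 };
    t.v[2] = (struct node){ .parent = 1,  .nchildren = 0 };
    t.v[3] = (struct node){ .parent = 1,  .nchildren = 0 };
    /* a canonical link distribution: alpha_1(0) > 0                       */
    t.v[1].alpha[0] = 0.7; t.v[1].alpha[1] = 0.2; t.v[1].alpha[IMAX + 1] = 0.1;
    printf("u_1 = %.2f\n", utilization(&t, 1));
    return 0;
}
```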
3 Delay Distribution Estimator and its Properties
Consider an experiment in which n probes are sent from the source node down the multicast tree.
As a result of the experiment we collect the set of source-to-leaf delays (Y_{k,l})_{k∈R, l=1,...,n}. Our goal is
to infer the internal delay characteristics solely from the collected end-to-end measurements.
In this section we state the main analytic results on which inference is based. In Section 3.1
we establish the key property underpinning our delay distribution estimator, namely the one-to-
one correspondence between the link delay distributions and the probabilities of a well defined set
of observable events. Applying this correspondence to measured leaf delays allows us to obtain
an estimate of the link delay distribution. We show that the estimator is strongly consistent and
asymptotically normal. In Section 3.2 we present the proof of the main result which also provides
the construction of the algorithm to compute the estimator we present in Section 4. In Section 3.4
we analyze the rate of convergence of the estimator as the number of probes increases.
3.1 The Delay Distribution Estimator
Let T(k) = (V(k), L(k)) denote the subtree rooted at node k, and let R(k) = R ∩ V(k) denote the set
of receivers which descend from k. For i ∈ {0, ..., i_max}, let Ω_k(i) denote the event {min_{j∈R(k)} Y_j ≤ iq} that the
end-to-end delay is no greater than iq for at least one receiver in R(k). Let γ_k(i) =
P[Ω_k(i)] denote its probability. Finally, let Γ denote the mapping associating the link distributions
α = (α_k(i))_{k∈U, i∈{0,...,i_max,∞}} with the probabilities γ = (γ_k(i))_{k∈U, i∈{0,...,i_max}} of the events Ω_k(i). The
proof of the next result is given in the following section.
Theorem 1 Let A denote the set of link delay distributions α of canonical delay trees, and let
G = {Γ(α) : α ∈ A}. Γ is a bijection from A to G which is
continuously differentiable and has a continuously differentiable inverse.
Estimate γ by the empirical probabilities γ̂, where γ̂_k(i) is the fraction of the n probes for which
the event Ω_k(i) occurs; here 1_S denotes the indicator function of a set S, and the Ŷ_k are the
subsidiary quantities used to evaluate these events. Our estimate of α_k(i) is α̂_k(i) = Γ^{-1}(γ̂)_k(i).
We estimate link k utilization by û_k = 1 − α̂_k(0). Let A^{(1)} denote the open
interior of A. The following holds:
Theorem 2 When α ∈ A^{(1)}, α̂ converges almost surely to α as n → ∞, i.e., the
estimator is strongly consistent.
Proof: Since Γ^{-1} is continuous on Γ(A^{(1)}) and A^{(1)} is open in A, it follows that Γ(A^{(1)}) is an
open set in Γ(A). By the Strong Law of Large Numbers, since γ̂ is the mean of n independent
random variables, γ̂ converges to γ almost surely as n → ∞. Therefore, when α ∈ A^{(1)}, almost surely
there exists n_0 such that γ̂ ∈ Γ(A^{(1)}) for all n ≥ n_0. Then, the continuity of Γ^{-1} ensures that α̂ converges
almost surely to α as n → ∞.
3.2 Proof of Theorem 1
To prove the Theorem, we first express γ as a function of α and then show that the mapping from A
to G is injective.
3.2.1 Relating γ to α
Denote by β_k(i), k ∈ U, the subsidiary probabilities related to γ and A through the recursion (4).
Then, by observing that, for k ∉ R, Ω_k(i) = ∪_{j∈d(k)} Ω_j(i), we readily obtain (5).
The set of equations (5) completely identifies the mapping \Gamma from A to G. The mapping is clearly
continuously differentiable. Observe that the above expressions can be regarded as a generalization
of those derived for the loss estimator in [4] (by identifying the event no delay with the event no
loss).
3.2.2 Relating α to γ
It remains to show that the mapping from A to G is injective. To this end, below we derive an
algorithm for inverting (5). We postpone to Appendix A the proof that the inverse is unique and
continuously differentiable. For the sake of clarity we separate the algorithm into two parts: in the first
we derive the cumulative delay distributions A from γ; then, we deconvolve A to obtain α.
Computing A
Step 0: Solving (5) for i = 0 amounts to solving equation (6) for A_k(0).
This equation is formally identical to the one of the loss estimator [4]. From [4], we have that the
solution of (6) exists and is unique in (0, 1) under a condition which holds
for canonical delay trees. We then compute β_k(0).
Step i:
Given A_k(j) and β_k(j), k ∈ U, j = 0, ..., i − 1, in this step we compute A_k(i) and β_k(i), k ∈ U.
For k ∈ U \ R, in expression (5) we replace β_d(i) using (4) and obtain equation (8), in which the
unknown term is A_k(i). This is a polynomial in A_k(i) of degree #d(k). As shown in Appendix A,
we consider the second largest solution of (8).
For k ∈ R, we directly compute A_k(i) from (5) and the previously computed A_k(j), j < i. Then we
compute β_k(i) from (9).
Computing α
Once step i_max is completed, we compute α_k(i), k ∈ U, by deconvolving A_k with respect to A_{f(k)}:
α_k(0) = A_k(0) / A_{f(k)}(0) and, for i ≥ 1,
α_k(i) = ( A_k(i) − Σ_{j=0}^{i−1} α_k(j) A_{f(k)}(i − j) ) / A_{f(k)}(0).
3.3 Example: the Two-leaf Tree
In this section we illustrate the application of the results of Section 3.1 to the two-leaf tree of
Figure 1. We assume that on each link, a probe either suffers no delay, a unit amount of delay, or
is otherwise lost; for k ∈ {1, 2, 3}, therefore, delay takes values in {0, 1, ∞}.
For this example, equations (6) and (8) can be solved explicitly; combined with (9), they yield the estimates in closed form.
Figure 1: TWO-LEAF MULTICAST TREE.
Figure 2: FOUR-LEAF MULTICAST TREE.
3.4 Rates of Convergence of the Delay Distribution Estimator
3.4.1 Asymptotic Behavior of the Delay Distribution Estimator
In this section, we study the rate of convergence of the estimator. Theorem 2 states that α̂ converges
to α with probability 1 as n grows to infinity, but it provides no information on the rate of
convergence. Because of the mild conditions satisfied by Γ^{-1}, we can use the Central Limit Theorem
to establish the following asymptotic result.
Theorem 3 When α ∈ A^{(1)}, √n (α̂ − α) converges in distribution to a multivariate
normal random variable with mean vector 0 and covariance matrix σ² = D(γ) C(γ) D(γ)^T, where D(γ)
is the Jacobian of Γ^{-1} at γ, C(γ) is the covariance matrix of the per-probe indicator variables
underlying γ̂, and ^T denotes the transpose.
Proof: By the Central Limit Theorem, the random variables γ̂ are asymptotically
Gaussian as n → ∞, i.e., √n (γ̂ − γ) converges in distribution to a multivariate normal random variable
with mean 0 and covariance C(γ). Following the same lines as the proof of Theorem 2, since Γ^{-1} is
continuously differentiable on G, the Delta method (see Chapter 7 of [34]) yields that α̂
is also asymptotically Gaussian as n → ∞.
Theorem 3 allows us to compute confidence intervals of the estimates, and therefore their
accuracy and their convergence rate to the true values as n grows. This is relevant in assessing:
(i) the number of probes required to obtain a desired level of accuracy of the estimate; (ii) the
likely accuracy of the estimator from actual measurements by associating confidence intervals to
the estimates.
For large n, the estimator α̂_k(i) will lie in the interval
α_k(i) ± z_{δ/2} √( σ²_{(k,i),(k,i)} / n ),
where z_{δ/2} is the δ/2 quantile of the standard normal distribution and the interval estimate is a
100(1 − δ)% confidence interval.
To obtain the confidence interval for α̂ derived from measured data from n probes, we estimate
σ² by σ̂², obtained by evaluating the Jacobian of the inverse and the covariance at the estimated
values. We then use confidence intervals of the same form, with σ̂²_{(k,i),(k,i)} in place of σ²_{(k,i),(k,i)}.
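As a concrete illustration of the interval construction, the snippet below turns an estimate, its estimated asymptotic variance entry, and the probe count into a 95% confidence interval; the numerical values are invented for the example.

```c
/* Sketch: normal-approximation confidence interval for an inferred
 * alpha_k(i), given its estimated asymptotic variance (illustrative values). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    double alpha_hat = 0.82;      /* inferred alpha_k(i)                   */
    double sigma2    = 0.15;      /* estimated asymptotic variance entry   */
    double n         = 10000.0;   /* number of probes                      */
    double z         = 1.96;      /* z_{delta/2} for a 95% interval        */
    double half      = z * sqrt(sigma2 / n);

    printf("alpha_k(i) in [%.4f, %.4f] with 95%% confidence\n",
           alpha_hat - half, alpha_hat + half);
    return 0;
}
```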
3.4.2 Dependence of the Delay Distribution Estimator on Topology
The estimator variance determines the number of probes required to obtain a given level of accuracy.
Therefore, it is important to understand how the variance is affected by the underlying
Figure 3: ASYMPTOTIC ESTIMATOR VARIANCE AND TREE DEPTH. Binary tree with depth 2, 3 and 4. Minimum and maximum variance of the estimates α̂_k(0) (a) and α̂_k(1) (b) over all links, as a function of α.
parameters, namely the delay distributions and the multicast tree topology. The following Theo-
rem, the proof of which we postpone to Appendix C, characterizes the behavior of the variance for
small delays. Set ‖α‖ = max_{k∈U} (1 − α_k(0)).
Theorem 4 As ‖α‖ → 0, the estimator variance is, to first order, linear in α and independent of the topology.
Theorem 4 states that the estimator variance is, to first order, independent of the topology. To
explore higher order dependencies, we computed the asymptotic variance for a selection of trees
with different depths and branching ratios. We use the notation T(r_1, ..., r_m) to denote a tree in which,
apart from node 0 that has one descendant, nodes at level j have exactly r_j
children. For simplicity, we consider the case when link delay takes values in {0, 1}, i.e., we
consider no loss, and study the behavior as a function of α_k(1).
In Figure 3 we show the dependence on tree depth for binary trees of depth 2, 3 and 4. We plot
the maximum value of the variance over the links, max_k Var(α̂_k(0)) (a) and max_k Var(α̂_k(1)) (b).
In these examples, the variance increases with the tree depth. In Figure 4 we show the dependence
Figure 4: ASYMPTOTIC ESTIMATOR VARIANCE AND BRANCHING RATIO. Tree of depth 2 with 2, 4 or 6 receivers. Variance of α̂_k(0) (a) and α̂_k(1) (b) for link 1 (the common link) and link 2 (a generic receiver), as a function of α.
on branching ratio for a tree of level 2. We plot the estimator variance for both link 1 (the common
link) and link 2 (a generic receiver). In these examples, increasing the branching ratio decreases
the variances, especially those of the common link estimates, which increase less than linearly for
α up to 0.7 when the branching ratio is larger than 3. In all cases, the variance of α̂_k(1) is larger
than that of α̂_k(0).
In all cases, as predicted by Theorem 4, the estimator variance is asymptotically linear in α,
independently of the topology, as α → 0. As α increases, the behavior is affected by different
factors: increasing the branching ratio results in a reduction of the variance, while increasing the tree
depth results in a variance increase. The first can be explained in terms of the increased number of
measurements available for the estimation as the number of receivers sharing a given link increases;
the second appears to be the effect of cumulative errors that accrue as the number of links along a
path increases (α is computed iteratively on the tree). We also observe that the variance increases
with the delay lag; this appears to be caused by the iterative computation over the bins, which
progressively accumulates errors.
4 Computation of the Delay Distribution Estimator
In this section we describe an algorithm for computing the delay distribution estimate from measurements
based on the results presented in the previous section. We also discuss its suitability for
distributed implementation and how to adapt the computation to handle different bin sizes.
We assume the experimental data of source-to-leaf delays (Y_{k,m})_{k∈R, m=1,...,n} from n probes, as
collected at the leaf nodes k 2 R. Two steps must be initially performed to render the data into a
form suitable for the inference algorithms: (i) removal of fixed delays and (ii) choosing a bin size
q and computing the estimate bfl .
The first step is necessary since it is generally not possible to apportion the deterministic component
of the source-to-leaf delays between interior links. (To see this, it is sufficient to consider
the case of the two receiver tree; expressing the link fixed delays in terms of the source-to-leaf
fixed delays results in two equations in three unknowns). Thus we normalize each measurement
by subtracting the minimum delay seen at the leaf. Observe that interpreting the observed minimum
delay as the transmission delay assumes that at least one probe has experienced no queuing delay
along the path.
The second step is to choose the bin size q and discretize the delays measurements accordingly.
This introduces a quantization error which affects the accuracy of the estimates. As our results have
shown, the accuracy increases as q decreases (we have obtained accurate results over a significant
range of values of q up to the same order of magnitude as the links' average delay). The choice of
q represents a trade-off between accuracy and cost of the computation as a smaller bin size entails
a higher computational cost due to the higher dimensionality of the binned distributions.
These two steps are carried out as follows. From the measured data (Y_k)_{k∈R}, we recursively
construct the auxiliary vector process Ŷ_k = (Ŷ_{k,m})_{m∈{1,...,n}}, where Ŷ_{k,m} = min_{j∈d(k)} Ŷ_{j,m} for k ∉ R.
The binned estimates of γ̂ are then γ̂_k(i) = N_k(i)/n, with N_k(i) = #{m : ⌈Ŷ_{k,m}/q⌉ ≤ i}.
Here ⌈x⌉ denotes the smallest integer greater than x, and i_max represents the largest delay lag at
which the estimates are computed.
The estimate can be computed iteratively over the delay lag and recursively over the tree.
The pseudo code for carrying out the computation is found in Figure 5. The procedure find_y
calculates Ŷ_k and γ̂_k, with Ŷ_{k,l} initialized to Y_{k,l} − min_{m∈{1,...,n}} Y_{k,m} for k ∈ R, and to ∞
(a value larger than any observed delay suffices) otherwise.
procedure main: call find_y(0), then infer_delay(0, i) for each i = 0, ..., i_max.
procedure find_y(k): recursively compute Ŷ_k and γ̂_k from the children of k.
procedure infer_delay(k, i): compute Â_k[i] and β̂_k[i] using solvefor1/solvefor2 and equations (6) and (8), then recurse on the children of k; the α̂_k are obtained from the Â by deconvolution.
Figure 5: PSEUDOCODE FOR INFERENCE OF DELAY DISTRIBUTION.
The procedure infer_delay calculates Â_k[i] and β̂_k[i]
for a fixed i recursively on the tree, with all entries initialized to 0, except for
Â_0[0], which is set to 1. The output of the algorithm is the set of estimates α̂_k(i), k ∈ U.
Within the code, an empty product (which occurs when the first argument of infer is a leaf)
is assumed to be zero. The routines solvefor1 and solvefor2 return the value of the first
symbolic argument that solves the equation in the second argument. solvefor1 returns a solution
in (0; 1); from Lemma 1 in [4] this is known to be unique. solvefor2 returns the unique
solution if the second argument is linear in b
A k (i) ( this happen only if k is a leaf-node), otherwise
it returns the second largest solution.
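As an illustration of the data-preparation steps (i) and (ii) above, the following C sketch shows how the leaf measurements can be normalized and binned before the recursive inference is run. It is not the authors' code; the function and constant names (normalize_and_bin, INFINITE_DELAY) and the convention that negative input values mark lost probes are our own assumptions, and the full solver of Figure 5 is not reproduced here.

#include <math.h>

#define INFINITE_DELAY (-1)   /* marker for probes lost before the leaf */

/* Normalize the n source-to-leaf delays of one receiver by subtracting the
   minimum observed delay (step (i)), then quantize them into bins of width q
   (step (ii)). bins[m] receives the bin index of probe m. */
void normalize_and_bin(const double *y, int n, double q, int *bins)
{
    double min_delay = -1.0;
    int m;

    for (m = 0; m < n; m++)                 /* find the minimum finite delay */
        if (y[m] >= 0.0 && (min_delay < 0.0 || y[m] < min_delay))
            min_delay = y[m];

    for (m = 0; m < n; m++) {
        if (y[m] < 0.0) {                   /* lost probe: delay undefined   */
            bins[m] = INFINITE_DELAY;
        } else {
            double norm = y[m] - min_delay; /* remove the fixed delay        */
            bins[m] = (int)ceil(norm / q);  /* bin index of the probe        */
        }
    }
}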
4.1 Distributed Implementation
As with the loss estimator [4], the algorithm is recursive on trees. In particular, observe that the
computation of \hat{γ}_k and \hat{A}_k only requires knowledge of quantities that are computed
recursively on the tree, starting from the receivers. Therefore it is possible to distribute the
computation among the nodes of the tree (or representative nodes of subtrees), with each node k
being responsible for the aggregation of the measurements of its child nodes through (14) and for
the computation of \hat{A}_k.
4.2 Adopting Different Bin Sizes
Following the results of the previous section, we presented the algorithm using a fixed value of q
for all links. This can be quite restrictive in a heterogeneous environment, where links may differ
significantly in terms of speed and buffer sizes; a single value of q could be at the same time too
coarse grained for describing the delay of a high bandwidth link but too fine-grained to efficiently
capture the essential characteristics of the delay experienced along a low bandwidth link.
A simple way to overcome this limitation is to run the algorithm for different values of q, each
best suited for the behavior of a different group of links, and retain each time only the solutions for
those links. A drawback of this approach is that each distribution is computed for all the different
bin sizes. The distributed nature of the algorithm suggests we can do better; indeed, since the A_k
can be computed independently from one another, it is possible to compute each link
delay distribution only for the bin size best suited to its delay characteristics. More precisely, let
q_k denote the bin size adopted for link k. In order to compute \hat{α}_k with bin size q_k we need to
compute both \hat{A}_k and \hat{A}_{f(k)} with bin size q_k. Thus, the overall computation requires calculating
each cumulative distribution \hat{A}_k only for the bin sizes adopted for the links terminating at node k
and at its child nodes, rather than for the bin sizes adopted for all links.
In an implementation, we envision that a fixed value of q for all links is used first. This can be
chosen based on the spread of the measurements and on the tree topology or past delay history. Then, with
a better idea of each link's delay spread, it would be possible to refine the value of the bin size on a
link-by-link basis.
5 Experimental Evaluation
We evaluated our delay estimator through extensive simulation. Our first set of experiments focuses
on the statistical properties of the estimator. We perform model simulation, where delay and loss
are determined by random processes that follow the model on which we based our analysis. In
our second set of experiments we investigate the behavior of the estimators in a more realistic
setting where the model assumption of independence may be violated. To this end, we perform
TCP/UDP simulation, using the ns simulator. Here delay and loss are determined by queuing
delay and queue overflows at network nodes as multicast probes compete with traffic generated by
TCP/UDP traffic sources.
5.1 Comparing Inferred vs. Sample Distributions
Before examining the results of our experiments, we describe our approach to assessing the accuracy
of the inferred distributions. Given an experiment in which n probes are sent from the source
to the receivers, for each k ∈ V the inferred distribution \hat{α}_k is computed from the end-to-end measurements
using the algorithm described in Section 4. Its accuracy must be measured against the
actual data, represented by a finite sequence of delays {D_{k,m}}, m = 1, ..., n, experienced by
the probes in traversing (reaching) that link. For simplicity of notation we assume, hereafter, that
each set of data has already been normalized by subtracting the minimum delay from the sequence.
We compare summary statistics of link delay, namely the mean and the variance. A finer evaluation
of the accuracy lies in a direct comparison of the inferred and sample distributions. To this
end, we also compute the largest absolute deviation between the inferred and sample c.d.f.s. This
measure is used in statistics for the Kolmogorov-Smirnov test for goodness of fit of a theoretical
with a sample distribution. A small value for this measure indicates that the theoretical distribution
provides a good fit to the sample distribution; a large value leads to the rejection of the hypoth-
esis. We cannot directly apply the test as we deal with an inferred rather than a sample c.d.f.;
however, we will use the largest absolute deviation as a global measure of accuracy of the inferred
distributions.
We compute the sample distributions \tilde{α} and \tilde{A} using the same bin size q as the estimator. More
precisely, we compute (\tilde{α}_k), k ∈ V, and (\tilde{A}_k), k ∈ V, by binning the actual link and cumulative delays.
(Observe that in computing (\tilde{α}_k), k ∈ V, the sum is carried out only over the set of probes for which the delay
along link k is defined, either finite or infinite.)
The largest absolute deviation between the inferred and sample c.d.f.s is, then,
Δ_k = max_i | \sum_{j≤i} \hat{α}_k(j) − \sum_{j≤i} \tilde{α}_k(j) |.
In other words, Δ_k is the smallest nonnegative number such that the sample c.d.f. lies between
the inferred c.d.f. shifted down by Δ_k and shifted up by Δ_k. The same result holds for the tail
probabilities.
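For concreteness, the following small C routine shows one way to evaluate this accuracy measure from two binned probability mass functions. It is an illustrative sketch, not code from the paper; the function name largest_abs_deviation is ours.

#include <math.h>

/* Largest absolute vertical deviation between the c.d.f.s of two binned
   distributions given as probability mass functions over bins 0..nbins-1. */
double largest_abs_deviation(const double *alpha_hat, const double *alpha_tilde,
                             int nbins)
{
    double cdf_hat = 0.0, cdf_tilde = 0.0, delta = 0.0;
    int i;

    for (i = 0; i < nbins; i++) {
        cdf_hat   += alpha_hat[i];           /* running sum = inferred c.d.f. */
        cdf_tilde += alpha_tilde[i];         /* running sum = sample c.d.f.   */
        if (fabs(cdf_hat - cdf_tilde) > delta)
            delta = fabs(cdf_hat - cdf_tilde);
    }
    return delta;
}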
Figure 6: (a): Simulation topology. (b): Convergence of \hat{α}_k(1) to α_k(1) (α_k(1) = 0.2, links 1-3), plotted against the number of probes.
Figure 7: AGREEMENT BETWEEN SIMULATED AND THEORETICAL CONFIDENCE INTERVALS.
(a): Results from 100 model simulations. (b): Prediction from (10). The graphs show the two-sided
confidence intervals at 2 standard deviations for links 1 and 2; α_k(1) = 0.2 for all links.
5.2 Model Simulation
We first consider the two-leaf topology of Figure 6(a), with source 0 and receivers 2 and 3. Link
delays are independent, taking values in {0, 1, ∞}; if a probe is not lost it experiences either
no delay or unit delay. In Figure 6(b) we plot the estimate \hat{α}_k(1) versus the model values for a
run comprising 10000 probes. The estimate converges to within 2% of the model value within
4000 probes. In Figure 7 we compare the empirical and theoretical 95% confidence intervals.
The theoretical intervals are computed from (10). The empirical intervals are computed over 100
independent simulations. The agreement between simulation and theory is close: the two sets of
curves are almost indistinguishable.
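As a sketch of how such a model simulation can be generated, the code below draws independent per-link delays on the two-leaf tree of Figure 6(a) and produces the corresponding source-to-leaf delays. It reflects only the delay model described above ({0, 1, ∞}-valued links); the names (simulate_probe, P_LOSS, P_UNIT) and the particular loss probability are our own illustrative choices.

#include <stdlib.h>

#define LOST   (-1)       /* encodes an infinite (lost) delay                    */
#define P_LOSS 0.02       /* illustrative per-link loss probability              */
#define P_UNIT 0.20       /* illustrative probability of unit delay, alpha_k(1)  */

/* Draw one link delay from the model: infinity with probability P_LOSS,
   1 with probability P_UNIT, 0 otherwise. */
static int link_delay(void)
{
    double u = (double)rand() / RAND_MAX;
    if (u < P_LOSS) return LOST;
    if (u < P_LOSS + P_UNIT) return 1;
    return 0;
}

/* Generate the source-to-leaf delays of one probe for receivers 2 and 3.
   A probe lost on any traversed link yields a lost end-to-end measurement. */
void simulate_probe(int *y2, int *y3)
{
    int d1 = link_delay(), d2 = link_delay(), d3 = link_delay();
    *y2 = (d1 == LOST || d2 == LOST) ? LOST : d1 + d2;
    *y3 = (d1 == LOST || d3 == LOST) ? LOST : d1 + d3;
}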
Next we consider the topology of Figure 8. Delays are independently distributed according
to a truncated geometric distribution (values in ms). This topology
is also used in subsequent TCP/UDP simulations, and the link average delays and loss probabilities
are chosen to match the values obtained from these. The average delays range between 1 and 2 ms
for the slower edge links and between 0.2 and 0.5 ms for the faster interior links; the link losses
range from 1% to 11%. In Figure 9 we plot the estimated average link delay and standard deviation
with the empirical 95% confidence interval computed over 100 simulations. The results are very
accurate even for several hundred probes: the theoretical average delay always lies within the
confidence interval and the standard deviation does so for 1500 or more probes.
To compare the inferred and sample distributions, we computed the largest absolute deviation
between the inferred and sample c.d.f.s. The results are summarized in Figure 10, where we plot
the minimum, median and maximum largest absolute deviation in 100 simulations computed
over all links, as a function of n (a) and link by link (b). The accuracy increases with
the number of probes as 1/√n, with a spread of two orders of magnitude between the minimum
and maximum. For more than 3000 probes, the average largest deviation over all links is less
than 1%. The accuracy varies from link to link: at one extreme we have link 4 with
0.18% ≤ Δ_4 ≤ 0.8%, and at the other extreme link 6, over the 100 simulations. We observe that the inferred distributions are less accurate
as we go down the tree. This is in agreement with the results of Section 3.4 and is explained in
terms of the larger variances of the inferred probabilities at downstream nodes with respect to upstream nodes.
5.3 TCP/UDP Simulations
We used the topology shown in Figure 8. To capture the heterogeneity between the edges and the core
of a WAN, interior links have higher capacity (5 Mb/sec) and propagation delay (50 ms) than the edge links (1 Mb/sec and 10 ms).
Figure
8: Simulation Topology: links are of two types: edge links of 1 Mb/s capacity and 10 ms
latency, and interior links of 5 Mb/s capacity and 50 ms latency.
Figure
9: MODEL SIMULATION: TOPOLOGY OF FIGURE 8. ESTIMATED VERSUS THEORETICAL
DELAY AVERAGE (a) AND STANDARD DEVIATION (b), WITH 95% CONFIDENCE INTERVALS COMPUTED
OVER 100 MODEL SIMULATIONS (links 1, 6, 8 and 11).
Figure
10: MODEL SIMULATION: TOPOLOGY OF FIGURE 8. ACCURACY OF THE ESTIMATED
DISTRIBUTION. LARGEST VERTICAL ABSOLUTE DEVIATION BETWEEN ESTIMATED AND
SAMPLE C.D.F. Minimum, median and maximum largest absolute deviation in 100 simulations
computed over all links, as a function of n (a) and link by link (b).
Each link is modeled as a FIFO queue with a 4-packet capacity.
The source sends probes as a 20 Kbit/s stream comprising 40-byte UDP packets generated according to
a Poisson process with a mean interarrival time of 16 ms; this represents 2% of the smallest link
capacity. Observe that even for this simple topology with 8 end-points, a mesh of unicast measurements
with the same traffic characteristics would require an aggregate bandwidth of 160Kbit/s at
the root. The background traffic comprises a mix of infinite data source TCP connections
and exponential on-off sources using UDP. Averaged over the different simulations, the link loss
ranges between 1% and 11% and link utilization ranges between 20% and 60%.
For a single experiment, Figure 11 compares the estimated versus the sample average delay
for representative selected links. The analysis has been carried out using q = 1 ms (a) and
q = 0.1 ms (b). In this example, we obtain practically the same accuracy despite a tenfold difference
in resolution. (Observe that 1 ms is of the same order of magnitude as the average delays.)
The inferred averages rapidly converge to the sample averages even though we have persistent
systematic errors in the inferred values due to consistent spatial correlation. We shall comment
upon this later.
In order to show how the inferred values not only quickly converge, but also exhibit good dynamics
tracking, in Figure 12 we plot the inferred versus the sample average delay for 3 links (1,
3 and 10) computed over a moving window of two different sizes with jumps of half its width. To
allow greater dynamics, here we arranged background sources with random start and stop times.
Under both window sizes (approximately 300 and 1200 probes are used, respectively), the estimates of the average delays of links 1 and 10 show good agreement and a quick response to delay variability, revealing a good convergence rate of the estimator.
Figure
11: TCP/UDP SIMULATIONS. Estimated versus sample average delay for links 1, 6, 9 and 11, for bin sizes q = 1 ms and q = 0.1 ms. The graphs show
how the inferred values closely track the sample average delays.
Figure
12: DYNAMIC ACCURACY OF INFERENCE. Sample and inferred average delay on links 1, 3 and 10
of the multicast tree in Figure 8. (a): 5 second window. (b): 20 second window.
Background traffic has random start and stop times.
Figure
13: ACCURACY OF INFERENCE: AVERAGE DELAY. Left: q = 1 ms. Right: q = 0.1 ms.
The graphs show the normalized Root Mean Square error between the estimated and sample average
delays over 100 simulations.
For link 3, which has a smaller average delay, the behavior is rather poor, especially for the 5 second window size.
For a selection of links, in Figure 13 we plot the Root Mean Square (RMS) normalized error
between the estimated and sample average delays calculated over 100 simulations using q = 1 ms
and q = 0.1 ms. The two plots demonstrate that the error drops significantly up to 2000 probes
after which it becomes almost constant. In this example, increasing the resolution by a factor of
ten improves, although not significantly, the overall accuracy of the estimates especially for those
links that enjoy smaller delays. After 10000 probes the relative error ranges from 1% to 23%. The
higher values occur when link average delays are small due to the fact that for these links the same
absolute error results in a more pronounced relative error.
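As a point of reference, the error measure used here can be computed as in the short routine below. This is our own illustrative sketch (the name rms_normalized_error is not from the paper), assuming the error is normalized by the sample average delay.

#include <math.h>

/* RMS error between estimated and sample average delays over nsim runs,
   normalized by the sample average delay. est[s] and smp[s] hold the
   estimated and sample averages of one link in simulation s. */
double rms_normalized_error(const double *est, const double *smp, int nsim)
{
    double sum = 0.0;
    int s;

    for (s = 0; s < nsim; s++) {
        double rel = (est[s] - smp[s]) / smp[s];  /* relative error of run s */
        sum += rel * rel;
    }
    return sqrt(sum / nsim);
}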
The persistence of the systematic errors we observe in Figure 13 is due to the presence of spatial
correlation. In our simulations, a multicast probe is more likely to experience similar levels of congestion
on consecutive links or on sibling links than is dictated by the independence assumption.
We also verified the presence of temporal correlation among successive probes on the same link,
caused by consecutive probes experiencing the same congestion level at a node.
To assess the extent to which our real traffic simulations violate the model assumptions, we
computed the delay correlation between link pairs and among packets on the same link. The analysis
revealed the presence of significant spatial correlations, up to 0.3-0.4 between consecutive
links. The smallest values are observed for link 5, which always exhibits a correlation with its
parent node that lies below 0.1. From Figure 13 we verify that, not surprisingly, node 5 enjoys
the smallest relative error. We believe that these high correlations are a result of the small scale
of the simulated network. We have observed smaller correlations in larger simulations, as would be
expected in real networks because of the wider traffic and link diversity.
The autocorrelation function rapidly decreases and can be considered negligible for lags larger
than approximately 2 seconds. The presence of short-term correlation does not alter the key
property of convergence of the estimator, as it suffices that the underlying processes be stationary
and ergodic (this happens, for example, when recurrence conditions are satisfied). The price of
correlation, however, is that the convergence rate is slower than when delays are independent.
Now we turn our attention to the inferred distributions. For an experiment of 300 seconds
during which approximately 18000 probes were generated, we plot in Figure 14 the complementary c.d.f. conditioned
on the delay being finite. In Figure 15 we also plot the complementary c.d.f.
of the node cumulative delay (we show only the internal links, as at the leaf nodes \hat{A}_k is determined
directly by the measurements). The bin size is q = 1 ms.
From these two sets of plots, it is striking to note the difference between the accuracy of the
estimated cumulative delay distributions \hat{A}_k and that of the estimated link delay distributions \hat{α}_k: while the
former are all very close to the actual distributions, the latter are inaccurate in many cases.
This is explained by observing that in the presence of significant correlations, the convolution among
A_k, α_k, and A_{f(k)} used in the model does not capture well the relationship between the actual
distributions. We verified this by convolving α_k and A_{f(k)} and comparing the result with A_k; as
expected, in the presence of strong local correlation, the results exhibit significant differences that
account for the discrepancies of the inferred distributions. Nevertheless, the results should be affected
in a continuous way, with small violations leading to small inaccuracies. Indeed, we have good
agreement for the inferred distributions of links 4, 5, 10 and 12, which are the nodes with the smallest
spatial correlations. Unfortunately it is not easy to determine whether the correlations are strong,
and therefore to assess the expected accuracy of the estimates, even though pathological shapes of the
inferred distributions could provide evidence of strong local correlations (see footnote 1). A solution could be the
extension of the model to explicitly account for the presence of spatial correlation in the analysis.
This will be the focus of future research.
The accuracy of the inferred cumulative delay distributions, on the other hand, derives from
the fact that even in the presence of significant local correlations, equation (8), which assumes independence, is still accurate.
1 To this end, we observed that under strong spatial correlation, inaccuracies of the estimator \hat{α} are often associated
with the existence of portions of significantly increasing behavior in the complementary c.d.f., which reveal the presence of
negative inferred probabilities with possibly non-negligible absolute values.
Figure
14: Sample vs. Estimated Delay c.d.f. for selected links. Each panel plots the complement of the cumulative distribution function of the estimated and sample link delay (nodes 1, 2, 4, 7, 10 and 12).
Figure
15: Sample vs. Estimated node k cumulative delay c.d.f. Each panel plots the complement of the cumulative distribution function of the estimated and sample cumulative delay (nodes 1, 2, 3, 6 and 7).
Figure
16: TCP/UDP SIMULATION: TOPOLOGY OF FIGURE 8. ACCURACY OF THE ESTIMATED
DISTRIBUTION. LARGEST VERTICAL ABSOLUTE DEVIATION BETWEEN ESTIMATED
AND THEORETICAL C.D.F. Minimum, median and maximum largest absolute deviation in 100
simulations computed over all links, as a function of n (a) and link by link (b).
This can be explained by observing that (8) is equivalent to (4), which
consists of a convolution between A_{f(k)} and β_k; we expect the correlation between the delay accrued
by a probe in reaching node f(k) and the minimum delay accrued from node f(k) to reach
any receiver to be rather small, especially as the tree size grows, as these delays span the entire multicast
tree.
Finally, in Figure 16 we plot the minimum, median and maximum largest deviation between
the inferred and theoretical c.d.f.s over 100 simulations, computed over all links as a function of n (left)
and link by link (right). Due to spatial correlation, the largest deviations level
off after the first 2000 probes, with the median stabilizing at 5%. The accuracy again exhibits a
negative trend as we descend the tree.
6 Conclusions and Future Work
In this paper, we introduced the use of end-to-end multicast measurements to infer network internal
delay in a logical multicast tree. Under the assumption of delay independence, we derived an
algorithm to estimate the per link discrete delay distributions and utilization from the measured
end-to-end delay distributions. We investigated the statistical properties of the estimator, and showed
it to be strongly consistent and asymptotically normal.
We evaluated our estimator through simulation. Within model simulations we verified the accuracy
and the convergence of the inferred values to the actual values, as predicted by our analysis. In real
traffic simulations, we found rapid convergence, although with some persistent differences from the actual
distributions because of spatial correlation.
We are extending our delay distribution analysis in several directions. First we plan to do more
extensive simulations, exploring larger topologies, different node behavior, background traffic and
probe characteristics. Moreover, we are exploring how representative the probe delay is of the delay
suffered by other applications and protocols, for example TCP.
Second, we are analyzing the effect of spatial correlation among delays and we are planning
to extend the model by directly taking into account the presence of correlation. Moreover, we
are studying the effect of the choice of the bin size on the accuracy of the results. To deal with
continuously distributed delay, we have derived a continuous version of the inference algorithm, which we are
currently investigating.
Finally, we believe that our inference technique can shed light on the behavior and dynamics
of per-link delay and so allow us to develop accurate link delay models. This will also be the object of
future work.
We feel that multicast-based delay inference is an effective approach to performing delay measurements.
The techniques developed are based on rigorous statistical analysis and, as our results
show, yield representative delay estimates for all traffic that receives the same per-node treatment
as the multicast probes. The approach does not depend on cooperation from network elements and,
because of the bandwidth efficiency of multicast traffic, is well suited to cope with the growing size of
today's networks.
--R
"The Laplace Transform"
"Characterizing End-to-End Packet Delay and Loss in the Internet."
"The case for FEC-based error control for packet audio in the Internet"
"Multicast-Based Inference of Network Internal Loss Characteristics"
"Multicast-Based Inference of Network Internal Loss Characteristics: Accuracy of Packet Estimation"
"Inferring Link-Level Performance from End- to-End Measurements"
"Loss-Based Inference of Multicast Network Topology"
"Measurements Considerations for Assessing Unidirectional Latencies"
"Measuring Bottleneck Link Speed in Packet-Switched Networks,"
"Multicast Inference of Packet Delay Variance at Interior Networks Links"
"Probabilistic Inference Methods for Multicast Network Topology"
Felix: Independent Monitoring for Network Survivability.
"Random Early Detection Gateways for Congestion Avoidance,"
IPMA: Internet Performance Measurement and Analysis.
IP Performance Metrics Working Group.
"Creating a Scalable Architecture for Internet Mea- surement,"
"Diagnosing Internet Congestion with a Transport Layer Performance Tool,"
"Network Time Protocol (Version 3): Specification, Implementation and Analysis"
"Estimation and Removal of Clock Skew from Network Delay Measurements"
"Correlation of Packet Delay and Loss in the Internet"
"On the Dynamics and Significance of Low Frequency Components of Internet Load"
"End-to-End Routing Behavior in the Internet,"
"End-to-End Internet Packet Dynamics,"
"Measurements and Analysis of End-to-End Internet Dynamics,"
"Automated Packet Trace Analysis of TCP Implementations,"
"On calibrating measurements of Packet Transit Times"
"Inference of Multicast Routing Tree Topologies and Bottleneck Bandwidths using End-to-end Measurements"
"Experimental assessment of end-to-end behavior on Internet"
"Study of network dynamics"
"Theory of Statistics"
--TR
Measuring bottleneck link speed in packet-switched networks
End-to-end routing behavior in the Internet
End-to-end Internet packet dynamics
Automated packet trace analysis of TCP implementations
On calibrating measurements of packet transit times
Using pathchar to estimate Internet link characteristics
Network Delay Tomography from End-to-End Unicast Measurements
Multicast-Based Inference of Network-Internal Delay Distributions TITLE2:
--CTR
Fabio Ricciato , Francesco Vacirca , Martin Karner, Bottleneck detection in UMTS via TCP passive monitoring: a real case, Proceedings of the 2005 ACM conference on Emerging network experiment and technology, October 24-27, 2005, Toulouse, France
Earl Lawrence , George Michailidis , Vijay N. Nair, Local area network analysis using end-to-end delay tomography, ACM SIGMETRICS Performance Evaluation Review, v.33 n.3, December 2005
N. G. Duffield , V. Arya , R. Bellino , T. Friedman , J. Horowitz , D. Towsley , T. Turletti, Network tomography from aggregate loss reports, Performance Evaluation, v.62 n.1-4, p.147-163, October 2005
Zhen Liu , Laura Wynter , Cathy H. Xia , Fan Zhang, Parameter inference of queueing models for IT systems using end-to-end measurements, Performance Evaluation, v.63 n.1, p.36-60, January 2006
Omer Gurewitz , Israel Cidon , Moshe Sidi, One-way delay estimation using network-wide measurements, IEEE/ACM Transactions on Networking (TON), v.14 n.SI, p.2710-2724, June 2006
N. G. Duffield , Francesco Lo Presti, Network tomography from measured end-to-end delay covariance, IEEE/ACM Transactions on Networking (TON), v.12 n.6, p.978-992, December 2004
Nick Duffield , Francesco Lo Presti , Vern Paxson , Don Towsley, Network loss tomography using striped unicast probes, IEEE/ACM Transactions on Networking (TON), v.14 n.4, p.697-710, August 2006
Azer Bestavros , John W. Byers , Khaled A. Harfoush, Inference and Labeling of Metric-Induced Network Topologies, IEEE Transactions on Parallel and Distributed Systems, v.16 n.11, p.1053-1065, November 2005 | network tomography;queueing delay;estimation theory;multicast tree;end-to-end measurements |
611430 | A data and task parallel image processing environment. | The paper presents a data and task parallel low-level image processing environment for distributed memory systems. Image processing operators are parallelized by data decomposition using algorithmic skeletons. Image processing applications are parallelized by task decomposition, based on the image application task graph. In this way, an image processing application can be parallelized both by data and task decomposition, and thus better speed-ups can be obtained. We validate our method on the multi-baseline stereo vision application. | Introduction
Image processing is widely used in many application areas including the film industry, medical
imaging, industrial manufacturing, weather forecasting, etc. In some of these areas the
size of the images is very large yet the processing time has to be very small, and sometimes
real-time processing is required. Therefore, during the last decade there has been an increasing
interest in the development and use of parallel algorithms in image processing. Many
algorithms have been developed for parallelizing different image operators on different parallel
architectures. Most of these parallel image processing algorithms are either architecture
dependent, or specifically developed for different applications and hard to implement for a
typical image processing user without enough knowledge of parallel computing.
In this paper we present an approach of adding data and task parallelism to an image
processing library using algorithmic skeletons [3, 4, 5] and the Image Application Task Graph
(IATG). Skeletons are algorithmic abstractions common to a series of applications, which
can be implemented in parallel. Skeletons are embedded in a sequential host language, thus
being the only source of parallelism in a program. Using skeletons we create a data parallel
image processing framework which is very easy to use for a typical image processing user.
It is already known that exploiting both task and data parallelism in a program to
solve very large computational problems yields better speedups compared to either pure
task parallelism or pure data parallelism [7, 8]. The main reason is that both data and task
parallelism are relatively limited, and therefore using only one of them limits the achievable
performance. Thus, exploiting mixed task and data parallelism has emerged as a natural
solution. For many applications from the field of signal and image processing, data set
sizes are limited by physical constraints and cannot be easily increased. In such cases the
amount of available data parallelism is limited. For example, in the multi-baseline stereo
application described in Section 5, the size of an image is determined by the circuitry of the
video cameras and the throughput of the camera interface. Increasing the image size means
buying new cameras and building a faster interface, which may not be feasible. Since the
data parallelism is limited, additional parallelism may come from tasking. By coding the
image processing application using skeletons and having the IATG, we obtain an environment
that is both data and task parallel.
The paper is organized as follows. Section 2 briefly presents a description of algorithmic
skeletons and a survey of related work. Section 3 presents a classification of low-level image
operators and skeletons for parallel low-level image processing on a distributed memory
system. Section 4 presents some related work and describes the Image Application Task
Graph used in the task parallel framework. The multi-baseline stereo vision application,
together with its data parallel code using skeletons versus sequential code, and the speedup
results for the data parallel approach versus the data and task parallel approach, are presented
in Section 5. Finally, concluding remarks are made in Section 6.
Skeletons and related work
Skeletons are algorithmic abstractions which encapsulate different forms of parallelism, common
to a series of applications. The aim is to obtain environments or languages that allow
easy parallel programming, in which the user does not have to deal with problems such as
communication, synchronization, deadlocks or non-deterministic program runs. Usually, they are
embedded in a sequential host language and they are used to code and hide the parallelism
from the application user.
The concept of algorithmic skeletons is not new and a lot of research has been done to demonstrate
their usefulness in parallel programming. Most skeletons are polymorphic higher-order
functions, and can be defined in functional languages in a straightforward way. This is the
reason why most skeletons are built on top of a functional language [3, 4]. Work has also been
done on using skeletons in image processing. In [5] Serot et al. present a parallel image
processing environment using skeletons on top of the CAML functional language.
In this paper we develop algorithmic skeletons to create a parallel image processing
environment that is ready to use for easy implementation/development of parallel image processing
applications. The difference from the previous approach [5] is that we allow the application
to be implemented in a C programming environment and that we allow the possibility to
use/implement different scheduling algorithms for obtaining the minimum execution time.
3 Skeletons for low-level image processing
3.1 A classification of low-level image operators
Low-level image processing operators use the values of the image pixels to modify the image
in some way. They can be divided into point operators, neighborhood operators and global
operators [1, 2]. Below, we discuss all three types of operators in detail.
1. Point operators
Image point operators are the most powerful functions in image processing. A large
group of operators falls into this category. Their main characteristic is that a pixel from
the output image depends only on the corresponding pixel from the input image. Point
operators are used to copy an image from one memory location to another, in arithmetic
and logical operations, table lookup, and image compositing. We will discuss arithmetic
and logic operators in detail, classifying them from the point of view of the number of
images involved, this being an important issue in developing skeletons for them.
Arithmetic and logic operations
Image ALU operations are fundamental operations needed in almost any imaging product
for a variety of purposes. We refer to operations between an image and a constant as
monadic operations, operations between two images as dyadic operations and operations
involving three images as triadic operations.
{ Monadic image operations
Monadic image operators are ALU operators between an image and a constant. These
operations are shown in Table 1 - s(x; y) and d(x; y) are the source and destination
pixel values at location (x; y), and K is the constant.
Table 1 Monadic image operations
Function Operation
Add constant d(x, y) = s(x, y) + K
Subtract constant d(x, y) = s(x, y) - K
Multiply constant d(x, y) = s(x, y) * K
Divide by constant d(x, y) = s(x, y) / K
Or constant d(x, y) = s(x, y) OR K
And constant d(x, y) = s(x, y) AND K
Xor constant d(x, y) = s(x, y) XOR K
Absolute value d(x, y) = |s(x, y)|
Monadic operations are useful in many situations. For instance, they can be used to
add or subtract a bias value to make a picture brighter or darker.
{ Dyadic image operators
Dyadic image operators are arithmetic and logical functions between the pixels of
two source images producing a destination image. These functions are shown below
in Table 2 - s1(x, y) and s2(x, y) are the two source images that are used to create
the destination image d(x, y).
Table 2 Dyadic image operations
Function Operation
Add d(x, y) = s1(x, y) + s2(x, y)
Subtract d(x, y) = s1(x, y) - s2(x, y)
Multiply d(x, y) = s1(x, y) * s2(x, y)
Divide d(x, y) = s1(x, y) / s2(x, y)
Min d(x, y) = min(s1(x, y), s2(x, y))
Or d(x, y) = s1(x, y) OR s2(x, y)
And d(x, y) = s1(x, y) AND s2(x, y)
Dyadic operators have many uses in image processing. For example, the subtraction
of one image from another is useful for studying the flow of blood in digital
subtraction angiography or for motion compensation in video coding. Addition of images
is a useful step in many complex imaging algorithms, like the development of image
restoration algorithms for modeling additive noise, and special effects, such as image
morphing, in motion pictures.
{ Triadic image operators
Triadic operators use three input images for the computation of an output image.
An example of such an operation is alpha blending. Image compositing is a useful
function for both graphics and computer imaging. In graphics, compositing is used
to combine several images into one. Typically, these images are rendered separately,
possibly using different rendering algorithms. For example, the images may be rendered
separately, possibly using different types of rendering hardware for different
algorithms. In image processing, compositing is needed for any product that needs
to merge multiple pictures into one final image. All image editing programs, as well
as programs that combine synthetically generated images with scanned images, need
this function.
In computer imaging, the term alpha blend can be defined using two source images
S1 and S2, an alpha image α and a destination image D, see formula (1):
D(x, y) = α(x, y) S1(x, y) + (1 - α(x, y)) S2(x, y).    (1)
Another example of a triadic operator is the squared difference between a reference
image and two shifted images, an operator used in the multi-baseline stereo vision
application, described in Section 5.
Table 3 Triadic image operations
Function Operation
Alpha blend d(x, y) = α(x, y) s1(x, y) + (1 - α(x, y)) s2(x, y)
Squared diff d(x, y) = (s1(x, y) - s2(x, y))^2 + (s1(x, y) - s3(x, y))^2
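To make the classification concrete, the following short C function sketches a triadic point operator of the squared-difference type described above. It is an illustrative sketch only; the name squared_diff and the flat float-array image representation are our assumptions, not the library's actual data structures.

/* Triadic point operator: for every pixel, the output is the sum of the
   squared differences between a reference image and two match images.
   Images are stored as flat arrays of width*height float pixels. */
void squared_diff(const float *ref, const float *m1, const float *m2,
                  float *dst, int width, int height)
{
    int i, npix = width * height;
    for (i = 0; i < npix; i++) {
        float d1 = ref[i] - m1[i];
        float d2 = ref[i] - m2[i];
        dst[i] = d1 * d1 + d2 * d2;   /* depends only on corresponding pixels */
    }
}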
2. Local neighborhood operators
Neighborhood operators (filters) create a destination pixel based on criteria that
depend on the source pixel and the values of the pixels in the "neighborhood" surrounding it.
Neighborhood filters are largely used in computer imaging. They are used for enhancing
and changing the appearance of images by sharpening, blurring, crispening the edges, and
noise removal. They are also useful in image processing applications such as object recognition,
image restoration, and image data compression. We define a filter as an operation that
changes pixels of the source image based on their values and those of their surrounding
pixels. We may have linear and nonlinear filters.
Linear filtering versus nonlinear filtering
Generally speaking, a filter in imaging refers to any process that produces a destination
image from a source image. A linear filter has the property that a weighted sum of the
source images produces a similarly weighted sum of the destination images.
In contrast to linear filters, nonlinear filters are somewhat more difficult to characterize.
This is because the output of the filter for a given input cannot be predicted from the
impulse response. Nonlinear filters behave differently for different inputs.
Linear filtering using two-dimensional discrete convolution
In imaging, two-dimensional convolution is the most common way to implement a linear
filter. The operation is performed between a source image and a two-dimensional
convolution kernel to produce a destination image. The convolution kernel is typically
much smaller than the source image. Starting at the top of the image (the top left corner,
which is also the origin of the image), the kernel is moved horizontally over the image,
one pixel at a time. Then it is moved down one row and moved horizontally again. This
process is continued until the kernel has traversed the entire image. For the destination
pixel at row m and column n, the kernel is centered at the same location in the source
image.
Mathematically, two-dimensional discrete convolution is defined as a double summation.
Given an M × N image f(m, n) and a K × L convolution kernel h(k, l), we define the origin
of each to be at the top left corner. We assume that f(m, n) is much larger than h(k, l).
Then, the result of convolving f(m, n) by h(k, l) is the image g(m, n) given by formula (2):
g(m, n) = \sum_{k=0}^{K-1} \sum_{l=0}^{L-1} h(k, l) f(m - k + (K-1)/2, n - l + (L-1)/2).    (2)
In the above formula we assume that K, L are odd numbers and we extend the image
by (K-1)/2 lines in each vertical direction and by (L-1)/2 columns in each horizontal
direction. The sequential time complexity of this operation is O(MNKL). As can
be observed, this is a time consuming operation, very well fitted to the data parallel
approach.
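As an illustration of the operation just described, here is a straightforward (unparallelized) C sketch of the 2D convolution. The function name convolve2d and the border handling by clamping are our own choices, given only as an example.

/* Sequential 2D convolution of an M x N image with a K x L kernel (K, L odd).
   Border pixels are handled by clamping coordinates to the image, which
   mimics extending the image by (K-1)/2 rows and (L-1)/2 columns. */
void convolve2d(const float *f, float *g, int M, int N,
                const float *h, int K, int L)
{
    int m, n, k, l;
    for (m = 0; m < M; m++) {
        for (n = 0; n < N; n++) {
            float acc = 0.0f;
            for (k = 0; k < K; k++) {
                for (l = 0; l < L; l++) {
                    int r = m - k + (K - 1) / 2;              /* source row    */
                    int c = n - l + (L - 1) / 2;              /* source column */
                    if (r < 0) r = 0; if (r >= M) r = M - 1;  /* clamp borders */
                    if (c < 0) c = 0; if (c >= N) c = N - 1;
                    acc += h[k * L + l] * f[r * N + c];
                }
            }
            g[m * N + n] = acc;     /* O(MNKL) work in total */
        }
    }
}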
3. Global operators
Global operators create a destination pixel based on the entire image information. A representative
example of an operator within this class is the Discrete Fourier Transform
(DFT). The Discrete Fourier Transform converts an input data set from the tempo-
ral/spatial domain to the frequency domain, and vice versa. It has a lot of applications
in image processing, being used for image enhancement, restoration, and compression.
In image processing the input is a set of pixels forming a two-dimensional function that
is already discrete. The formula for the output pixel X_{lm} is the following:
X_{lm} = \sum_{j=0}^{N-1} \sum_{k=0}^{M-1} x_{jk} e^{-2\pi i (jl/N + km/M)},    (3)
where j and k are the pixel coordinates, 0 ≤ j ≤ N-1 and 0 ≤ k ≤ M-1.
We also include in the class of global operators operators like the histogram transform,
which do not have an image as output, but another data structure.
3.2 Data parallelism of low-level image operators
From the operator description given in the previous section we conclude that point, neighborhood
and global image processing operators can be parallelized using the data parallel
paradigm with a host/node approach. A host processor is selected for splitting and distributing
the data to the other nodes. The host also processes a part of the image. Each
node processes its received part of the image and then the host gathers and assembles the
image back together. In Figures 1, 2 and 3 we present the data parallel paradigm with the
host/node approach for point, neighborhood and global operators. For global operators we
send the entire image to the corresponding nodes but each node will process only a certain
part of the image. In order to avoid extra inter-processor communication due to the border
information exchange for neighborhood operators, we extend and partition the image as
shown in Figure 2. In this way, each node processor receives all the data needed for applying
the neighborhood operator.
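The host/node pattern described above corresponds to a scatter-compute-gather structure. The MPI sketch below illustrates this pattern for a point operator on a row-striped image; it is our own simplified illustration (the name dcg_point and the assumption that the image height divides evenly among the processes are ours), not the actual skeleton implementation.

#include <mpi.h>
#include <stdlib.h>

/* Distribute-Compute-Gather for a point operator, host/node style.
   The host (rank 0) scatters equal row stripes of the image, every
   process (including the host) applies im_op to its stripe, and the
   host gathers the processed stripes back. Assumes height % nprocs == 0. */
void dcg_point(float *image, int width, int height,
               void (*im_op)(float *stripe, int npixels), MPI_Comm comm)
{
    int rank, nprocs;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &nprocs);

    int chunk = (height / nprocs) * width;        /* pixels per stripe */
    float *stripe = malloc(chunk * sizeof(float));

    MPI_Scatter(image, chunk, MPI_FLOAT, stripe, chunk, MPI_FLOAT, 0, comm);
    im_op(stripe, chunk);                         /* local computation */
    MPI_Gather(stripe, chunk, MPI_FLOAT, image, chunk, MPI_FLOAT, 0, comm);

    free(stripe);
}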
Fig. 1. DCG skeleton for point operators (the host, Master 0, splits the original image among the nodes, each node processes its part, and the host gathers the processed image).
Fig. 2. DCG skeleton for neighborhood operators (the original image is extended before partitioning so that each node receives the border rows it needs).
Fig. 3. DCG skeleton for global operators (the entire image is sent to every node, and each node processes only its assigned area).
Based on the above observations we identify a number of skeletons for parallel processing
of low-level image processing operators. They are named according to the type of the low-level
operator and the number of images involved in the operation. Headers of some skeletons
are shown below. All of them are based on a "Distribute Compute and Gather" (DCG) main
skeleton, previously known as the map skeleton [4], suitable for regular applications such as the
low-level operators from image processing. The implementation of all the skeletons is based
on the idea described in the above paragraph, see Figures 1, 2 and 3. Each skeleton can
run on a set of processors. From this set of processors a host processor is selected to split
and distribute the image(s) to the other nodes; each other node from the set receives a part
of the image(s) and the image operator which should be applied to it, then the computation
takes place and the result is sent back to the host processor. The skeletons are implemented
in C using the MPI-Panda library [19, 20]. The implementation is transparent to the user.
void ImagePointDist_1IO(unsigned int n,char *name,void(*im_op)());
// DCG skeleton for monadic point operators - one Input/Output
void ImagePointDist_1IO_C(unsigned int n,char *name, void(*im_op)(),float ct);
// DCG skeleton for monadic point operators which need a constant value as pararameter
// one Input/Output
void ImagePointDist_1I_1O(unsigned int n,char *name1,char *name2,void(*im_op)());
// DCG skeleton for monadic/dyadic point operators - one Input and one Output
void ImagePointDist_1IO_1I(unsigned int n,char *name1,char *name2,void(*im_op)());
// DCG skeleton for monadic/dyadic point operators - one Input/Output and one Input.
void ImagePointDist_2I_1O(unsigned int n,char *name1,char *name2,char *name3,void(*im_op)());
// DCG skeleton for dyadic/triadic point operators - 2 Inputs and one Output
void ImagePointDist_2I_2O(unsigned int n,char *name1,char *name2,char *name3,char *name4,void(*im_op)());
// DCG skeleton for dyadic point operators - 2 Inputs and 2 Outputs
void ImagePointDist_3I_1O(unsigned int n,char *name1,char *name2,char *name3,char *name4,void(*im_op)());
// DCG skeleton for triadic point operators - 3 Inputs and one Output
void ImageWindowDist_1IO(unsigned int n,char *name,Window *win,void(*im_op)());
// DCG skeleton for neighborhood operators - one Input/Output
void ImageWindowDist_1I_1O(unsigned int n,char *name1,char *name2,Window *win,void(*im_op)());
// DCG skeleton for neighborhood operators - one Input and one Output
void ImageGlobalDist_1IO(unsigned int n,char *name,void(*im_op)());
// DCG skeleton for global operators - one Input/Output
We develop several types of skeletons, which depend on the type of the low-level operator
(point, neighborhood, global) and the number of input/output images. With each skeleton
we associate a parameter which represents the task number corresponding to that skeleton.
This is used by the task parallel framework. Depending on the skeleton type, one or more
identifiers of the images are given as parameters. The last argument is the point operator for
processing the image(s). So, each skeleton is used for a number of low-level image processing
operators which perform in a similar way (for instance all dyadic point operators take two
input images, combine and process them depending on the operator type and then produce
an output image). Depending on the operator type and the skeleton type, there might exist
additional parameters necessary for the image operator. For point operators we assigned the
ImagePointDist skeletons, for neighborhood operators we assigned the ImageWindowDist
skeletons, and for global operators we assigned the ImageGlobalDist skeletons. Some of the
skeletons modify the input image (ImagePointDist_1IO, ImageWindowDist_1IO, ImageGlobalDist_1IO,
where 1IO stands for 1 Input/Output image); other skeletons take a number of input
images and create a new output image, for example the ImagePointDist_2I_1O skeleton for
point operators takes 2 input images and creates a new output image. This skeleton is necessary
for dyadic point operators (like addition, subtraction, etc., see Table 2) which create
a new image by processing two input images. Similarly, the skeleton ImagePointDist_3I_1O
for point operators takes 3 input images and creates a new output image. An example of a
low-level image processing operator suitable for this type of skeleton is the squared difference
between one reference image and two disparity images, an operator used in the multi-baseline
stereo vision application, see Table 3 and Section 5. Similar skeletons also exist for local
neighborhood and global operators. ImagePointDist_1IO_C is a skeleton for monadic point
operators which need a constant value as a parameter for processing the input image, see
Table 1.
Below we present an example of using the skeletons to code a very simple image processing
application in a data-parallel way. It is an image processing application of edge detection
using the Laplace and Sobel operators. First we read the input image and we create the two
output images and a 3 × 3 window, and then we apply the Laplace and Sobel operators
on num_nodes processors. num_nodes is the number of nodes on which
the application is run and is detected on the first line of the partial code shown below.
image_in is the name of the input image given as input parameter to both skeletons, and
image_l, image_s are the output parameters (images) for each skeleton. We have used an
ImageWindowDist_1I_1O skeleton to perform both operators. The last two parameters are
the window used (which contains information about the size and the data of the window)
and the image operator that is applied via the skeleton.
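The code listing referred to above is not reproduced in this version of the text; the following hedged sketch reconstructs what such a fragment could look like from the description, using the skeleton header given earlier. The helper declarations (Window layout, read_image, create_image, laplace_op, sobel_op) and the use of task numbers 1 and 2 are our assumptions, not the library's actual API.

#include <mpi.h>
#include <stdio.h>

/* Assumed declarations -- the actual library API may differ. */
typedef struct { int width, height; float data[9]; } Window;
extern void read_image(const char *name);
extern void create_image(const char *name);
extern void laplace_op(void);
extern void sobel_op(void);
extern void ImageWindowDist_1I_1O(unsigned int n, char *name1, char *name2,
                                  Window *win, void (*im_op)());

int main(int argc, char *argv[])
{
    int num_nodes;
    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &num_nodes);    /* number of nodes in use */
    printf("running on %d nodes\n", num_nodes);

    read_image("image_in");        /* the input image                        */
    create_image("image_l");       /* output image for the Laplace operator  */
    create_image("image_s");       /* output image for the Sobel operator    */

    Window win = { 3, 3, { 0 } };  /* a 3 x 3 window                         */

    /* Both operators are applied through the same neighborhood skeleton. */
    ImageWindowDist_1I_1O(1, "image_in", "image_l", &win, laplace_op);
    ImageWindowDist_1I_1O(2, "image_in", "image_s", &win, sobel_op);

    MPI_Finalize();
    return 0;
}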
4 The task parallel framework
Recently, it has been shown that exploiting both task and data parallelism in a program
to solve very large computational problems yields better speedups compared to either pure
data parallelism or pure task parallelism [7, 8]. The main reason is that both task
and data parallelism are relatively limited, and therefore using only one of them bounds the
achievable performance. Thus, exploiting mixed task and data parallelism has emerged as
a natural solution. We show that applying both data and task parallelism can improve the
speedup at the application level.
There has been considerable effort in adding task-parallel support to data-parallel languages,
as in Fx [10], Fortran M [11] or Paradigm HPF [7], or in adding data-parallel support
to task-parallel languages such as Orca [12]. In order to fully exploit the potential advantage
of mixed task and data parallelism, efficient support for task and data parallelism
is a critical issue. This can be done not only at the compiler level, but also at the application
level, and applications from the image processing field are very suitable for this technique.
Mixed task and data parallel techniques use a directed acyclic graph, in the literature
also called a Macro Dataflow Graph (MDG) [7], in which data parallel tasks (in our case the
image processing operators) are the nodes and the precedence relationships are the edges.
For the purpose of our work we change the name of this graph to the Image Application
Task Graph (IATG).
4.1 The Image Application Task Graph model
A task parallel program can be modeled by a Macro Dataflow communication Graph [7],
which is a directed acyclic graph G = (V, E, w, c), where:
{ V is the finite set of nodes which represent tasks (image processing operators);
{ E is the set of directed edges which represent precedence constraints between tasks;
{ w is the weight function which gives the weight (processing time) of each
node (task). Task weights are positive integers;
{ c is the communication function which gives the weight (communication
time) of each edge. Communication weights are positive integers.
An Image processing Application Task Graph (IATG) is, in fact, an MDG in which
each node stands for an image processing operator and each edge stands for a precedence
constraint between two adjacent operators. In this case, a node represents a larger entity
than in the MDG, where a node can be any simple instruction from the program.
Some important properties of the IATG are:
{ It is a weighted directed acyclic graph.
{ Nodes represent image processing operators and edges represent precedence constraints
between them.
{ There are two distinguished nodes: START precedes all other nodes and STOP succeeds
all other nodes.
We define a well balanced IATG as an application task graph which has the same type
of tasks (image operators) on each level. An example is the IATG of the multi-baseline
stereo vision application, described in Section 5, Figure 7, which on the first level has the
squared difference operator applied to 3 images for each task, and on the second level the
error operator executed by all the tasks. Moreover, the graph edges form a regular pattern.
The weights of nodes and edges in the IATG are based on the concepts of processing
and communication costs. Processing costs account for the computation and communication
costs of data parallel tasks - image processing operators corresponding to nodes, and depend
on the number of processors allocated to the node. Communication costs account for the
costs of data communication between nodes.
4.2 Processing cost model
A node in the IATG represents a processing task (an image processing operator applied via
a DCG skeleton, as described in Section 3.2) that runs non-preemptively on any number of
processors. Each task i is assumed to have a computation cost, denoted T_exec(i, p_i), which
is a function of the number of processors. The computation cost function of the task can be
obtained either by estimation or by profiling.
For cost estimation we use Amdahl's law. According to it, the execution time of the task
is:
T_exec(i, p_i) = \tau_i (f_i + (1 - f_i) / p_i),    (4)
where i is the task number, p_i is the number of processors on which task i is executed,
\tau_i is the task's execution time on a single processor and f_i is the fraction of the task that
executes serially.
If we use profiling, the task's execution costs are either fitted to a function similar to
the one described above (in the case that data is not available for all processors), or the
profiled values can be used directly through a table. The values are simple to determine:
we measure the execution times of the basic image processing operators implemented in the
image processing library and we tabulate their values.
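A direct transcription of this cost model into code is straightforward; the helper below is only an illustration of formula (4), with parameter names of our own choosing.

/* Estimated execution time of a task on p processors (Amdahl's law):
   tau is the single-processor time, serial_frac the serial fraction. */
double task_exec_time(double tau, double serial_frac, int p)
{
    return tau * (serial_frac + (1.0 - serial_frac) / p);
}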
4.3 Communication cost model
Data communication (redistribution) is essential for implementing an execution scheme
which uses both data and task parallelism. Individual tasks are executed in a data parallel
fashion on subsets of processors and the data dependences between tasks may necessitate
not only changing the set of processors but also the distribution scheme. Figure 4 illustrates
a classical approach of redistribution between a pair of tasks. Task TaskA is executed using
seven processors and reads from data D. Task TaskB is executed using four processors and
reads from the same data D. This necessitates the redistribution of the data D from the
seven processors executing task TaskA to the four processors executing task TaskB. In
addition to changing the set of processors we could also change the distribution scheme of
the data D. For instance, if D is a two dimensional data then TaskA might use a block
distribution for D, whereas TaskB might use a row-stripe distribution.
Fig. 4. Data redistribution between two tasks (data D moves from the seven processors executing TaskA to the four processors executing TaskB).
Fig. 5. Image communication between two host processors (only master A and master B exchange the image).
We reduce the complexity of the problem first by allowing only one type of distribution
scheme (row-stripe) and second by sending images only between two processors (the selected
host processors from the two sets of processors), as shown in Figure 5.
An edge in the IATG corresponds to a precedence relationship and has associated a communication
cost, denoted T_comm(i, j), which depends on the network characteristics
(latency, bandwidth) and the amount of data to be transferred. It should be emphasized
that there are two types of communication times. First, we have internal communication
time which represents the time for internal transfer of data between the processors allocated
to a task. This quantity is part of the term of the execution time associated to a node of
the graph. Secondly, we have external communication time which is the time of transferring
data, i.e. images, between two processors. These two processors represent the host processors
for the two associated image processing tasks (corresponding to the two adjacent graph
nodes). This quantity is actually the communication cost of an edge of the graph.
In this case we can also use either cost estimation or profiling to determine the communication
time. In state-of-the-art distributed memory systems the time to send a message
containing L units of data from one processor to another can be modeled as:
T_comm = t_s + t_b L,    (5)
where t_s and t_b are the startup and per-byte costs for point-to-point communication and L is
the length of the message, in bytes.
We run our experiments on a distributed memory system which consists of a cluster of
Pentium Pro/200 MHz PCs with 64 MB RAM running Linux, connected through Myrinet
in a 3D-mesh topology with dimension order routing [16]. Figure 6 shows the performance
of point-to-point communication operations and the predicted communication time. The
reported time is the minimum time obtained over 20 executions of the same code. It is
reasonable to select the minimum value because of the possible interference caused by other
users' traffic in the network. From these measurements we perform a linear fitting and we
extract the communication parameters t_s and t_b. In Figure 6 we see that the predicted
communication time, based on the above formula, approximates very well the measured
communication time.
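The linear fitting step can be done with an ordinary least-squares fit of the measured (L, T) pairs; the routine below is a small self-contained sketch of that step (the name fit_comm_params is ours).

/* Least-squares fit of the model T = t_s + t_b * L to nmeas measured
   (message length, time) pairs; returns the estimated parameters. */
void fit_comm_params(const double *len, const double *time, int nmeas,
                     double *t_s, double *t_b)
{
    double sl = 0.0, st = 0.0, sll = 0.0, slt = 0.0;
    int i;

    for (i = 0; i < nmeas; i++) {
        sl  += len[i];
        st  += time[i];
        sll += len[i] * len[i];
        slt += len[i] * time[i];
    }
    *t_b = (nmeas * slt - sl * st) / (nmeas * sll - sl * sl);  /* slope     */
    *t_s = (st - *t_b * sl) / nmeas;                           /* intercept */
}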
Fig. 6. Performance of point-to-point communication on DAS: measured and predicted time (microseconds) versus message size.
4.4 IATG cost properties
A task with no input edges is called an entry task and a task with no output edges is
called an exit task. The length of a path from the graph is the sum of the computation and
communication costs of all nodes and edges belonging to the path. We define the Critical
Path [7] (CP) as the longest path in the graph. If we have a graph with n nodes, where n
is the last node of the graph, t_i represents the finish time of node i, and T_exec(i, p_i) is the
execution time of task i on a set of p_i nodes, then the critical path is given by formulas (6)
and (7), where PRED_i is the set of immediate predecessor nodes of node i:
t_i = T_exec(i, p_i) + max_{j ∈ PRED_i} (t_j + T_comm(j, i)),    (6)
CP = t_n.    (7)
We define the Average Area [7] (A) of an IATG with n nodes (tasks) for a P processor
system as in formula (8), where p_i is the number of processors allocated to task T_i:
A = (1/P) \sum_{i=1}^{n} p_i T_exec(i, p_i).    (8)
The critical path represents the longest path in the IATG and the average area provides
a measure of the processor-time area required by the IATG. Based on these two formulas,
processors are allocated to tasks according to the results obtained by solving the following
minimization problem:
minimize ω = max(CP, A)    (9)
subject to 1 ≤ p_i ≤ P, for i = 1, ..., n.
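To illustrate how these two quantities can be evaluated for a given allocation, the following sketch computes t_i, CP and A over an IATG stored in topological order; the data layout (arrays of predecessor lists) and the function name iatg_bounds are our own assumptions.

/* Critical path and average area of an IATG whose n tasks are numbered in
   topological order (predecessors of i have indices < i).
   exec[i]   : T_exec(i, p[i]) for the chosen allocation p[i]
   pred[i]   : array of npred[i] predecessor indices of task i
   comm[i][k]: T_comm(pred[i][k], i)
   P         : total number of processors */
void iatg_bounds(int n, const double *exec, const int *p,
                 int **pred, const int *npred, double **comm,
                 int P, double *cp, double *area)
{
    double t[n];                 /* finish times, C99 variable-length array */
    double a = 0.0;
    int i, k;

    for (i = 0; i < n; i++) {
        double ready = 0.0;      /* latest predecessor finish + communication */
        for (k = 0; k < npred[i]; k++) {
            double v = t[pred[i][k]] + comm[i][k];
            if (v > ready) ready = v;
        }
        t[i] = ready + exec[i];                 /* formula (6) */
        a += p[i] * exec[i];                    /* accumulate processor-time */
    }
    *cp = t[n - 1];                             /* formula (7) */
    *area = a / P;                              /* formula (8) */
}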
After solving the allocation problem, a scheduler is needed to schedule the tasks to
obtain a minimum execution time. The classical approach is the well-known list scheduling
paradigm [13] introduced by Graham, which schedules one-processor tasks (tasks running
only on one processor). Scheduling is known to be NP-complete even for one-processor tasks. Since
then, several other list scheduling algorithms have been proposed, and the scheduling problem has
also been extended to multiple-processor tasks (tasks that run non-preemptively on any number
of processors) [7]. Multiple-processor task scheduling is also NP-complete, so
heuristics are used.
The intuition behind minimizing max(CP, A) in equation (9) is that it represents a theoretical lower
bound on the time required to execute the image processing application corresponding to
the IATG. The execution time of the application can neither be smaller than the critical
path of the graph nor be less than the average area of the graph.
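To make the two quantities concrete, below is a small C sketch that evaluates the finish times, the critical path and the average area of an IATG whose tasks are stored in topological order; the Task structure and its field names are hypothetical, and the T_exec/T_comm values are assumed to come from the cost estimation described above.

typedef struct {
    int     npred;   /* number of immediate predecessors              */
    int    *pred;    /* indices of the predecessor tasks              */
    double  texec;   /* T_exec(i, p_i) for the chosen allocation      */
    double *tcomm;   /* T_comm(pred[k], i) for each predecessor       */
    int     p;       /* number of processors allocated to the task    */
} Task;

/* Tasks are assumed to be numbered in topological order, task n-1 last. */
double critical_path(const Task *t, int n, double *finish) {
    int i, k;
    for (i = 0; i < n; i++) {
        double start = 0.0;
        for (k = 0; k < t[i].npred; k++) {
            double arrival = finish[t[i].pred[k]] + t[i].tcomm[k];
            if (arrival > start) start = arrival;
        }
        finish[i] = start + t[i].texec;   /* formula (6)               */
    }
    return finish[n - 1];                 /* formula (7): CP = t_n     */
}

double average_area(const Task *t, int n, int P) {
    double a = 0.0;
    int i;
    for (i = 0; i < n; i++) a += t[i].p * t[i].texec;
    return a / P;                         /* formula (8)               */
}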
As TSAS's convex programming algorithm [7] for determining the number of processors
for each task was not available, in the experimental part of Section 5 we have used the
nonlinear solver based on SNOPT [17], available on the internet [18], for solving the above
problem. For solving the scheduling problem, the scheduling algorithm
proposed in [7] is used. Another possibility is to use scheduling algorithms developed for
data and task parallel graphs [8, 9].
5 Experiments
To evaluate the benefits of the proposed data parallel framework based on skeletons, and also
of the task parallel framework based on the IATG, we first compare the code of the multi-baseline
stereo vision algorithm with and without skeletons (with and without data
parallelism). Then we compare the speed-ups obtained by applying only data parallelism to
the application with the speed-ups obtained with both data and task parallelism.
The multi-baseline stereo vision application uses an algorithm developed by Okutomi
and Kanade [6] and described by Webb et al. [14, 15], which gives greater accuracy in depth
through the use of more than two cameras. Input consists of three n x n images acquired
from three horizontally aligned, equally spaced cameras. One image is the reference image,
the other two are named match images. For each of 16 disparities d = 0, ..., 15, the first
match image is shifted by d pixels and the second match image is shifted by 2d pixels. A difference
image is formed by computing the sum of squared differences between the corresponding
pixels of the reference image and the shifted match images. Next, an error image is formed
by replacing each pixel in the difference image with the sum of the pixels in a surrounding
13 x 13 window. A disparity image is then formed by finding, for each pixel, the disparity
that minimizes error. Finally, the depth of each pixel is displayed as a simple function of its
disparity. Figure 7 presents the IATG of this application.
It can be observed that the computation of the difference images requires point operators,
while the computation of the error images requires neighborhood operators. The
computation of the disparity image also requires a point operator.
Input: ref, m1, m2 (the reference and the two match images)
for d=0,15
  Task T1,d: m1 shifted by d pixels
  Task T2,d: m2 shifted by 2*d pixels
  Task T3,d: diff_d = sum of squared differences between ref and the shifted match images
  Task T4,d: err_d = sum of diff_d over a surrounding 13x13 window
Task T5: Disparity image = for each pixel, the d which minimizes the err_d image
Pseudocode of the multi-baseline stereo vision application
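For illustration, the following is a compact C sketch of the per-disparity computation described above (sum of squared differences followed by a 13 x 13 window sum); the image size N and the helper names are placeholders, and border handling is simplified by clamping indices.

#define N   256   /* image size, placeholder       */
#define WIN 13    /* error window size             */

static float at(const float img[N][N], int r, int c) {
    if (r < 0) r = 0; if (r >= N) r = N - 1;
    if (c < 0) c = 0; if (c >= N) c = N - 1;
    return img[r][c];
}

/* diff_d(r,c) = (ref - m1 shifted by d)^2 + (ref - m2 shifted by 2d)^2 */
void difference_image(const float ref[N][N], const float m1[N][N],
                      const float m2[N][N], int d, float diff[N][N]) {
    int r, c;
    for (r = 0; r < N; r++)
        for (c = 0; c < N; c++) {
            float e1 = ref[r][c] - at(m1, r, c + d);
            float e2 = ref[r][c] - at(m2, r, c + 2 * d);
            diff[r][c] = e1 * e1 + e2 * e2;     /* point operator        */
        }
}

/* err_d(r,c) = sum of diff_d over a surrounding WIN x WIN window */
void error_image(const float diff[N][N], float err[N][N]) {
    int h = WIN / 2, r, c, dr, dc;
    for (r = 0; r < N; r++)
        for (c = 0; c < N; c++) {
            float s = 0.0f;
            for (dr = -h; dr <= h; dr++)
                for (dc = -h; dc <= h; dc++)
                    s += at(diff, r + dr, c + dc); /* neighborhood operator */
            err[r][c] = s;
        }
}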
Fig. 7. Multi-baseline stereo vision IATG: the reference image (ref) and the match images are broadcast, the 16 difference tasks (diff0 ... diff15) feed the corresponding error tasks (err0 ... err15), and a reduce step produces the disparity image.
Below we present the sequential code of the application versus its data parallel code.
Coding the application by just combining a number of skeletons does not
require much effort from the image processing user, yet it parallelizes the application. The
data and task parallel code is slightly more difficult and we do not present it here.
[Listing: sequential code of the application versus the DT-PIPE code based on skeletons; in the skeleton version the operator calls are replaced by skeleton calls such as ImagePointDist_3I_1O(d,"im","ref","m1",...).]
Besides creating the images on the host processor, the code is nearly the same; only the
function headers differ. The skeletons have as parameters the names of the images, the window
and the image operator, while in the sequential version the operator headers have as parameters
the images and the window. The skeletons are implemented in C using MPI [19].
The results of the data parallel approach are compared with the results obtained using
data and task parallelism on a distributed memory system which consists of a cluster of
Pentium Pro/200 MHz PCs with 64 MB RAM running Linux [16], connected through
Myrinet in a 3D-mesh topology with dimension order routing. In the task parallel framework
we use a special mechanism to register the images on the processors where they are first
created. Moreover, each skeleton has associated with it the number of the task to which it corresponds.
We use 1, 2, 4, 8, 16, 32 and 64 processing nodes in the pool. Three artificial reference
images of sizes 256 x 256, 512 x 512 and 1024 x 1024 are used. The code is written using C
and the MPI message passing library. The multi-baseline stereo vision algorithm is an example
of a regular, well balanced application in which task parallelism can be applied without the
need of an allocator or scheduler. Just for comparison, we have used the algorithm
described in [7] and obtained identical results (we divide the number of nodes by the
number of tasks and obtain the number of nodes on which each task should run). In
Figure 8 we show the speed-ups obtained for the data parallel approach for different image sizes.
Figure 9 shows the speed-up of the same application using the data and task parallel
approach, also for different image sizes. We can observe that the speed-ups quickly become
saturated for the data-parallel approach, while the speed-ups for the data and task parallel
approach scale very well. In fact, we have pure task parallelism up to 16 processors
and data and task parallelism from 16 on. The pure task parallel speed-ups would flatten
from 16 processors on, because for this type of application it is better to first apply
task parallelism and then to add data parallelism. Using both data and task parallelism is
more efficient than using only data parallelism.
Fig. 8. Speed-up for the data-parallel approach (versus the number of processors).
Fig. 9. Speed-up for the data and task parallel approach (versus the number of processors).
6 Conclusions
We have presented an environment for data and task parallel image processing. The data
parallel framework, based on algorithmic skeletons, is easy to use for any image processing
user. The task parallel environment is based on the Image Application Task Graph and
on computing the IATG communication and processing costs. If the IATG is a regular, well
balanced graph, task parallelism can be applied without the need of these computations.
We showed an example of using skeletons and the task parallel framework for the multi-baseline
stereo vision application. Multi-baseline stereo vision is an example of an image
processing application which contains parallel tasks, each task being a very simple
image point or neighborhood operator. Using both data and task parallelism is more efficient
than using only data parallelism. Our code for the data and task parallel environment,
using C and the MPI-Panda library [19, 20], can be easily ported to other parallel
machines.
--R
Parallel Algorithms for Digital Image Processing
Parallel Programming
"Algorithmic skeletons: structured management of parallel computations"
Skeletons for structured parallel composition
A multiple-baseline stereo
A framework for exploiting task and data parallelism on distributed memory multicomputers
Optimal use of mixed task and data parallelism for pipelined computations
CPR: Mixed Task and Data Parallel Scheduling for Distributed Systems
A new model for integrated nested task and data parallel programming
Fortran M: A language for modular parallel programming
A task and data parallel programming language based on shared objects
Bounds on multiprocessing timing anomalies
Implementation and Performance of Fast Parallel Multi-Baseline Stereo Vision
The Distributed ASCI supercomputer (DAS) site
User's guide for snopt 5.3: A fortran package for large-scale nonlinear programming
Lucent Technologies AMPL site
"MPI - The Complete Reference, vol.1, The MPI Core"
Experience with a portability layer for implementing parallel programming systems
--TR
Algorithmic skeletons
Parallel algorithms
Fortran M
A new model for integrated nested task and data parallel programming
A Framework for Exploiting Task and Data Parallelism on Distributed Memory Multicomputers
A task- and data-parallel programming language based on shared objects
Optimal use of mixed task and data parallelism for pipelined computations
MPI-The Complete Reference
A Multiple-Baseline Stereo
CPR
--CTR
Development platform for parallel image processing, Proceedings of the 6th WSEAS International Conference on Signal, Speech and Image Processing, p.31-36, September 22-24, 2006, Lisbon, Portugal
Antonio Plaza , David Valencia , Javier Plaza , Pablo Martinez, Commodity cluster-based parallel processing of hyperspectral imagery, Journal of Parallel and Distributed Computing, v.66 n.3, p.345-358, March 2006
Frank J. Seinstra , Dennis Koelma , Andrew D. Bagdanov, Finite State Machine-Based Optimization of Data Parallel Regular Domain Problems Applied in Low-Level Image Processing, IEEE Transactions on Parallel and Distributed Systems, v.15 n.10, p.865-877, October 2004 | data parallelism;skeletons;image processing;task parallelism |
611436 | Approaches to zerotree image and video coding on MIMD architectures. | The wavelet transform is more and more widely used in image and video compression. One of the best known algorithms in image compression is the set partitioning in hierarchical trees algorithm which involves the wavelet transform. As today the parallelisation of the wavelet transform is sufficiently investigated, this work deals with the parallelisation of the compression algorithm itself as a next step. Two competitive approaches are presented: one is a direct parallelisation and the other uses an altered algorithm which suits better to the parallel architecture. | Introduction
Image and video coding methods that use wavelet transforms have been successful in providing
high rates of compression while maintaining good image quality and have generated much interest
in the scientific community as competitors to DCT based compression schemes in the context of
the MPEG-4 and JPEG2000 standardisation processes.
Most video compression algorithms rely on 2-D based schemes employing motion compensation
techniques. On the other hand, rate-distortion efficient 3-D algorithms exist which are able to
capture temporal redundancies in a more natural way [10, 5, 4, 16, 1]. Unfortunately, these
3-D algorithms often show prohibitive computational and memory demands (especially for real-time
applications). Therefore, MIMD architectures seem to be an interesting choice for such an
algorithm.
A significant amount of work has already been done on parallel wavelet transform algorithms
for all sorts of high performance computers. We find various kinds of suggestions for 1-D, 2-D
and 3-D algorithms on MIMD computers for decomposition only [8, 19, 15, 13, 3, 6] as well as in
connection with image compression schemes [9, 2].
Zero-tree based coding algorithms efficiently encode approximations of wavelet coefficients by
encoding collections of negligible (insignificant) coefficients through single symbols. These collections
are called zero-trees because of the tree-like arrangement of wavelet coefficients which exploits
the wavelet transform's self-similarity property.
A parallelisation of the EZW algorithm, an important zero-tree coding scheme, is presented
in [2], where two approaches are proposed: One is a straightforward parallelisation which performs
the EZW algorithm locally on each processing element (PE) for distinct blocks. This makes the
resulting bit-stream (BS) incompatible with the sequential algorithm. The other approach reserves
one PE for the collection of the symbols that have to be encoded. This PE performs a reordering
of the symbols before it encodes them. This approach is similar to the approaches presented in
this work and (as far as the author understands) is compatible with the sequential EZW.
An improvement of the EZW is the SPIHT algorithm (Set Partitioning In Hierarchical Trees
[14]). It is a well known, fast and efficient algorithm which can also be used as a 3-D variant in video
compression [5]. However, this algorithm makes use of lists of coefficients, which makes it hard
to parallelise. This means that although consecutive list entries initially point to neighbouring
wavelet transform coefficients, the algorithm jumbles the entries after a while. Thus, there is no
easy data driven parallelisation.
An approach to packetise the SPIHT algorithm is presented in [17]. This is similar to the
first approach in [2]. The output of several executions of SPIHT on spacial blocks is multiplexed
based on a rate allocation technique that approximates distortion reduction by fast SPIHT-specific
statistics. However, the resulting bit-stream (BS) is not compatible with sequential SPIHT.
This work concentrates on the parallelisation of the SPIHT algorithm and presents two competitive
approaches. The first is a direct parallelisation, i.e. the sequential algorithm is mapped
to the parallel architecture without alteration of the sequential algorithm. Several "tricks" have
to be found to overcome the parallelisation difficulties involved. The second approach introduces a
variant of SPIHT that involves a more spacially oriented coefficient scan order and, thus, avoids
the problems of the first approach. Similar algorithms are proposed in [18, 11, 12]. Although a
breadth-first scan order through the trees of coefficients [18] is PSNR-optimal, a depth-first scan
order [11, 12] is preferred here because of better spacial separability.
1.1 Parallel Wavelet Transform
The fast wavelet transform can be efficiently implemented by a pair of appropriately designed
Quadrature Mirror Filters (QMF) consisting of a low-pass and a high-pass filter which decompose
the original data set into two frequency-bands. These sub-bands are down-sampled by 2 and the
same procedure is recursively applied to the coarse scale (low-pass filtered) sub-band. In the 2-D
case consecutive filtering of rows and columns produces four sub-bands (eight in the 3-D case).
Only the one which is filtered with the low-pass filter in each dimension is decomposed further.
This is called the pyramidal wavelet transform.
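A minimal C sketch of one 1-D decomposition step (filter and downsample by 2) is given below; the 2-tap Haar-like filter pair used here is only an example (practical codecs use longer biorthogonal QMF pairs), and border handling is periodic for simplicity.

#include <math.h>

/* One decomposition step: an input of even length n is split into n/2
   approximation (low-pass) and n/2 detail (high-pass) coefficients. */
void dwt_step(const double *in, int n, double *approx, double *detail) {
    const double lo[2] = { 1.0 / sqrt(2.0),  1.0 / sqrt(2.0) };
    const double hi[2] = { 1.0 / sqrt(2.0), -1.0 / sqrt(2.0) };
    int k, t;

    for (k = 0; k < n / 2; k++) {
        double a = 0.0, d = 0.0;
        for (t = 0; t < 2; t++) {
            int idx = (2 * k + t) % n;   /* periodic border           */
            a += lo[t] * in[idx];
            d += hi[t] * in[idx];
        }
        approx[k] = a;                   /* decomposed further        */
        detail[k] = d;                   /* kept as a detail sub-band */
    }
}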
To perform the wavelet transform in parallel, the data has to be distributed among the PEs in
some way. In this work data is split into slices (in the time domain). The filtering is performed
in parallel on local data. Border data has to be exchanged before each decomposition step between
neighbouring PEs due to the filter length. After that, transformed data is found distributed as
shown in Figure 1.
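The border exchange can be expressed with a simple C/MPI sketch like the one below; the halo width (number of border frames, derived from the filter length), the flat frame layout and the ghost buffers are assumptions made for illustration.

#include <mpi.h>

/* Exchange 'halo' border frames of size fsize with the left and right
   neighbours along the time axis. 'frames' points to nlocal local frames. */
void exchange_borders(double *frames, int nlocal, int fsize, int halo,
                      MPI_Comm comm, double *left_ghost, double *right_ghost)
{
    int rank, size, count = halo * fsize, left, right;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    /* Send our first frames to the left neighbour, receive its last frames. */
    MPI_Sendrecv(frames, count, MPI_DOUBLE, left, 0,
                 right_ghost, count, MPI_DOUBLE, right, 0,
                 comm, MPI_STATUS_IGNORE);
    /* Send our last frames to the right neighbour, receive its first frames. */
    MPI_Sendrecv(frames + (nlocal - halo) * fsize, count, MPI_DOUBLE, right, 1,
                 left_ghost, count, MPI_DOUBLE, left, 1,
                 comm, MPI_STATUS_IGNORE);
}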
In contrast to the parallelisation of the wavelet transform as presented in a previous paper[7],
the parallel wavelet transform used here dispenses with video data distribution as well as collection
of transformed data. Initial data distribution is not necessary because input is performed in parallel
(i.e. each PE reads its own part of the video data). Note that the speedups reported in this work
do not include I/O operations as I/O is not viewed as a part of the algorithm. Another advantage
of this is that we can drop the host-node paradigm because there is no extra single PE responsible
for data distribution. Likewise, the collection of transformed data is not necessary because data
is passed on to the coding part of the algorithm which is also performed in parallel. There is no
redistribution of data required as we will see in section 2.2 and 3.2.
Figure 1: Distribution of coefficients or list entries after parallel wavelet transform, (a) 2-D case, (b) 3-D case. Different colours indicate different PEs.
1.2 Zero-Trees
Zero-tree based algorithms arrange the coefficients of a wavelet transform in a tree-like manner
(as in Figure 2), i.e. each coefficient has a certain number of child coefficients in another sub-band
(mostly 4 in the 2-D, 8 in the 3-D case). We will use the following notations:
o(p) The direct offspring of a coefficient p, i.e. all coefficients whose parent coefficient is p.
desc(p) All descendants of a coefficient p. This includes o(p), o(o(p)) and so on.
parent(p) The parent coefficient of p, i.e. p is in o(parent(p)).
Furthermore, a zero-tree is a sub-tree which entirely consists of insignificant coefficients. The
significance of a coefficient is relative to a threshold, which plays an important role in the SPIHT
algorithm: a coefficient p is called significant if |p| >= threshold, and insignificant otherwise.
The statistical properties of transformed image or video data (self-similarity) ensure the existence
of many zero-trees. Sets of insignificant coefficients can be encoded efficiently with the help of zero-trees.
We will see that sometimes the root coefficient of the subtree (or even its direct offspring)
does not have to be insignificant.
Zero-trees can be viewed as a collection of coefficients with approximately equal spacial position.
While this fact implies that the coefficients' significances are statistically related, which is exploited
by the SPIHT algorithm, it also means that zero-trees are local objects, corresponding to the
data distribution produced by the parallel wavelet transform (see Figure 2). This can be exploited
by the parallelisation of the zero-tree algorithms (see Sections 2.2 and 3.2).
2 Parallelisation without Algorithm Alteration
2.1 The SPIHT Algorithm
Although the SPIHT algorithm is sufficiently explained in the original paper [14], it is helpful in
this context to reformulate the algorithm.
Figure 2: After parallel decomposition, data is distributed in a way so that each zero-tree resides on a single PE.
threshold <- initial threshold (greater than all coefficients)
fill LIS, LIP with approximation subband
set LSP empty
for each refinement step
  threshold <- threshold / 2
  process LIS
  process LIP
  process LSP
(a) Pseudo code
(b) Data flow graph [the coefficients feed the processing of the LIS, LIP and LSP; processing the LIS and LIP emits significance and sign bits, processing the LSP emits refinement bits, all written to the bit-stream BS]
Figure 3: The SPIHT algorithm
Significance information is represented by three lists:
LIS List of insignificant sets of pixels. An entry in this list can be of two types:
  Type A
  Type B
LIP List of insignificant pixels
LSP List of significant pixels
The LIS basically contains all zero-tree roots. An entry of type A corresponds to an insignificant
sub-tree without its root. An entry of type B corresponds to an insignificant sub-tree without its
root and the root's direct offspring. The LIP contains all insignificant coefficients that are not part
of any zero-tree in the LIS. The LSP contains all significant coefficients.
Figure 4: Functionality of separators. Four states of the three lists (LIS, LIP, LSP) and the bit-stream (BS) while processing the LIS: at initialisation, before reading a separator, after processing a separator, and at the end.
The algorithm is shown at a coarse level in Figure 3. Initially, the threshold is greater than all
coefficients. Thus, the LIS and the LIP are filled with the approximation sub-band's coefficients,
and the LSP is empty. After that, each entry of each list has to be tested for a change of significance,
and the result of the test has to be encoded as a bit in the bit-stream (BS). If, for instance, a type
A entry of the LIS turns out not to be insignificant any more (to be precise: its descendants),
a bit has to be written into the bit-stream, the entry has to be deleted from the LIS and
inserted as a type B entry at the end of the LIS, and its direct offspring has to be inserted at the
end of the LIP. All entries inserted at the end of a list are also processed in the same refinement
step until no more entries are left. Figure 3(b) shows a data flow graph for a refinement step.
The decoding process performs the same algorithm. It does not, however, evaluate the significance
of the list entries, but simply reads this information from the bit-stream and approximates
the value of the corresponding coefficient as well as it can.
2.2 SPIHT Parallelisation
When parallelising the SPIHT algorithm, we have to face the problem that it uses lists of coefficient
positions and is, therefore, inherently sequential. The reason is that it is hard to perform general
list operations on distributed lists. Nevertheless, the set of distinct list operations involved in the
SPIHT algorithm is limited. This enables us to develop an efficient way to manage distributed
lists of coefficients.
2.2.1 Separators
The basic operations of the algorithm are: moving an iterator all through a list, deleting elements
at the iterator position and appending elements at the end of a list. The aim is to distribute the list so
that each PE-local entry corresponds to a local coefficient, where coefficients are distributed among
the PEs as shown in Figure 1.
This is a simple task for the initial distribution. However, as coefficients are appended to the end
of lists, one has to provide a mechanism to indicate which parts of a list belong to which PE, or,
from a PE's view, where a sequence of local coefficients ends and parts of another PE's list should
be inserted. This work is done by separators (see Figure 4).
The idea is to insert a separator at the end of each part of the list which entirely belongs to
a single PE. Initially, the approximation sub-band is split into equal slices. Each slice is assigned
to a single PE. On each PE, the (local) lists LIS and LIP are filled with all coefficients from the
PE's (local) slice, and a separator is appended to the end of each list. From here on, the sequential
algorithm is performed locally with one exception: each time the iterator meets a separator, the
separator is copied to the end of each destination list. A destination list is a list into which
entries are potentially inserted during the current list processing (see Figure 3(b)). Applying this
principle, the lists L_i on PE i are split by separators into parts L_ij such that the assembled list
(the concatenation of the parts L_ij over all PEs, in order) is identical to the list the sequential
algorithm would produce. The same
is true for the bit-stream. This enables the parallel algorithm to assemble the bit-stream correctly
after each PE has encoded its part of the wavelet coefficients.
An important question is when the processing of a list (i.e. a refinement step) is completed.
Essentially, the procedure can stop if it has processed the last non-separator entry in the list.
Unfortunately, this does not guarantee that each PE produces the same number of separators.
However, this is a necessary condition for the correctness of the parallel algorithm because, otherwise,
the correct order of the list-parts would be lost. Therefore, the global maximum number of
separators has to be calculated (which unfortunately synchronises the PEs), and the lists have to
be filled up with separators before the algorithm continues with the next refinement step (in fact
even before each processing of a list).
As a matter of fact, the number of separators grows exponentially with the number of refinement
steps, and very often separators appear in a row together in the list. To avoid unnecessary memory
demands, consecutive separators should be kept together in a single entry associated with a counter.
This means, if a separator entry (containing a counter) is inserted at the end of a list where a
separator entry (also containing a counter) is already sitting, their counters can simply be added.
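A possible C representation of such list entries is sketched below, where a dedicated separator entry carries a counter and consecutive separators are merged on append; the entry layout and names are assumptions for illustration, not the authors' actual data structure.

#include <stdlib.h>

typedef enum { ENTRY_TYPE_A, ENTRY_TYPE_B, ENTRY_PIXEL, ENTRY_SEPARATOR } EntryKind;

typedef struct Entry {
    EntryKind kind;
    int x, y, z;           /* coefficient position (unused for separators) */
    int count;             /* number of consecutive separators represented */
    struct Entry *next;
} Entry;

typedef struct { Entry *head, *tail; } List;

void append_separator(List *l, int count) {
    Entry *e;
    if (l->tail && l->tail->kind == ENTRY_SEPARATOR) {
        l->tail->count += count;     /* merge with the trailing separator */
        return;
    }
    e = calloc(1, sizeof *e);
    e->kind  = ENTRY_SEPARATOR;
    e->count = count;
    if (l->tail) l->tail->next = e; else l->head = e;
    l->tail = e;
}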
2.2.2 Algorithm Termination
Another problem is the termination of the whole algorithm. In the sequential case, the process
terminates when the required number of bits has been written to the bit-stream. In the parallel
case, this test (as it is a global test) can only be executed at the end of a refinement step. Thus,
the parallel algorithm potentially generates too many bits, which of course decreases the speedup
in inconvenient cases. If necessary, superfluous bits can simply be cut after assembling the
bit-stream (due to the nature of the SPIHT algorithm).
The procedure of assembling the bit-stream (after collecting the PE-local bit-streams) is the
only sequential part of the algorithm. Unfortunately, it gets more complicated and, therefore, consumes
more calculation time when the number of PEs is increased. The result is a significant
decrease in speedup.
2.2.3 Parallel SPIHT Decompression
Note that for the reverse algorithm, the reconstruction of video data from the bit-stream, the
methods described above are not applicable. Although it is not part of this work, we can shortly
outline the ideas of how to implement a parallel SPIHT decoder: First of all, the whole bit-stream
has to be copied to all PEs. Also, each PE has to process all bits of the bit-stream, independent
of whether they belong to local coefficients. Therefore, the global lists have to be kept at each PE.
The only speedup potentials are:
Not adjusting non-local coefficients, which involves floating point operations.
Figure 5: Predicates used in the algorithm: A(p) holds iff |p| >= threshold; B(p) holds iff some coefficient in desc(p) is significant; C(p) holds iff some coefficient in desc(p) excluding o(p) is significant.
Keeping consecutive non-local list entries (entries belonging to non-local coefficients) together
in a single entry associated with a counter (similar to separators). This is possible because
position information is not needed for non-local entries.
Although this seems to be a very simple approach, it does not imply the necessity of process
synchronisation and it does not contain a sequential part.
3 Parallelisation with Algorithm Alteration
The approach described above reveals some drawbacks, e.g. complicated bit-stream handling,
additional communication needs and non-negligible sequential code parts. This is a direct consequence
of the fact that the SPIHT algorithm is inherently sequential.
Therefore, we will modify the sequential algorithm itself. Although the resulting bit-stream
will not be compatible with SPIHT, the parallelisation of the altered algorithm will, of course,
be compatible with its sequential version. The basic idea is to substitute the lists of coefficient
positions involved in the algorithm by bitmaps indicating the membership of each coefficient to a
certain list. As a result, list iteration, which is used frequently to process the list entries, is turned
into a normal scan of coefficients that follows a certain spacial direction. Thus, the data driven
parallelisation can be performed more easily by a loop parallelisation of the coefficient scan.
3.1 Zero-Tree Compression with Significance Maps (SM)
In the following, we will use three logical predicates A(p), B(p) and C(p) which are defined as
in Figure 5. A(p) simply denotes the significance of the coefficient p. B(p) is true if and only if
at least one of p's descendants is significant, while C(p) denotes the same but does not include
the direct offspring of p. This is visualised in Figure 5. The state of significance of a given set
of coefficients can be described by these predicates in terms of zero-trees. Corresponding to these
predicates, we will use the mappings a, b and c which essentially represent the same as A, B and C.
The difference is that A, B and C immediately change their values if the threshold is changed and
are, therefore, implemented as a function/procedure in the used programming language. a, b and
c have to be updated explicitly and are, therefore, implemented as arrays of Boolean values. We
will call a, b and c "significance maps" (SM). They substitute the lists of coefficients (see section
2.1).
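A C sketch of how the predicates and significance maps could be represented is given below; the coefficient storage, the tree traversal behind B and C, and the names are assumptions made for illustration only.

#include <stdbool.h>
#include <math.h>

/* Wavelet coefficients and one Boolean map per predicate. */
typedef struct {
    double *coeff;        /* all wavelet coefficients           */
    bool   *a, *b, *c;    /* significance maps for A, B and C   */
    int     n;
} SigMaps;

/* A(p): the coefficient itself is significant w.r.t. the threshold. */
bool pred_A(const SigMaps *m, int p, double threshold) {
    return fabs(m->coeff[p]) >= threshold;
}

/* B(p) and C(p) would test the descendants of p (for C, excluding the
   direct offspring o(p)); the tree traversal is omitted in this sketch. */

void sigmaps_reset(SigMaps *m) {
    int i;
    for (i = 0; i < m->n; i++) m->a[i] = m->b[i] = m->c[i] = false;
}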
The algorithm is responsible for the equality of a, b, c and A, B, C respectively while the
[Figure 6: SM based zero-tree coding algorithm (pseudocode). ProcessAll initialises the threshold, sets a, b and c to all false, and for each refinement step halves the threshold and calls ProcessCoeff(p) for every p in the approximation sub-band. ProcessCoeff(p) refines p if a_p is already true and otherwise evaluates a_p := A(p); it evaluates b_p := B(p) and c_p := C(p) only when necessary and recurses to the offspring o(p) if b_p is true.]
threshold is successively decreased by a factor of 1/2. This should be done by avoiding the evaluation
of A, B and C as far as possible because, following the idea of the SPIHT algorithm, the result
of each evaluation will be encoded into the bit-stream as one bit to allow the decoder to reproduce
the decisions the encoder has made.
The algorithm that obeys these rules is shown in Figure 6. The outer loop is the refinement
loop which divides the threshold by 2 in each iteration. This is exactly the same as in the original
algorithm. Within this loop, the algorithm navigates through the set of coefficients along
trees of coefficients in a depth-first manner (this is the major difference between the SM based
algorithm and SPIHT), starting at the set of coefficients contained in the approximation sub-band.
This is accomplished by the recursive procedure "ProcessCoeff".
For each coefficient p, the state of a, b and c is checked one after another: a_p <- A(p) has to
be evaluated only if a_p is false, because the transition true -> false is not possible for a_p. If A(p) is
true then the sign of p has to be encoded as well. If a_p is already true, the procedure "Refine" is
called, which en/decodes another bit of the coefficient's value to refine the decoded approximation.
b_p <- B(p) has to be evaluated only if, again, b_p is false. c_p <- C(p) has to be evaluated only
if c_p is false and b_p is true, because C(p) implies B(p), so c_p must be false whenever b_p is false.
At last, the recursion to the child coefficients only has to be performed if b_p is true
(for an obvious reason).
The decoding algorithm looks exactly the same. The only difference is that instead of encoding
the results of the evaluation of A, B and C, this information is read from the bit-stream. Together
with the sign and refinement bits, this is enough information to enable the decoder to perform
the same steps as the encoder and approximate the coefficients with an error below the threshold.
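The depth-first scan just described can be sketched in C as follows; the helpers pred_B, pred_C, offspring, emit_bit and refine stand for operations explained in the text and are hypothetical names, not part of the original implementation.

#include <stdbool.h>
#include <math.h>

/* Assumed helpers (declarations only): */
bool pred_B(int p, double threshold);    /* any descendant significant          */
bool pred_C(int p, double threshold);    /* as B, but excluding direct offspring */
int  offspring(int p, int children[8]);  /* returns the number of children       */
void emit_bit(int bit);                  /* write one bit to the bit-stream      */
void refine(int p, double threshold);    /* en/decode one refinement bit         */

extern double *coeff;                    /* wavelet coefficients                 */
extern bool *a, *b, *c;                  /* significance maps                    */

void process_coeff(int p, double threshold) {
    if (a[p]) {
        refine(p, threshold);
    } else {
        a[p] = fabs(coeff[p]) >= threshold;    /* evaluate and encode A(p) */
        emit_bit(a[p]);
        if (a[p]) emit_bit(coeff[p] < 0.0);    /* sign bit                 */
    }
    if (!b[p])          { b[p] = pred_B(p, threshold); emit_bit(b[p]); }
    if (b[p] && !c[p])  { c[p] = pred_C(p, threshold); emit_bit(c[p]); }
    if (b[p]) {                                /* recurse only if needed   */
        int ch[8], nch = offspring(p, ch), i;
        for (i = 0; i < nch; i++) process_coeff(ch[i], threshold);
    }
}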
Note that this algorithm encodes the same information (in fact the same bits) as the SPIHT
algorithm. The order of the bits is the only difference. This means that at the end of each
refinement step, the compression performance is equal to that of SPIHT. In between, the order of
the bits that are written to the bit-stream is crucial because the bits can have different effects on the
decoded image. Thus, it is important to encode bits with greater effect first. Figure 7 shows the
comparison of the PSNR performance for the well-known "Lena" image (2-D case). The algorithm
shows major drawbacks with respect to the original SPIHT which, nevertheless, can almost be
overcome by scanning the set of coefficients in several passes. The first pass should only process
those coefficients that are not part of zero-trees but not significant. Subsequent passes check the
state of zero-tree roots and process those coefficients that emerge from decomposed zero-trees. This
method is denoted "sophisticated" in Figure 7. Nevertheless, this improvement of the algorithm is
not used in the parallelisation investigations of this work.
Figure 7: PSNR (versus bpp) of the SM based algorithm, simple and sophisticated variants, compared to the original SPIHT.
3.2 Parallelisation
In contrast to the original SPIHT algorithm, the parallelisation of the SM based algorithm is easy.
Again, it is based on the fact that after the parallel wavelet transform, data is distributed in a way
so that zero-trees are local objects (see Figure 2).
All we have to do is to parallelise the inner loop in the procedure ProcessAll (which reads "for p
in approximation-subband") according to the data distribution of the approximation sub-band (see
Figure 1). All other computations within a refinement step are localised, i.e. PE-local computations
do not depend on data of neighbouring PEs. So, no communication is required within a refinement
step.
Each PE produces one continuous part of the bit-stream for each refinement step. At the end,
these parts have to be collected by a single PE and assembled properly (i.e. in an alternating
way). As in the direct SPIHT parallelisation, this is a major bottleneck. However, the number of
bit-stream parts is reduced significantly, which should speed up the bit-stream assembly.
Because of the termination problem (see section 2.2) the PEs again have to synchronise at the
end of each refinement step to determine if the global number of bits produced so far is sufficient.
However, this synchronisation can be dropped if the termination condition is not the bit-stream
size but a fixed number of refinement steps.
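A minimal C/MPI sketch of collecting the PE-local bit-stream parts of one refinement step on PE 0 is shown below; the byte-oriented buffer representation is an assumption, and rank 0 is then free to concatenate the parts in rank order for this step.

#include <mpi.h>
#include <stdlib.h>

/* Collect the local bit-stream part (local_len bytes) of every PE on rank 0.
   Returns the concatenated buffer on rank 0, NULL on the other ranks. */
unsigned char *gather_step(const unsigned char *local, int local_len,
                           int *total_len, MPI_Comm comm)
{
    int rank, size, i;
    int *lens = NULL, *displs = NULL;
    unsigned char *all = NULL;

    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == 0) lens = malloc(size * sizeof(int));
    MPI_Gather(&local_len, 1, MPI_INT, lens, 1, MPI_INT, 0, comm);

    if (rank == 0) {
        displs = malloc(size * sizeof(int));
        *total_len = 0;
        for (i = 0; i < size; i++) { displs[i] = *total_len; *total_len += lens[i]; }
        all = malloc(*total_len);
    }
    MPI_Gatherv((void *)local, local_len, MPI_UNSIGNED_CHAR,
                all, lens, displs, MPI_UNSIGNED_CHAR, 0, comm);
    if (rank == 0) { free(lens); free(displs); }
    return all;
}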
4 Experimental Results
Experiments were conducted on a Cray T3E-900/LC at the Edinburgh Parallel Computing
Centre using MPI. Video data size is always 864 frames with 88 by 72 pixels. The video sequence
used here is the U-part of "grandma". The wavelet transform is performed up to a level of 3.
Note that the data size is limited by memory constraints in the case when the number of PEs
is 1 and a single PE has to hold all video data. The number of frames has to be high to enable
uniform data distribution for parallelisation as well as down-scaling for the wavelet transform.
Figure 8: Sequential speedup of the SM based algorithm with respect to SPIHT (overall and coding, versus output bpp).
Thus, the frame size is small. In a real-world scenario, however, the frame size can be bigger. For
the same reason, frame size scalability is difficult to measure. Nevertheless, linear scalability is
assumed due to the authors' experience and the fact that the execution time of the coding part
does not depend on the video data size but on the number of output bits only.
First of all, we have to look at the sequential performance of the SM based algorithm because if
it was slower than the original SPIHT, its parallelisation would not make sense. However, Figure
8 shows that it outperforms the original SPIHT especially for higher bit-rates. On the other hand,
this means that it is even harder to get reasonable speedups.
Figure 9 shows speedups for a fixed compression rate of 0.14 bpp (bits per pixel, pixels in different
frames are counted as different pixels). The fact that the speedup curves are not smooth, i.e. have
discontinuities at certain processor counts (e.g. 54), is caused by the divisibility of 864, the length of the
video sequence, which determines the size of the local data sub-sets. Note that due to the depth
of the wavelet transform (3), this size is divided by 8 and the resulting number is then divided by
the number of PEs, which is not always possible without remainder.
The sequential bit-stream assembly takes more and more execution time for higher numbers of
PEs. Its share in execution time gets higher than 50% of the coding part. Note at this point that
in a particular hardware implementation, the bit-stream assembly can be integrated in the output
module and separated from the actual coding.
The speedups of the two different algorithms are about the same. This shows that the complicated
bit-stream assembly, which is the main problem of a direct SPIHT parallelisation, could
be solved efficiently. Some dominant problems, such as PE synchronisation and sequential code
parts, are present in both approaches. However, the SM based algorithm is expected to gain a
lower parallelisation efficiency for two reasons:
It has the same communication overhead while the sequential algorithm is faster.
There is less potential for positive caching effects in the parallelisation because optimal cache
utilisation (due to the spacially oriented coefficient scan) is supposed to be the main reason
that the SM based algorithm is faster.
Figure 9: Speedups for varying #PE and fixed compression rate (0.14 bpp): (a) SPIHT, (b) SM based algorithm (overall and coding speedups).
The fact that the parallelisation efficiency is about the same strongly suggests that the bit-stream
assembly in the parallel SM based algorithm is more efficient.
Figure 10 shows speedup curves for fixed #PE and varying compression rate. Of course, the
execution time of the wavelet decomposition does not depend on the compression rate. The reason
for the speedup breakdowns at certain compression rates is the termination problem of the parallel
algorithm (see section 2.2.2). The parallel algorithm is optimal only at compression rates achieved
at the end of a refinement step.
Note that although the speedup of the coding part increases with the bit-rate, the overall
speedup remains constant or drops slightly because the share in execution time of the coding part
increases with the bit-rate.
The problem of unevenly distributed complexity is illustrated in Figure 11. Here, approximately
the first half of the video sequence is substituted by the "car-phone" sequence, which
contains much more motion than the "grandma" sequence. This causes bigger coefficient values at
higher frequency sub-bands for the more complex video parts. Thus, more coefficients have to be
processed within a refinement step, which makes the algorithm consume more computation time.
A load balancing problem is the consequence. One can clearly see that the necessity of process
synchronisation at several points in the algorithm leads to an increase of idle times of PEs waiting
for other PEs. Figure 12 shows the speedups, which are slightly reduced compared to Figure 9
because of this problem.
5 Conclusions
We have seen how an inherently sequential zero-tree coding algorithm can be parallelised. Although
the speedups are not overwhelming, the presented way of parallelisation removes the necessity of
performing the coding sequentially. Thus, reasonable speedups are possible for higher numbers of
processing elements and the whole range of compression rates.
There are two methods for parallelisation: either by the use of so-called separators or by
Figure 10: Speedup for decomposition, coding and overall speedup for varying compression rate (output bpp): (a) SPIHT, 8 PEs; (b) SM based, 8 PEs; further panels show SPIHT and the SM based algorithm for larger numbers of PEs.
Figure 11: Execution scheme of decomposition and SPIHT coding for 10 PEs, (a) evenly distributed complexity, (b) unevenly distributed complexity. Time (on the horizontal axis) is measured in milliseconds. (Nearly) vertical black lines indicate data transfer; horizontal grey bars indicate calculation phases.
Figure 12: Speedups for video with unevenly distributed complexity (at fixed compression rate 0.14 bpp).
rewriting the algorithm to fit better into the parallel architecture (significance map based algorithm).
The first method is more complicated but guarantees compatibility with original SPIHT
bit-streams. The second method shows very similar speedup results but better execution times.
Although unevenly distributed image/motion complexity can decrease the speedup potential,
this effect seems to stay within limits.
Acknowledgments
The authors would like to acknowledge the support of the European Commission through TMR
grant number ERB FMGE CT950051 (the TRACS Programme at EPCC). The author was also
supported by the Austrian Science Fund FWF, project no. P13903.
--R
Image coding using parallel implementations of the embedded zerotree wavelet algorithm.
On the scalability of 2D discrete wavelet transform algorithms
An embedded wavelet video coder using three-dimensional set partitioning in hierarchical trees (SPIHT)
Parallel algorithm for the two-dimensional discrete wavelet transform
Hardware and software aspects for 3-D wavelet decomposition on shared memory MIMD computers
Optimization of 3-d wavelet decomposition on multiprocessors
Parallelization of the 2D fast wavelet transform with a space-filling curve image scan
Video compression using 3D wavelet transforms.
Listless zerotree coding for color images.
3d listless zerotree coding for low bit rate video.
Scalability of 2-D wavelet transform algorithms: analytical and experimental results on coarse-grain parallel computers
Vector and parallel implementations of the wavelet transform.
Multirate 3-D subband coding of video
image compression without lists.
Parallel discrete wavelet transform on the Paragon MIMD machine.
--TR
On the Scalability of 2-D Discrete Wavelet Transform Algorithms
Hardware and Software Aspects for 3-D Wavelet Decomposition on Shared Memory MIMD Computers
An Embedded Wavelet Video Coder Using Three-Dimensional Set Partitioning in Hierarchical Trees (SPIHT)
--CTR
Roland Norcen , Andreas Uhl, High performance JPEG 2000 and MPEG-4 VTC on SMPs using OpenMP, Parallel Computing, v.31 n.10-12, p.1082-1098, October - December 2005 | video coding;wavelets;zerotree;MIMD |
611451 | Parallel computation of pseudospectra by fast descent. | The pseudospectrum descent method (PsDM) is proposed, a new parallel method for the computation of pseudospectra. The idea behind the method is to use points from an already existing pseudospectrum level curve to generate in parallel the points of a new level curve such that > . This process can be continued for several steps to approximate several pseudospectrum level curves lying inside the original curve. It is showed via theoretical analysis and experimental evidence that PsDM is embarrassingly parallel, like GRID, and that it adjusts to the geometric characteristics of the pseudospectrum; in particular it captures disconnected components. Results obtained on a parallel system using MPI validate the theoretical analysis and demonstrate interesting load-balancing issues. | a region of the complex plane and ii) compute min (zI A) for every node
z in Omega_h. The boundary curves of Lambda_eps(A) are obtained as the contour plots of the
smallest singular values on the mesh Omega_h. The obvious advantage of GRID is its
straightforward simplicity and robustness. (This work has been partially supported by the Greek
General Secretariat for Research and Development, Project ENE 99-07.) If we let for the moment C_min be
a measure of the average cost for the computation of sigma_min(zI - A), it is easily
seen that the total cost of GRID can be approximated by
C_GRID ~ |Omega_h| C_min,    (2)
where |Omega_h| denotes the number of nodes of Omega_h. The total cost quickly becomes
prohibitive with the increase of either the number of nodes or the size of A.
It has been observed that the cost formula (2) readily indicates two major
methods for accelerating the computation: a) reducing the number of nodes z
and hence the number of evaluations of min , and b) reducing the cost of each
evaluation of min (zI A). Both approaches are the subject of active research;
see [15] for a comprehensive survey of recent efforts.
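For reference, a schematic C version of GRID over a rectangular mesh is given below; sigma_min() stands for a routine computing the smallest singular value of zI - A (e.g. via an SVD or an iterative solver) and is assumed rather than provided here.

#include <complex.h>

/* Assumed external routine: smallest singular value of zI - A. */
double sigma_min(double complex z);

/* Evaluate sigma_min on an nx-by-ny mesh over [xmin,xmax] x [ymin,ymax];
   contour plotting of the stored values then yields the boundaries. */
void grid_method(double xmin, double xmax, double ymin, double ymax,
                 int nx, int ny, double *smin /* nx*ny values */)
{
    int i, j;
    for (j = 0; j < ny; j++) {
        double y = ymin + j * (ymax - ymin) / (ny - 1);
        for (i = 0; i < nx; i++) {
            double x = xmin + i * (xmax - xmin) / (nx - 1);
            smin[j * nx + i] = sigma_min(x + y * I);
        }
    }
}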
The use of path following, a powerful tool in many areas of applied mathe-
matics, in order to compute a single boundary curve @ (A) was suggested by
Kostin in [8]. It was M. Bruhl in [5] who presented an algorithm to that end
and showed that significant savings can be achieved compared to GRID when
seeking a small number of boundary curves. The key is that tracing a single
boundary curve drastically reduces the number of sigma_min evaluations. Bekas and
Gallopoulos carried this work further in Cobra [2], a method that ameliorated
two weaknesses of the original path following method, in particular i) its lack of
large-grain parallelism and ii) its frequent failure near sharp turns or neighboring
curves. In particular, parallelism was introduced by incorporating multiple
corrections. More recently, D. Mezher and B. Philippe suggested
PAT, a new path following method that reliably traces contours [13]. Despite
its advantages over the original method of [5], Cobra maintains two
disadvantages of the path following approach vis-a-vis GRID. These are a) that in each run, a
single boundary curve of Lambda_eps(A) is computed, b) that it is closed. Therefore, only
one curve is computed at a time and its disconnected components are not cap-
tured, at least in a single run. While we can consider the concurrent application
of multiple path following procedures to remedy these problems, the solution is
less straightforward than it sounds, especially for (b). In the remainder of this
paper, when our results do not depend on the exact version of path following
that we choose to use, we denote these methods collectively by PF.
In this paper we propose the Pseudospectrum Descent Method (PsDM), which
takes an approach akin to PF but results in a set of points defining pseudospectrum
boundaries for several values of eps. PsDM starts from an initial
contour of Lambda_eps(A), approximated by N points z_k computed by a PF method.
These points are corrected towards directions of steepest descent, to N points
defining an inner level curve. The process can be repeated recursively, without the need
to reuse PF, and computes several pseudospectrum contours. PsDM can be viewed as a
"dynamic" version of GRID where the stride from mesh point to mesh point has
been replaced by a PF Prediction-Correction step. To illustrate this fact and
thus provide the reader with an immediate feeling of the type of information
obtained using the method, we show in Figure 1 the results from the application
of PsDM to matrix kahan of order 100, which has been obtained from the
Test Matrix Toolbox ([7]) and is a typical example of matrices with interesting
pseudospectra. The plot shows i) the trajectories of the points undergoing the
steepest descent and ii) the corresponding level curves. The intersections are
the actual points computed by PsDM.
Figure 1: Pseudospectrum contours and trajectories of points computed by PsDM; kahan of order 100. Arrows show the directions used in preparing the outermost curve with path following and the directions used in marching from the outer to the inner curves with PsDM. See section 4 for further results with this matrix.
We note that the idea for plotting the pseudospectrum using descent owes
to an original idea of I. Koutis for the parallel computation of eigenvalues using
descent described in [3].
The rest of this paper is organized as follows. Section 2 briefly reviews path
following. Section 3 describes PsDM. Section 4 illustrates the characteristics
of PsDM as well as its parallel performance using an MPI implementation. We
also introduce adaptivity and show that the method can reveal disconnected
components. Section 5 presents our conclusions.
2 Review of Path Following
Consider the function
G(z) = sigma_min(zI - A) - eps.    (3)
According to the definition of the pseudospectrum, the zeros of G(z) are points of
the boundary of Lambda_eps(A). Allgower and Georg in [1] describe a generic procedure
to numerically trace solution curves of equations such as (3). In Table 1 we
outline the algorithm and illustrate one step in Figure 2. See [2, 5] for more
details.
Figure 2: A generic Prediction-Correction PF scheme.
Given a starting point z_0 on the boundary of Lambda_eps(A).
for k = 1, 2, ...
  (* Prediction phase *)
  1.1 Determine a prediction direction p_k.
  1.2 Choose a steplength h and predict the point ~z_k = z_{k-1} + h p_k.
  (* Correction phase *)
  2.1 Determine a correction direction c_k.
  2.2 Correct ~z_k to z_k using Newton iteration.
Table 1: PF generic procedure for the computation of a single contour.
In the sequel, as in [5], we will be identifying the complex plane C with
R^2 and frequently use the notation G(z) for G(x, y). Critical to the effective use
of PF is the availability of the gradient of G(x, y); that this becomes available at
little cost follows from the following important result (cf. [5, 6]):
Theorem 2.1 Let z be in C but not in the spectrum of A, and assume that sigma_min(zI - A) is a simple singular value. Then G(x, y) is real analytic in a neighborhood of z and the gradient of G at z is
grad G(z) = ( Re(v_min^* u_min), Im(v_min^* u_min) ),    (4)
where u_min and v_min denote the left and right singular vectors corresponding to
sigma_min(zI - A).
Therefore, assuming that the minimum singular triplet (sigma_min, u_min, v_min) of
zI - A (in the sequel we will be referring to it simply as "the triplet at z") has
been computed, the gradient is available with one inner product.
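A small C snippet showing how cheaply G and the gradient of formula (4) are obtained once the triplet is known follows; the singular vectors are assumed to be stored as arrays of C double complex numbers.

#include <complex.h>

/* Given the minimum singular triplet (sigma, u, v) of zI - A, return
   G(z) = sigma - eps and the gradient (gx, gy) of formula (4). */
void g_and_gradient(double sigma, double eps,
                    const double complex *u, const double complex *v, int n,
                    double *G, double *gx, double *gy)
{
    double complex ip = 0.0;      /* v^H u: the single inner product */
    int i;
    for (i = 0; i < n; i++)
        ip += conj(v[i]) * u[i];
    *G  = sigma - eps;
    *gx = creal(ip);
    *gy = cimag(ip);
}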
From the above it follows that the dominant effort in one step of the PF
scheme proposed in [5] amounts to i) the estimation of the prediction direction
and ii) the Newton iteration of the correction procedure. Regarding (ii) we
note that it is sufficient (cf. [5, 2]) to use a single Newton step, requiring one
triplet evaluation at ~z_k. Then, as proposed in [5], ~z_k is corrected towards the
direction of steepest descent c_k, i.e.
z_k = ~z_k - (sigma_min(~z_k I - A) - eps) / (u_min^* v_min),    (5)
where the triplet (sigma_min, u_min, v_min) is associated with ~z_k. Regarding (i), note
that selecting p_k to be tangential to the curve at z_{k-1} requires the triplet at
z_{k-1}. Since this would double the cost, it has been shown acceptable to take p_k
orthogonal to the previous correction direction c_{k-1}; cf. [5, 2]. Therefore the
overall cost of a single step of the original method of [5] is approximately equal
to the cost of computing the triplet.
3 The Pseudospectrum Descent Method
Let us now assume that an initial contour of Lambda_eps(A) is available in the form of
some approximation (e.g. piecewise linear) based on N points z_k previously
computed using some version of PF. In order to obtain a new set of points that
define an inner level curve we proceed in two steps:
Step 1: Start from z_k and compute an intermediate point ~w_k by a single modified
Newton step towards a steepest descent direction d_k obtained earlier.
Step 2: Correct ~w_k to w_k using a Newton step along the direction l_k of steepest
descent at ~w_k.
Figure
3 illustrates the basic idea for a single initial point. Applying one Newton
step at z_k would require the gradient of sigma_min at (x_k, y_k) and, therefore, a triplet evaluation
at z_k. To avoid this extra cost, we apply the same idea as PF and use instead
the gradient ( Re(q_min^* g_min), Im(q_min^* g_min) ), which is already available from
the original path following procedure. In essence we approximate the gradient
based at z_k with the gradient based at ~z_k. Vectors g_min, q_min are the right and
left singular vectors associated with sigma_min(~z_k I - A). Applying correction as in
(5) it follows that
~w_k = z_k - (sigma_min(~z_k I - A) - delta) / (q_min^* g_min),    (6)
where delta < eps is the new pseudospectrum boundary we are seeking.
Figure 3: Computing the boundary of Lambda_delta(A), delta < eps.
Once we have computed ~w_k, we perform a second Newton step that yields w_k,
w_k = ~w_k - (sigma_min(~w_k I - A) - delta) / (q_min^* g_min),    (7)
where the triplet used is associated with ~w_k. These steps can be applied to all N
points in what we call one sweep of PsDM; we denote it by PsDM and outline it
in the following Table. Starting from an initial contour @ (A) we have shown
Input: points z_k, k = 1, ..., N, of the initial contour of Lambda_eps(A).
Output: points w_k, k = 1, ..., N, on the target contour of Lambda_delta(A), delta < eps.
for k = 1, ..., N
  1. Compute the intermediate point ~w_k according to (6).
  2. Compute the target point w_k using (7).
Table 2: One sweep of PsDM.
how to compute points that approximate a nearby contour @ - (A); - < .
Assume now that the new points computed with one sweep of PsDM define
satisfactory approximations of the contour for delta (cf. end of section). We ask whether it
would be practical to use these points to march one further step to approximate
another, inner curve.
As noted in the previous discussion, the
application of the sweep PsDM uses r min ((~x k i.e. the triplet at
~
z k . This is readily available when the curve @ (A) is obtained via PF. Observe
now that as the sweep proceeds to compute @ ~ - (A) from @ - (A), it also computes
the triplet at ~
w k . Therefore enough derivative information is available
for the sweep PsDM to proceed one more step with starting points computed
Method PsDM
Input: N points approximating a contour of Lambda_eps(A).
Output: N points approximating each of M inner contours.
for i = 1, ..., M
  Compute the points of the i-th inner contour by one sweep of PsDM on the N points of the previous contour.
Table 3: The PsDM method.
via the previous application of PsDM. Therefore, it is not necessary to run PF
again. Continuing with this repeated application of sweeps of PsDM, we obtain the
promised pseudospectrum descent method, outlined in Table 3 and illustrated
in Figure 4.
It is worth noting that each sweep can be viewed as a map that takes as
input N_in points approximating one level curve and produces N_out points approximating
the next one. In the description so far N_in = N_out, but as we show in
Section 4.4, this is not necessarily an optimal strategy.
We finally note that the two step process described above was found to be
necessary. By contrast, the more straightforward approach, in which we compute the
exact steepest descent directions from each point of the starting curve and use
those values directly to approximate the next set of points in one step, required
a much smaller stepsize to be successful. We did not experiment further with
this procedure.
Cost considerations. It is evident that the cost for the computation of the
intermediate points ~w_k is relatively small, since the derivatives
at ~z_k have already been computed by PF or a previous sweep of
PsDM. Furthermore, we have assumed that sigma_min(z_k I - A) is approximately eps, since
the points z_k approximate the contour for eps. On the other hand, computing the final
points w_k requires N triplet evaluations. To avoid proliferation
of symbols, we use C_min to now denote the average cost for computing the
triplet. Therefore, we approximate the cost of a single sweep of PsDM by
N C_min. The computation of each target point w_k is independent
from the computation of all other target points. On a system with P
processors, we can assign the computation of at most ceil(N/P) target points to
each processor; one sweep will then proceed with no need for synchronization
and communication, and its total cost is approximated by ceil(N/P) C_min.
We discuss this issue in more detail in Section 4.3. We emphasize that typically,
the number of points N on each curve is expected to be large, therefore the
algorithm is scalable. This is better than the PF methods described in the
introduction: Cobra allows only a moderate number of independent calculations
of sigma_min per step, while in the original version of PF, coarse-grain parallelism
becomes available only if we attempt to compute different curves in parallel.
Figure 4: Descent process for a single point.
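A C/MPI sketch of the corresponding static partitioning of one sweep follows: each processor handles a contiguous block of at most ceil(N/P) points; the routine triplet_correct() stands for the two Newton steps (6)-(7) and is assumed to be provided elsewhere.

#include <mpi.h>
#include <complex.h>

/* Assumed routine: apply the two Newton steps (6)-(7) to one point of the
   current curve, descending to the level delta. */
double complex triplet_correct(double complex z, double delta);

/* One parallel sweep: each PE owns a contiguous block of at most
   ceil(N/P) points and corrects them independently; no communication
   is needed during the sweep, and points stay on their PE between sweeps. */
void psdm_sweep(double complex *pts, int N, double delta, MPI_Comm comm)
{
    int rank, P, chunk, lo, hi, k;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &P);

    chunk = (N + P - 1) / P;             /* ceil(N/P) points per PE   */
    lo = rank * chunk;
    hi = (lo + chunk < N) ? lo + chunk : N;

    for (k = lo; k < hi; k++)            /* independent corrections   */
        pts[k] = triplet_correct(pts[k], delta);
}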
Error analysis for a single sweep of PsDM. In order to gauge the quality of
the approximation of the curve obtained in a single sweep of PsDM, it is natural to
use as a measure the difference between the two levels, which we call the "stepsize of the sweep". Let
w be the point approximated starting from a point z on the current level curve.
Define the function G'(w) = sigma_min(wI - A) - delta, where delta is the target level, and let w' be
the point obtained using a single Newton step at ~w, as depicted in Figure 5. G' is
real analytic in a domain D if the minimum singular value
sigma_min(wI - A) is simple for all w in D. Let all five points ~z, z, ~w, w, w' (cf.
Fig. 5) lie in the interior of such a domain. From the relations
z
and ~
z
it follows that w 0 can also be considered as the outcome of two exact Newton
steps originating from ~ z. From the identity (w 0 I
and standard singular value inequalities, it follows that
min (w 0 I
and therefore From results of
Levin and Ben-Israel (see [12]) regarding Newton's method for underdetermined
systems, under standard assumptions there will be a region in which the Newton
iteration used to produce w 0 from ~
w will converge quadratically; we next assume
that we are within this region. Then jw 0 wj 1 jw 0 ~
w y )k); note that 1=krG 0 ( ~
w y )k is at least 1, since G 0 ( ~
Figure 5: The transformation of z to w' via PsDM. This figure can be considered as an enlargement of Figure 3; it reveals that the point computed with a single Newton step in the sweep produces only an approximation of w_k.
is the inner product of singular vectors (cf. Theorem 2.1). Therefore, using
relations (7) and (9) it follows that
wI A). It follows that
if this minimum is not much smaller than 1 and the assumptions made above
hold, then the error induced by one sweep of PsDM will be bounded
by a moderate multiple of the square of the stepsize of the sweep. A full scale
analysis of the global error, for multiple sweeps of PsDM, is the subject of current
work.
4 Refinements and numerical experiments
We conducted experiments with matrices that have been used in the literature
to benchmark pseudospectra algorithms. Our results, presented below, indicate
that PsDM returns the same levels of accuracy as GRID at a fraction of the cost.
We then describe a parallel implementation of PsDM. The speedups obtained underscore
the parallel nature of the algorithm. We also show that the application
of PsDM to large matrices, for which the triplets have to be computed via some
iterative method, is likely to suffer from load imbalance and speedups that are
lower than what one would expect from an embarrassingly parallel algorithm;
we describe a simple heuristic to address this problem.
Finally we discuss some other properties that underline the flexibility of
PsDM. We show in particular that: i) adaptation of the number of points computed
in each sweep of PsDM can lead to significant cost savings, and ii) PsDM
can capture disconnected components of the pseudospectrum lying inside the
initial boundary computed via PF.
4.1 System configuration
We performed our experiments on a SGI Origin 2000 system with 8 MIPS
R10000 processors. The system had a total of 768 MB RAM and 1MB cache
per processor, running IRIX 6.5. The codes were written in Fortran-90 using
F90/77 MIPSpro version 7.2.1 compilers. For the parallelization we used the
MPI programming paradigm, implemented by SGI's MPT 1.2.1.0. We used
ARPACK [11] to approximate the triplets and SPARSKIT [14], suitably modied to
handle double precision complex arithmetic, for sparse matrix{vector multiplies.
All our experiments were conducted in single-user mode.
4.2 Numerical experiments with PsDM
We remind the reader that for matrices with real elements, the pseudospectrum
curves are symmetric with respect to the real axis, and it has become standard
in the pseudospectrum literature to measure the success of methods for pseudospectra
by direct comparison of one half of the figure computed with the
new method and the other half with GRID. Therefore, in subsequent experiments
with any method we only compute points and curves lying on the upper or lower
half of the complex plane. We start with the same matrix used to obtain Figure
1, that is kahan of order 100. We used the parallel version of PF, namely Cobra,
to obtain points approximating (one half of) the pseudospectrum curve
corresponding to ε = 0.1. We then asked PsDM to compute 60 contours corresponding
to decreasing values of ε, using a stepsize that remained constant throughout.
The upper half of Figure 6 illustrates the contours corresponding to selected
values of ε, while the lower half illustrates the same contours computed using
GRID on a 100 × 100 mesh of equidistant points.
We chose this resolution in order to allow GRID to offer a level of detail that is
between the minimum and maximum resolution offered by PsDM. In particular,
the distance between neighboring points of GRID was selected near the median of
the smallest and largest distance of neighboring points computed in the course
of PsDM. The pictures are virtually indistinguishable and indicate that PsDM
achieves an accuracy comparable to GRID. On the other hand, the cost is much
smaller; in particular PsDM approximates the contours using 2700 points while
GRID used 10000 mesh points. It is also worth noting that despite the fact that
GRID does not require the computation of singular vectors, in the context of
iterative methods such as ARPACK the extra cost is not signicant and PsDM is
expected to be far less expensive.
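For comparison, GRID simply evaluates σ_min(zI − A) on a mesh and extracts level curves. The following sketch conveys the idea only; it is not the GRID code used in the experiments, and it relies on a dense SVD and Matplotlib contouring.

    import numpy as np
    import matplotlib.pyplot as plt

    def grid_pseudospectrum(A, x, y, levels):
        # Evaluate sigma_min(z*I - A) on the mesh x-by-y and contour it.
        n = A.shape[0]
        S = np.empty((len(y), len(x)))
        for i, yi in enumerate(y):
            for j, xj in enumerate(x):
                z = xj + 1j * yi
                S[i, j] = np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1]
        plt.contour(x, y, S, levels=sorted(levels))
        plt.show()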
Figure 6: Selected pseudospectrum contours ∂Λ_ε(A); kahan of order 100, computed
by PsDM (top) and GRID (bottom).

In order to further analyze the accuracy of PsDM, we computed the relative
error |σ_min(zI − A) − ε| / ε at each computed point z of the (approximate) curves
produced by the algorithm; from these values, we obtained the maximum
and mean relative errors for each contour and show the results in Figure
7. Note that the best way to read this figure is from left to right, as
this shows how the maximum and mean errors per curve develop as we consider
the curves from the outermost to the innermost. The maximum relative
error appears to be satisfactory except in a few cases where the maximum error
becomes significantly larger; even then, however, the mean error remains two orders
of magnitude smaller. This highlights one observation we made in our experi-
ments, namely that only a very limited number of points suffer from increased
relative error. We also note that there is an apparent increase in the error as we
approach very small values of ε. This does not reflect a weakness of PsDM, but
the difficulty of the underlying SVD method, ARPACK in this case, in computing
approximations of the minimum singular value with very small relative error.
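The accuracy measurement just described amounts to the following computation per contour; the helper below is a hypothetical sketch (dense SVD, names ours), not the code used to produce Figure 7.

    import numpy as np

    def contour_errors(A, points, eps):
        # Maximum and mean relative error |sigma_min(zI - A) - eps| / eps
        # over the computed points of one contour.
        n = A.shape[0]
        errs = [abs(np.linalg.svd(z * np.eye(n) - A, compute_uv=False)[-1] - eps) / eps
                for z in points]
        return max(errs), sum(errs) / len(errs)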
4.3 Parallel performance
One important advantage of PsDM is that, by construction, it is embarrassingly
parallel: Each sweep can be split into a number of tasks equal to the number of
points it handles and each task can proceed independently with its work, most
of it being triplet computations. Furthermore, assuming that no adaptation is
used, no communication is needed between sweeps. Let us assume that this is
the case and that the number of points computed in each sweep and therefore the
number of tasks is constant, say N . It is thus natural to use static partitioning
Figure 7: Maximum and mean relative errors for each curve (60 total) computed
via PsDM for kahan(100).
(*Input*) N points z_k on the starting curve; the number of processors P.
1. For each processor, select and assign the sweep computations for ⌈N/P⌉ points.
2. Each processor proceeds with PsDM on its own set of points.
3. A marked processor gathers the results of all processors.

Table 4: Parallel PsDM with static partitioning.
of tasks, allocating to each of the P processors those tasks handling the sweep for
approximately ⌈N/P⌉ points. Table 4 outlines the method. The next question
is how to allocate tasks to processors. One natural idea is to use "static block
partitioning" and allocate to each processor the computations corresponding to
⌈N/P⌉ consecutive points.
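As an illustration of the block assignment (hypothetical helper, not the paper's Fortran code), the indices handled by a given processor can be computed as:

    def block_partition(num_points, num_procs, rank):
        # Indices of the ceil(N/P) consecutive points assigned to processor
        # `rank` under static block partitioning (illustrative sketch).
        chunk = -(-num_points // num_procs)   # ceil(num_points / num_procs)
        start = rank * chunk
        return list(range(start, min(start + chunk, num_points)))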
We applied PsDM on matrix kahan(100) from Section 4.2, starting from
points on the initial curve ∂Λ_{0.1}(A) and computed a set of inner curves; the
total number of points computed by the end of the run equals the number of
starting points times the number of curves. In these and subsequent performance results we
did not take into account the time taken by Cobra to approximate the initial
curve.
Table
5 depicts the corresponding execution times and speedups. The
improvements are substantial and the speedups reflect the parallel nature of the algorithm.
# of Processors      1     2     4     8
Time (secs)        264   135    70    36

Table 5: Performance of parallel implementation of PsDM for kahan(100).
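Assuming the four columns of Table 5 correspond to 1, 2, 4 and 8 processors (the system described in Section 4.1 has 8 processors), the implied speedups are roughly 264/135 ≈ 2.0, 264/70 ≈ 3.8 and 264/36 ≈ 7.3, that is, close to linear.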
Table 6: Number of points computed by each processor by the time the first
processor finishes its share of the workload under static block partitioning,
when several processors are used to compute the curves for kahan(100).
Numbers in boldface denote the number of points computed by the processor
that finished first.
In order to better understand the effect of the static task allocation
policy, we marked which processor would finish first and then examined how
much work had been accomplished by then in each of the remaining processors.
The results are shown in Table 6, where the numbers in boldface are the numbers of
points that had been completed by the processor that finished first. Line 1, for
instance, shows that processor 0 finished first (the number 600 is in boldface)
while, by that time, processor 1 had accomplished the computation of 591 points.
Since each processor had to deal with 600 points, this meant that
processor 1 still had to accomplish work for 9 points. As each row shows, the
work is reasonably well balanced, thus justifying the good speedups reported in
Table
5.
We next applied PsDM on matrix gre 1107 (1107 × 1107, sparse, real and
unsymmetric) from the Harwell-Boeing collection. The initial curve @ 0:1 (A)
was approximated by 64 points computed by Cobra. We computed 20 pseudospectrum
curves corresponding to decreasing values of ε; therefore each
processor was allocated ⌈64 · 20/P⌉ consecutive points. Rows 3 and 4 of Table
7 depict the times and speedups. Even though computing time is reduced, the
speedups are far inferior to those reported for kahan in Table 5. To explain
this phenomenon, we prepared for gre 1107 a table similar to Table 6. Results
are tabulated in rows 3-5 of Table 8 and reveal severe load imbalance. For
example, notice that when using 2 processors in PsDM, (2nd row of Table 8),
by the time processor 1 has finished with all its allocated points, processor 0
has finished with only 75% of them. Similar patterns hold when using 4 and 8
processors. This load imbalance is due to the varying level of difficulty that an
iterative method, such as ARPACK, has when computing the triplet corresponding
to σ_min(zI − A) as z moves from point to point.
Two ways to resolve this problem are a) a system-level approach to dispatch
# of Processors              1      2      4      8
Static block partitioning
Time (secs)               6000   3750   2500   1390
Speedup                    1.0    1.6    2.4    4.3
Static cyclic partitioning
Time (secs)               6000   3060   1560    840
Speedup                    1.0    2.0    3.8    7.1

Table 7: Performance of parallel implementation of PsDM using static block and
cyclic partitionings for gre 1107.
proc. / proc. id.
Static block partitioning
Static cyclic partitioning
Table
8: As Table 6 for matrix gre 1107 using static block and cyclic partitionings
to compute 20 pseudospectrum curves.
tasks to the processors from a queue as processors are freed, and/or b) a problem-
level approach in which we estimate the work involved in each task and partition
the tasks so as to achieve acceptable load balance. The former approach has the
potential for better load balance, at additional system-level overhead; the latter
approach has little overhead and is simpler to implement, but appears to require
a priori estimates for the workload of each task. Even though this is difficult
to know beforehand when using iterative methods, it helps to note that when
interested in load balancing, what is important is not information regarding the
amount of time taken by each task, but information regarding the work differential
between different tasks. Based on this idea, we tried the heuristic that the
number of iterations required for a triplet based at z is likely to be similar
for neighboring values of z. It is therefore reasonable to allocate points in an
interleaved fashion. This leads to "static cyclic partitioning", in which processor
i is initialized with the points z_i, z_{i+P}, z_{i+2P}, and so on. Therefore, by
interleaving the initial points we aim to shuffle the "difficult" points along the
descent of PsDM and distribute them to all the processors. Rows 6-7 of Table
7 and rows 7-9 of Table 8 depict the speedups and load distribution obtained
from static cyclic partitioning. The improvement in load balance is evident and
leads to much better speedups.
We also experimented with \static block cyclic partitioning" in which points
were partitioned in ⌈N/b⌉ blocks of b consecutive points each and then the blocks
were allocated in a cyclic fashion. We experimented with blocks of size 2 and 4
and found that performance became worse with increasing blocksize and was in
all cases inferior to the static cyclic case.
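The three static policies differ only in how point indices are mapped to processors. The following hypothetical sketch (names ours, a companion to the block_partition sketch above) shows the cyclic and block-cyclic mappings:

    def cyclic_partition(num_points, num_procs, rank):
        # Static cyclic: processor `rank` gets points rank, rank+P, rank+2P, ...
        return list(range(rank, num_points, num_procs))

    def block_cyclic_partition(num_points, num_procs, rank, b):
        # Static block-cyclic: blocks of b consecutive points dealt out cyclically.
        idx = []
        for block_start in range(rank * b, num_points, num_procs * b):
            idx.extend(range(block_start, min(block_start + b, num_points)))
        return idx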
We finally show, in Figure 8, the contours obtained for gre 1107.
Figure 8: gre 1107. ∂Λ_ε(A) contours for the 20 computed values of log₁₀ ε.
4.4 Adapting to decreasing contour lengths
It is known that for a given matrix and varying values of ε, the ε-pseudospectra
form a family of nested sets in the complex plane. Consequently, for any
given ε and smaller δ < ε, the area of Λ_δ(A) is likely to be smaller than the area
of Λ_ε(A); similarly, the length of the boundary is likely to change; it would
typically become smaller, unless there is separation and creation of disconnected
components whose total perimeter exceeds that of the original curve, in which
case it might increase. Let us assume now that any contour is approximated
by the polygonal path defined by the points computed by PsDM. Then we can
readily compute the lengths of the approximating paths. See for example the
lengths corresponding to different values of ε for matrix kahan in Figure 6 and
the lengths of the corresponding polygonal paths in Table 9 (row 2). It is clear,
log₁₀ ε
approx. lengths   4.48  2.63  1.85  1.48  1.29  1.18  1.1
# points            46    29    24    22    22    21    20

Table 9: Approximate contour lengths of the pseudospectra of kahan(100) and
number of points used to approximate each curve following the point reduction
policy described in this section.
in this case, that the polygonal path lengths become smaller as we move inwards.
Nevertheless, in the PsDM algorithm we described so far, the same number of
points (46) was used to approximate both curves.
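The polygonal length used as the adaptation criterion is immediate to compute from the ordered points of a contour; a minimal sketch (function name ours):

    import numpy as np

    def polygonal_length(points):
        # Length of the polygonal path through the ordered contour points,
        # given as complex numbers.
        pts = np.asarray(points)
        return float(np.sum(np.abs(np.diff(pts))))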
It would seem appropriate, then, to monitor any significant length reduction
or increase, and adapt accordingly the number of points computed by PsDM.
In the remainder of this section we examine the case of reduced path lengths,
noting that similar policies could also be adapted if we needed to increase, rather
than reduce, the number of points. The following general scheme describes the
transition from ∂Λ_{δ_m}(A) to ∂Λ_{δ_{m+1}}(A). We denote by N_{δ_m} the number of
points defining the contour ∂Λ_{δ_m}(A).

1. Exclude K points of ∂Λ_{δ_m}(A) according to a curve length criterion.
2. Proceed to compute the next contour ∂Λ_{δ_{m+1}}(A), δ_{m+1} < δ_m, starting
   from the N_{δ_m} − K remaining points of ∂Λ_{δ_m}(A).
The scheme allows one to use a variety of curve length criteria, possibly chosen
dynamically as PsDM proceeds. In Table 10, we show a single step of PsDM
together with the implementation of such a strategy, based on the variation
of the minimum distance between consecutive points on subsequent contours.
Steps 1 to 6 implement the point reduction and also compute the next minimum
distance, l_{m+1}, between consecutive points of ∂Λ_{δ_m}(A), while step 7 is the standard
sweep described in Table 2. First, the algorithm computes the distances d_k between
consecutive points on ∂Λ_{δ_m}(A), and then drops any
point z_k from the curve if it finds that d_k + d_{k+1} < 2 l_m, unless the previous
point z_{k-1} had already been dropped. In the algorithm presented in Table 10,
this test is implemented by means of two boolean variables per point and a boolean
recurrence specified in line 4.1. Our implementation keeps fixed the first and
last points on ∂Λ_{δ_m}(A), though this is easily modified. We also take advantage
of the fact that A is real so that we need to compute only one half of the curve.
Figure
9 illustrates the underlying idea. In the top curve, all points z_k inside a box
or a circle are candidates for dropping, because they satisfy
|z_{k-1} − z_k| + |z_k − z_{k+1}| < 2 l_m. Only the encircled points are
dropped, because they are the only ones for which the preceding point, z_{k-1}, was
not dropped. The remaining points are shown in the bottom curve, renumbered
to indicate their relative position within the curve.
(*Input*)  l_m, the minimum distance between consecutive points on ∂Λ_{δ_{m-1}}(A);
           points ~z_i ∈ ∂Λ_{δ_m}(A).
(*Output*) l_{m+1}, the minimum distance between consecutive points on ∂Λ_{δ_m}(A);
           points w_i ∈ ∂Λ_{δ_{m+1}}(A).
1. for each pair of consecutive points of ∂Λ_{δ_m}(A), compute the distance d_k
2. for each interior point z_k, mark it as a candidate for dropping if d_k + d_{k+1} < 2 l_m
3. keep the first and last points fixed
4. for each candidate point z_k
4.1    update the boolean recurrence recording whether z_{k-1} was dropped
4.2    if (z_{k-1} was not dropped)
4.3        drop z_k
5. N := the number of remaining points
6. l_{m+1} := the minimum distance between consecutive remaining points
7. Apply a single sweep of PsDM on the remaining points {~z_i ∈ ∂Λ_{δ_m}(A)}
   to compute {w_i}.

Table 10: Single sweep of PsDM with adaptive point reduction.
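A compact sketch of the point-reduction rule of Table 10, in Python and with our own naming, is given below; it implements only the dropping test described above (the paper's implementation also updates l_{m+1} and is written in Fortran):

    def reduce_points(z, l_min):
        # Drop z_k when d_k + d_{k+1} < 2*l_min, unless z_{k-1} was already
        # dropped; the first and last points are always kept.
        z = list(z)
        d = [abs(z[k] - z[k - 1]) for k in range(1, len(z))]  # d[k-1] = |z_k - z_{k-1}|
        keep = [True] * len(z)
        prev_dropped = False
        for k in range(1, len(z) - 1):
            if d[k - 1] + d[k] < 2 * l_min and not prev_dropped:
                keep[k] = False
                prev_dropped = True
            else:
                prev_dropped = False
        return [zk for zk, kp in zip(z, keep) if kp]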
We implemented PsDM with the point reduction policy described in Table 10
on our parallel platform. We specifically required the parallel version to
produce the same points that would have been produced, had the above policy
run serially. To achieve this, processors synchronize so that the sweep in step 7
of Table 10 starts only when all input data is available to all. Due to the low
complexity of this procedure relative to the remaining work, we decided to use
a simple approach in which steps 1 to 6 are executed by a single, master, processor
instead of a more complicated parallel reduction policy. In particular,
processors send back points to the master processor who applies point reduc-
tion. The new points are then ready for allocation to the processors and the
application of PsDM (step 7 of Table 10). The question arises then, how to allocate
these points. As before, we could use a queue or a static approach. Given
the success of the latter in the previous experiments, and its lack of overhead, we
experimented with the static approaches described earlier, assigning approximately
⌈N/P⌉ points to each processor, where N is the current number of points.
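The synchronization pattern just described can be sketched with mpi4py as follows; this is a hypothetical illustration (the actual code is Fortran/MPI), where reduce_points and block_partition stand for the helpers sketched earlier and sweep for one PsDM sweep over the local points.

    from mpi4py import MPI

    def synchronized_sweep(comm, my_points, l_min, reduce_points, block_partition, sweep):
        # Gather all points on the master, apply point reduction there,
        # then scatter the reduced points and run one sweep locally.
        rank, size = comm.Get_rank(), comm.Get_size()
        gathered = comm.gather(my_points, root=0)
        if rank == 0:
            all_points = [p for chunk in gathered for p in chunk]
            reduced = reduce_points(all_points, l_min)
            chunks = [[reduced[i] for i in block_partition(len(reduced), size, r)]
                      for r in range(size)]
        else:
            chunks = None
        my_new_points = comm.scatter(chunks, root=0)
        return sweep(my_new_points)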
We first applied this adaptive version of PsDM on kahan(100). This almost
halved the number of triplet evaluations, reducing them from 2760 to 1466.
Table
9 (3rd row) depicts how the number of points varied for selected values
of ε. The contours computed using this strategy are depicted in Figure 10 vs.
Figure
9: Adaptive point reduction: Inscribed points are candidates for dropping
but only the encircled ones drop successfully.
kahan gre 1107
static block static cyclic static block static cyclic
Table 11: Execution times in seconds and speedups (in parentheses) for parallel
PsDM with point reduction for matrices kahan(100) (left) and gre 1107 (right).
the same contours computed using PsDM but holding the number of points per
contour constant. It is clear that adaptation is very effective without visibly
affecting the quality of the output. Columns 2 and 3 of Table 11 depict the
execution times and speedups (in parentheses). Once again, static cyclic is
superior to block assignment. Furthermore, the corresponding speedups are
similar, though, naturally, not as high as PsDM without point reduction (cf.
Table
5).
We also performed the same experiment on matrix gre 1107 and show the
times and corresponding speedups in columns 4 and 5 of Table 11. We started
with 74 points on ∂Λ_{0.1}(A) and computed 20 curves as deep as ∂Λ_{0.001}(A),
where we concluded with a reduced number of points. The total number of triplet
evaluations was reduced to 1300, compared to the 1480 that would have been required if no
point reduction had been performed. In this case too, cyclic allocation performs
better than block. Furthermore, the achieved speedups are satisfactory, as they
are very close to those reported for the no drop policy in rows 7-9 of Table 8.
The above discussion shows that PsDM can be enhanced to adapt to the geo-
Figure 10: Pseudospectra contours of kahan(100) for selected values of log₁₀ ε.
Up: normal PsDM. Down: PsDM with point reduction.
metric features of the contours, in which case there is a corresponding reduction
in cost as we move inwards. It is clear that this idea can serve as a springboard
for the design of alternative adaptation strategies.
4.5 Capturing disconnected components
One documented weakness of path following methods is that they cannot readily
capture disconnected components of the pseudospectrum. One question is how
PsDM handles this difficulty. In this section we show that PsDM manages to
trace disconnected curves, as long as they lie inside the initial curve ∂Λ_ε(A).
In that respect, the performance is the same as that of GRID, where the level
curves traced are only those that lie within the discretized area. Consider
matrix grcar(50) from the Test Matrix Toolbox. We begin from 192 points
and proceed with a fixed step of 0.1 to compute the curves; thus
the innermost curve is ∂Λ_{10^{-6}}(A). For the smaller values of ε the pseudospectra
have disconnected components. Figure 11 demonstrates that PsDM manages to retrieve
Figure 11: Capturing disconnected components for grcar(50). Curves from
outer to inner correspond to decreasing values of ε.
this intricate structure. Of course this procedure does not return components
that were already outside the initial ∂Λ_ε(A) curve. If we want to ensure that all
components are captured, we can start with an initial curve that is large enough,
using techniques such as those presented in [4].
5 Conclusions
We presented PsDM, a new method for the computation of the pseudospectrum
of a matrix A, that combines the appealing characteristics of the traditional
method GRID and the versatility of path following methods. We saw that PsDM
automatically generates new curves starting, for instance, from one application
of PF. The method is such that the curves adapt to the geometric properties of
the pseudospectrum and is able to capture disconnected components. Experimental
results showed that the method achieves a significant reduction of the
number of necessary triplet evaluations vs. GRID even though, like GRID, it also
computes the pseudospectrum for several values of ε. Given the parallel nature
of PsDM, we implemented it using MPI. These experiments also revealed the advantages
of a simple heuristic designed to achieve load balance. An OpenMP
implementation of PsDM is currently underway and is expected to serve as a platform
for the investigation of a variety of alternative policies. It is worth noting
that PsDM computes successively, using path following but in the direction of
steepest descent, points defining the nested curves ∂Λ_ε(A). This idea holds great
promise for pseudospectra and eigenvalues as described in [3, 9, 10]. Overall,
we believe that the approach used in PsDM will be useful in the construction of
an adaptive algorithm for the computation of pseudospectra that will be based
on path following. We note that the codes used in this paper are available from
URL http://www.hpclab.ceid.upatras.gr/scgroup/pseudospectra.html.
Acknowledgments
We thank Bernard Philippe for his astute comments regarding the paper. We also
thank the referees for their valuable suggestions that helped improve the paper, as well
as our colleague, E. Kokiopoulou, for her comments and support.
References
Numerical Continuation Methods: An Introduction
Cobra: Parallel path following for computing the matrix pseudospectrum.
Parallel algorithms for the computation of pseudospectra.
Using the
An algorithm for computing the distance to uncontrollability.
The Test Matrix Toolbox for MATLAB (version 3.0).
Iterations on domains for computing the matrix (pseudo) spectrum.
Hermitian methods for computing eigenvalues.
Arpack User's Guide: Solution of Large-Scale Eigenvalue Problems With Implicitly Restarted Arnoldi Methods
Directional Newton methods in n variables.
SPARSKIT: A basic tool-kit for sparse matrix computations (version 2)
Computation of pseudospectra.
pseudospectra;parallel computation;newton's method;ARPACK